
How We Test Desktop PCs

The process of reviewing desktop computers at PCMag.com carries on core traditions that date back to the establishment of PC Labs in 1984: We compare each system to others in its category on the basis of price, features, design, and in-house performance tests.

To evaluate performance, we use a suite of software-based benchmark tests and real-world applications and games, carefully chosen to highlight the strengths and weaknesses in the tested PC’s mix of components. That evaluation ranges from the processor and the memory subsystem to the machine’s storage hardware and graphics silicon.

In some cases, we make use of standardized tests created by established benchmark developers; where needed, we’ve created our own. We also regularly evaluate new benchmark solutions as they hit the market and overhaul our testing procedures to ensure that our results accurately reflect the effects of the latest technologies.

(Pictured: the NZXT BLD Starter Plus PC)

Our desktop PC testing breaks down into two rough classes, productivity testing and graphics testing, with some supplemental tests for specialized kinds of systems. Here’s a breakdown of each.


Productivity Testing

PCMark 10

Our first task is evaluating a computer’s everyday productivity performance using UL’s PCMark 10 benchmark, which simulates real-world productivity and content-creation workflows. (In 2014, UL, or Underwriters Labs, acquired Futuremark, the maker of the long-running PCMark and 3DMark benchmarks.)

We use PCMark 10 to assess overall performance for office-centric tasks such as word processing, spreadsheet jockeying, web browsing, and videoconferencing. The test generates a proprietary numeric score; higher numbers are better, and the scores are meaningful primarily when compared to one another.


We run the main test suite supplied with the software, not the Express or Extended version. Note that, all else being equal, a higher screen resolution will suppress a system’s performance on PCMark 10. (The more pixels to push, the more resources required.) As a result, we run this test at 1,920 by 1,080 pixels (1080p) on all desktop PCs that lack a built-in screen. If the system is an all-in-one (AIO) desktop with a built-in screen, we run the test at the screen’s native resolution, which may be higher or lower than 1080p.
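To make that rule concrete, here is a minimal sketch in Python; the function and parameter names are our own invention, used purely for illustration, and are not part of PCMark 10:

```python
# A minimal sketch of the resolution rule above. Names are ours,
# for illustration only; they are not part of PCMark 10 itself.

def pcmark_test_resolution(is_all_in_one, native_resolution):
    """Return the (width, height) we test a desktop at."""
    if is_all_in_one:
        # AIOs run at their built-in panel's native resolution,
        # whether that is above or below 1080p.
        return native_resolution
    # Screenless desktops (towers, minis) are normalized to 1080p
    # so that scores stay comparable.
    return (1920, 1080)

print(pcmark_test_resolution(False, (2560, 1440)))  # -> (1920, 1080)
print(pcmark_test_resolution(True, (3840, 2160)))   # -> (3840, 2160)
```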


PCMark 8 Storage

We then assess the speed of the PC’s main boot drive using another UL benchmark, PCMark 8. This test suite has a dedicated Storage subtest that reports a proprietary numeric score.


As with PCMark 10, higher numbers are better. The results from systems with cutting-edge solid-state drives (SSDs) tend to cluster together closely on this test.


Cinebench R15

Next in line is Maxon’s CPU-crunching Cinebench R15, which we run at the All Cores setting. Derived from Maxon’s Cinema 4D modeling and rendering software, this benchmark is a pure measure of CPU horsepower, fully threaded to make use of all available processor cores and threads. Think of it as an all-out processor deadlift.


Cinebench stresses the CPU rather than the GPU to render a complex image. The result is a proprietary score indicating a PC’s suitability for processor-intensive workloads when paired with fully threaded software.


Handbrake 1.1.1

Cinebench is often a good predictor of our Handbrake video-transcoding trial, another tough, threaded workout that’s highly CPU-dependent and scales well as you add cores and threads.


In this test, we put a stopwatch on test systems as they transcode a standard 12-minute clip of 4K video (the open-source Blender demo short movie Tears of Steel) to a 1080p MP4 file. We use the Fast 1080p30 preset in version 1.1.1 of the Handbrake app for this timed test. Lower results (i.e., faster times) are better.
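For the curious, here is a hedged sketch of what a stopwatch run like ours looks like using HandBrake’s command-line front end. The file names are placeholders, and the exact invocation on our test bench may differ; “Fast 1080p30” is a real HandBrake preset:

```python
# Hedged sketch of a timed HandBrake transcode. File paths are
# placeholders; the real PCMag bench procedure may differ in detail.
import subprocess
import time

SOURCE = "tears_of_steel_4k.mov"     # placeholder path to the 4K test clip
OUTPUT = "tears_of_steel_1080p.mp4"

start = time.perf_counter()
subprocess.run(
    ["HandBrakeCLI", "-i", SOURCE, "-o", OUTPUT, "--preset", "Fast 1080p30"],
    check=True,
)
elapsed = time.perf_counter() - start
print(f"Transcode time: {elapsed:.1f} seconds (lower is better)")
```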


Adobe Photoshop CC Photo Editing Test

Our final productivity test is a custom Adobe Photoshop image-editing benchmark. Using an early 2018 release of the Creative Cloud version of Photoshop, we apply a series of complex filters and effects (Dust, Watercolor, Stained Glass, Mosaic Tiles, Extrude, and multiple blur effects) to a PCMag-standard JPEG image. (We use a script executed via an Actions file of our own making.) We time each operation and, at the end, add up the total execution time. As with Handbrake, lower times are better here.


The Photoshop test stresses the CPU, storage subsystem, and RAM, but it can also take advantage of most GPUs to speed up the process of applying filters, so systems with powerful graphics cards may see a boost there.
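The timing logic itself is simple: run each operation, clock it, and total the results. Here is an illustrative Python harness in that spirit; the operations below are stand-ins, not the actual Photoshop filters, which run inside Photoshop via our Actions file:

```python
# Illustrative timing harness in the spirit of our Photoshop test:
# time each operation individually, then report the summed total.
# The operations are stand-ins for the real Photoshop filters.
import time

def run_timed(operations):
    total = 0.0
    for name, op in operations:
        start = time.perf_counter()
        op()
        elapsed = time.perf_counter() - start
        total += elapsed
        print(f"{name}: {elapsed:.2f}s")
    return total

ops = [
    ("Watercolor (stand-in)", lambda: sum(i * i for i in range(10**6))),
    ("Mosaic Tiles (stand-in)", lambda: sorted(range(10**6), reverse=True)),
]
print(f"Total: {run_timed(ops):.2f}s (lower is better)")
```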


Graphics Performance

Judging graphics performance requires using tests that are challenging to every system yet yield meaningful comparisons across the field. We use some benchmarks that report proprietary scores and others that measure frames per second (fps), the frequency at which the graphics hardware renders frames in a sequence, which translates to how smooth the scene looks in motion.
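The arithmetic behind fps figures is straightforward: divide 1,000 milliseconds by the frame rate to get the time budget the hardware has to draw each frame. A quick illustration:

```python
# Frame rate and per-frame render time are reciprocals: at 60fps the
# hardware has roughly 16.7ms to draw each frame; at 30fps, about 33.3ms.
for fps in (30, 60, 144):
    print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")
```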


Synthetic Tests: 3DMark and Superposition

The first graphics test we employ is UL’s 3DMark. The 3DMark suite comprises a host of different subtests that measure relative graphics muscle by rendering sequences of highly detailed, gaming-style 3D graphics. Many of these tests emphasize particles and lighting.

We run two different 3DMark subtests, Sky Diver and Fire Strike, which are suited to different types of systems. Both are DirectX 11 benchmarks, but Sky Diver is suited to laptops and midrange PCs, while Fire Strike is more demanding and made for high-end PCs to strut their stuff. The results are proprietary scores.


Also in our mix is another synthetic graphics test, this time from Unigine. Like 3DMark, the Superposition test renders and pans through a detailed 3D scene and measures how the system copes. In this case, the rendering happens in the company’s eponymous Unigine engine, offering a different 3D workload scenario than 3DMark and a second opinion on the machine’s graphical prowess.


We present two Superposition results, run at the 720p Low and 1080p High presets. The scores are reported in frames per second, higher frame rates being better. For lower-end PCs, maintaining at least 30fps is the realistic target, while more powerful computers should ideally attain at least 60fps at the test resolution.


Real-World Gaming Tests

The synthetic tests above are helpful for measuring general 3D graphics aptitude, but it’s hard to beat full retail video games for judging gaming performance. Far Cry 5 and Rise of the Tomb Raider are both modern, high-fidelity titles with built-in benchmarks that illustrate how a system handles real-world video games at various settings.


These games are run at both the moderate and the maximum graphics-quality presets in each game’s benchmarking utility. (Those presets are Normal and Ultra for Far Cry 5, Medium and Very High for Rise of the Tomb Raider.) We test at 1080p by default and may also test at higher resolutions, such as 3,840 by 2,160 pixels (4K), if the system configuration warrants it, as with SLI or CrossFire multiple-video-card setups. (If the system is an AIO PC whose native display resolution is higher or lower than 1080p, we also test at the native resolution.)


These results are also provided in frames per second. Far Cry 5 is a DirectX 11-based game, while Rise of the Tomb Raider can be flipped to DirectX 12 mode, which we do for this benchmark.


Special Cases: macOS, Chrome OS, and Workstation Systems

We don’t run all of the above tests on every computer. We run Far Cry 5 and Rise of the Tomb Raider only on systems specifically designed for gaming, equipped with a dedicated graphics card or cards. And we don’t use PCMark, 3DMark, or Superposition to test Apple machines, since these tests have no macOS versions. To evaluate some specialized subsets of desktops, such as workstations and Chrome OS machines, we supplement our standard tests.

Chrome OS

Chrome OS is rare in desktops (“Chromeboxes”) nowadays, and none of the above tests is compatible with Chrome OS. We therefore run the benchmarks CrXPRT and WebXPRT, from Principled Technologies, to help us make comparisons among Chrome machines. These are single-click tests without settings to tweak, and they report back proprietary scores that are meaningful only relative to one another.

Desktop Workstations

With workstation desktops, we run all of the above tests and supplement them with a few workstation-specific measures. These include the ray-tracing renderer POV-Ray and the SPECviewperf suite; for the latter, we load three “viewsets” for the apps Creo, Maya, and SolidWorks to gauge how the workstation handles manipulating files in these three seminal workstation programs. The POV-Ray results are reported as time to completion of the test task, and the SPECviewperf results are reported in frames per second.
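As a rough illustration of the POV-Ray measurement, here is a hedged sketch of a timed run. The -benchmark switch invokes the standard benchmark scene in Unix builds of POV-Ray 3.7; the executable name and the exact invocation on our own bench are assumptions:

```python
# Hedged sketch of a timed POV-Ray benchmark render. The "-benchmark"
# switch exists in Unix builds of POV-Ray 3.7; the executable name and
# exact invocation on the PCMag test bench are assumptions.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["povray", "-benchmark"], check=True)
elapsed = time.perf_counter() - start
print(f"POV-Ray benchmark: {elapsed:.1f} seconds to completion (lower is better)")
```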


