Off-topic post: My new computer

While I have been working on my projects, including Innocence Seekers, over the last few weeks, this post will cover something different. Over the past few months I have been gathering parts to build a replacement for my aging machine, and in the past few days the last of the parts I needed arrived.

The main reason I haven’t been focusing on game development in the past few months is that my computer is showing its age. It was built in 2015, while I still attended university, so I didn’t have much in my budget for a decent computer. It had an AMD A8-7650K APU (a quad-core Kaveri chip on 28 nm), paired with a Radeon R7 240 with 2 GB of DDR3 (with the intention of using dual graphics, although I ended up not using it much), and 8 GB of DDR3 1600 MHz CL9 memory. Over the past year or two, I’ve made a few upgrades to this system, first replacing the original graphics card with an RX 560 2 GB, then an RX 580 4 GB (as I had some problems with the 560), and now a Vega 64 (which will be carried over to my new computer). I’ve also increased the memory to 32 GB (still 1600 MHz, although this configuration is CL10).

Needless to say, in 2019 this system is inadequate. My CPU upgrade path was hindered from the beginning (the FM2+ socket isn’t well known for high-performance processors), which was not helped by the fact that, when I purchased this machine, I had some grievances with Intel (namely, I was a bit angry that not all Intel processors supported hardware virtualisation, and the Core 2 Duo laptop I had before then did not support it). While I have upgraded the graphics card, it should be obvious that the processor is the bottleneck in this configuration. The processor is still viable for basic web browsing, e-mail and light gaming, but I would not use it in any new computer. If I were to take out the graphics card and half of the memory, I would probably only ask about $350 for the whole system, with the majority of the cost lying with the SSD (500 GB) and HDD (6 TB). And if I were to swap the motherboard to fit a new CPU, I’d need a new Windows license (the original was OEM), at which point I might as well just build a new computer.

As such, I have slowly gathered the parts for my new computer. First, I focused on what I could test immediately: the new SSD, a case, fans, and a power supply (and in the meantime, I got myself a Blu-ray writer, just in case 😉). By the end of May I had bought the memory for the new build, leaving only a CPU and a motherboard. At that point, I decided to wait until third-generation Ryzen was released.

And that brings us to now. I ordered the final parts literally within seconds of launch on Sunday, and on Tuesday they arrived. After several hours of hard work (one dead fan controller, not that I needed it anyway, and some time set aside for anger management, thanks to a screw repeatedly falling into the I/O shroud, forcing me to take out the motherboard and tip it until the screw fell out), I managed to complete my system. I’ve conducted tests on and off for the past few days, and I expect the system to be ready by Friday.

The list of parts for my new system is as follows:

  • CPU: AMD Ryzen 7 3700X (8-core, 16-thread, 3.6 GHz base, 4.4 GHz boost, AM4 socket, Matisse)
  • Motherboard: ASUS ROG Crosshair VIII Hero (Wi-Fi) (AM4 socket, X570 chipset)
  • Memory: 2 × 16 GB DDR4 3200 MHz CL16 (Team Group T-Force Delta RGB, total 32 GB)
  • Case: Fractal Design Define R6 TG USB-C Edition (black, in standard layout)
  • Power supply: Corsair HX1000 (fully modular, in multi-rail configuration)
  • Graphics card: AMD Radeon RX Vega 64 8 GB (ASUS ROG Strix)
  • SSD: 1 × WD Black SN750 500 GB (NVMe, PCIe 3.0 ×4)
  • HDD: 1 × WD Blue 6 TB

The following are pictures of my motherboard and CPU:

Motherboard picture

CPU in motherboard close-up

That will be all for now.

Edit (2019-07-15): Oh, and if you’re wondering why I’ve gone for an all-AMD system:

  • Right now, Intel only has a slight edge in gaming. In most other workloads, third-generation Ryzen handily beats Intel’s offerings at similar price points. And I generally consider gaming a “secondary” purpose for my machines, and take a dim view of people who buy or build systems only for gaming. Although, just to be clear, I don’t hate Intel right now (I’ve largely forgiven them for their past transgressions; socket changes and forcing users to buy new motherboards notwithstanding 😄).
  • Right now, I hate Nvidia. And not because their newest generation of cards is, in my opinion, overpriced. Having been a Linux user for quite a while, I’m more than happy to support companies that actually contribute a significant amount to Linux driver development, and by extension the open source community, like Intel and AMD. If you are wondering why Nvidia got this from Linus Torvalds, it’s because Nvidia outright refuses to contribute to open source driver development within the Linux kernel tree, instead largely focusing on their own proprietary driver, which isn’t guaranteed to run on the latest version of Linux (Edit [2019-07-16]: read this article for an example; to clarify, the Linux kernel developers do not care at all whether their patches break out-of-tree drivers, such as the proprietary Nvidia driver). In addition, Nvidia is more than happy to screw over the competition by paying developers to optimise for their cards (to the detriment of performance on AMD cards), and at one point even tried to get AIBs to align their already-established gaming brands exclusively with Nvidia cards. Finally, there is the vendor lock-in, where organisations are stuck with Nvidia simply because the software they need uses only Nvidia’s proprietary SDKs, rather than an open standard such as OpenCL.

Edit (2019-07-19): Benchmark scores:

  • Cinebench R15: 2,139 (204 single, ×10.49 multi)
  • Cinebench R20: 4,726 (494 single, ×9.57 multi)
  • 3DMark Time Spy: 7,575 (7,251 graphics [50.34, 39.46 FPS], 10,153 CPU [34.11 FPS])
  • 3DMark Fire Strike: 19,776 (22,822 graphics [111.84, 89.17 FPS], 23,015 physics [73.06 FPS], 8,941 combined [41.59 FPS])
  • PCMark 10: 6,482
    • Essentials: 9,343
      • App startup: 8,525 (Writer: 1.249/9.020 s, GIMP: 2.077/3.741 s, Chromium: 0.490/1.570 s, Firefox: 0.975/2.146 s)
      • Video conferencing: 10,507 (playback CPU: 30.02/29.93 FPS, face detect CPU: 90.75/22.90 FPS, playback OpenCL: 29.94/29.96 FPS, encode OpenCL: 29.69/25.76 FPS, face detect OpenCL: 287.95/124.87 FPS)
      • Web browsing: 9,106 (SM load: 0.107 s, SM update: 0.102 s, shop view: 60.00 FPS, shop load 3D: 1.260 s, shop animate 3D: 300.00 FPS, map info update: 0.095 s, map zoom: 0.022 s, video: [1080p H.264: 30.00 FPS, 4K H.264: 30.00 FPS, 1080p VP9: 30.00 FPS, 4K VP9: 30.00 FPS])
    • Productivity: 8,536
      • Spreadsheets: 11,509 (open: 1.254 s, copy and compute: 1.677/2.749 s, copy plain: 2.301 s, copy formulas: 0.907 s, edit: 0.835 s, save: 1.408 s, recalculate: [building design CPU: 0.471 s, stock history CPU: 0.839 s, Monte Carlo OpenCL: 1.059 s, energy market OpenCL: 0.429 s])
      • Writing: 6,331 (load: 1.637 s, save: 0.969 s, add pictures: 0.565 s, copy and paste: 0.120 s, cut and paste: 0.316 s)
    • Digital content creation: 9,267
      • Photo editing: 13,625 (colour adjust: 2.286 s, unsharp mask: 2.130/1.426 s, noise add: 0.230 s, Gaussian blur: 0.578 s, local contrast: 1.761 s, wavelet denoise: 0.626 s, thumbnail load: 0.139 s, save PNG: 13.712 s, save JPEG: 1.696 s, batch transform: 4.090 s)
      • Rendering and visualisation: 12,447 (graphics: 261.07 FPS, ray tracing: 20.995 s)
      • Video editing: 4,693 (on the go: 24.62 FPS, sharpening OpenCL: 282.00 FPS, sharpening CPU: 53.00 FPS, deshaking CPU: 19.00 FPS, deshaking OpenCL: 96.00 FPS)
  • Blender BMW (CPU): 4:02.10
  • Blender BMW (GPU): 1:48.66
  • Blender Barcelona (CPU): 12:23.84
  • Blender Barcelona (GPU): 5:25.03
  • Blender Classroom (CPU): 12:53.04
  • Blender Gooseberry (CPU): 31:20.00

Other notes about my system:

  • I did not go for Samsung B-die memory. A decent B-die kit can go for as much as 80 per cent more than an SK Hynix CJR kit of the same capacity and speed, and in any case, none of my local retailers had a 2 × 16 GB kit with B-die (they mainly stocked 2 × 8 GB, and I wanted 32 GB). There are now 2 × 32 GB and 4 × 32 GB kits on the market, and I will consider them if they become available at 3200 MHz or higher.
  • I ultimately plan to upgrade to a Ryzen 9 3950X in October or November.
  • My decision to go with an X570 motherboard has more to do with future-proofing than anything else. Sometime next year, I plan to upgrade my graphics card, and possibly my SSD.

Finally, a word of warning for those intending to dive into the new processors:

  • The higher-end models may or may not be in stock at your local retailer; the launch has been hampered by availability issues from day one (and even twelve days after launch, some people have yet to receive their processors).
  • If you’re planning to pair one of these new processors with an older-generation motherboard (X470, B450, X370, B350; of the new processors, only the Picasso APUs officially support A320, although some A320 boards unofficially support Matisse), you need the latest BIOS update, specifically one with an AGESA version containing “Combo”. Whether your motherboard manufacturer has published it is another matter entirely; check their website for details.
  • As of now, you will not be able to reach the advertised boost clocks. My testing showed the multiplier only reaching ×43.75 (corresponding to a frequency of 4,367 MHz given a base clock of 99.8 MHz), instead of the maximum of ×44, and you need AGESA Combo-1.0.0.3ab or later to get the full performance from your processor.
  • If you play Destiny 2, or run a recent version of Linux with a recent version of systemd, you will run into breaking problems: the game will not load, and Linux systems running systemd will spew a wall of “failed” messages and fail to boot properly. While some Linux distributions have implemented a workaround (which was actually added in May, to solve a related Ryzen problem with waking from standby), the only fix is to wait until your motherboard manufacturer releases a BIOS update implementing AGESA Combo-1.0.0.3aba or later (and to add insult to injury, that AGESA version had an unrelated PCIe regression, causing motherboard manufacturers to delay the release of their updated BIOSes). A quick way to check whether your system is affected is sketched below.
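
As widely reported at the time, both failures trace back to the RDRAND instruction, which on affected early Zen 2 BIOSes claims success yet returns the same all-ones value on every call (systemd draws on RDRAND at boot, and Destiny 2 also calls it). Below is a minimal sketch of a check for that behaviour, assuming an x86-64 GCC or Clang toolchain (compile with -mrdrnd):

```c
/* Minimal RDRAND sanity check for the Zen 2 launch bug described above.
 * On affected BIOSes, RDRAND reports success (carry flag set) yet returns
 * 0xFFFFFFFFFFFFFFFF every time. Build: cc -mrdrnd rdrand_check.c */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    unsigned long long value;
    int all_ones = 0;
    for (int i = 0; i < 8; i++) {
        if (!_rdrand64_step(&value)) {  /* 0 = hardware reported failure */
            puts("RDRAND reported failure");
            return 1;
        }
        printf("RDRAND -> 0x%016llx\n", value);
        if (value == ~0ULL)
            all_ones++;
    }
    if (all_ones == 8)
        puts("Every call returned all ones: you are hit by the bug; update your BIOS.");
    else
        puts("RDRAND looks sane.");
    return 0;
}
```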

If you could not be bothered to read the above (yes, I know it’s long), then you might as well wait.

P.S. I’ve noticed a number of people, before launch, complaining that the ASUS ROG Crosshair VIII Impact is Mini-DTX (which is basically a two-expansion-slot version of Mini-ITX), and that it won’t fit in their cases. As it turns out, it would not have fully supported such cases anyway, even if it were Mini-ITX. One limitation to keep in mind is that the PCIe riser cables available as of writing cannot be used with any X570 motherboard without BIOS tweaks: the board will attempt to negotiate a Gen 4 link, and the (passive) riser cables, specced only for Gen 3, immediately run into signal integrity issues at Gen 4 speeds. The end result is that whatever is connected to the other end, regardless of whether it supports Gen 4, will either not be detected or run into problems during POST.
In comparison, the C8I will easily fit into a Mini-ITX case that is designed with graphics cards in mind and does not rely on riser cables to connect the graphics card, simply because the size of modern graphics cards requires that such cases support two expansion slots.
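
If you do end up behind a riser and want to see what the link actually trained at, here is a small sketch for Linux: it reads the current_link_speed attribute the kernel exposes in sysfs (“8.0 GT/s” corresponds to Gen 3, “16.0 GT/s” to Gen 4):

```c
/* Print each PCIe device's negotiated link speed from sysfs (Linux only),
 * useful for confirming whether a link behind a riser trained at Gen 3. */
#include <glob.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    glob_t g;
    /* current_link_speed exists for devices with a PCIe capability. */
    if (glob("/sys/bus/pci/devices/*/current_link_speed", 0, NULL, &g) != 0)
        return 1;
    for (size_t i = 0; i < g.gl_pathc; i++) {
        FILE *f = fopen(g.gl_pathv[i], "r");
        if (!f)
            continue;
        char speed[64] = "";
        if (fgets(speed, sizeof speed, f))
            speed[strcspn(speed, "\n")] = '\0';
        fclose(f);
        printf("%s -> %s\n", g.gl_pathv[i], speed); /* e.g. "8.0 GT/s" */
    }
    globfree(&g);
    return 0;
}
```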

Edit (2019-07-26): Recently, Userbenchmark changed their overall CPU ranking algorithm to favour single-core performance, at a time when even games are moving towards optimisation for six or eight cores. Personally, I never use that site, and highly recommend that people not use it (for gaming benchmarks, I typically recommend 3DMark), but the change predictably drew the ire of AMD customers, as it favoured Intel chips (Intel customers are also angry, but that is because the change also ranks lower-tier chips above higher-tier ones).
Some would argue that older games benefit from single-core performance, but older games were also designed for slower chips, and will easily hit 240+ FPS at 1080p on modern processors (provided there is no GPU bottleneck). Nowadays, the only games that are single-core-optimised are a few indie games, but “indie” and “CPU-intensive” don’t really go together in my mind. And even when a game is single-core-optimised, the fact is that lots of people do other things at the same time as gaming (like streaming, or watching videos). I’ve even heard of antivirus programs using up a thread or two, causing in-game performance to drop significantly on dual-core processors. (On a side note, I heard that StarCraft 2 lost popularity precisely because it was single-core-optimised, but I can’t verify this.)
The change also makes Userbenchmark’s overall rankings utterly useless for any non-gaming workload beyond basic desktop use. While games have traditionally lagged behind in adopting multi-threaded programming (which is likely to change with the PS5 and the upcoming new Xbox, both of which feature eight-core, sixteen-thread processors), many productivity workloads benefit immensely from multiple cores. In particular, software development relies on compiling source code, which, given enough source files in a program, is an “embarrassingly parallel” operation, meaning it scales almost perfectly with the number of cores (note that compilers are typically single-threaded, but most build systems run multiple compiler instances in parallel; see the sketch below). The number of virtual machines one can run also scales with the number of cores, although in that case RAM is more of a limitation.
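
A minimal sketch of what a build system’s -j flag boils down to, assuming a POSIX system with a cc compiler on the PATH (the file names are hypothetical):

```c
/* Toy version of `make -j`: compile independent translation units in
 * parallel, one child process per file, then wait for them all. The jobs
 * share no state, which is what makes the workload embarrassingly parallel. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *units[] = { "a.c", "b.c", "c.c", "d.c" }; /* hypothetical files */
    int n = sizeof units / sizeof units[0];
    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* Child: compile one unit, then exit. */
            execlp("cc", "cc", "-c", units[i], (char *)NULL);
            perror("execlp");   /* only reached if exec failed */
            _exit(127);
        }
    }
    /* Parent: reap all children. With enough free cores, wall time is
     * roughly the longest single job, not the sum of all jobs. */
    while (wait(NULL) > 0)
        ;
    return 0;
}
```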
In any case, if you are in the market for a new processor, make sure you don’t rely on the results of a single benchmark when choosing (Cinebench, for example, doesn’t reflect gaming performance). Take a look at multiple benchmarks, and read and/or watch as many reviews as you can, to get the entire picture. Know your use case, and determine your budget, so you can choose the best value processor for your desired purpose.

On an unrelated note, I managed to improve my Cinebench R20 multicore score to 4,800. There’s still some performance left on the table (I’ve seen people get 4,900+ with the 3700X), so I’ll see what I can do.

Edit (2019-07-29): Regarding the Destiny 2 issue, there is a beta chipset driver available, which works around the issue. The SHA256 checksum of the file is below:
ec7c6e03245808c15009675174cd7d6029258c433dd4433e1c0d300a70afef9e *AMD_Chipset_Drivers_v1.07.26.0551.zip
An announcement regarding the issues is expected to be published on Reddit tomorrow.

Edit (2019-07-31): Overclocked my memory to 3600 MHz (CL16); hopefully it’s stable. I won’t bother with overclocking the processor; with the third-generation Ryzen processors, it’s not worth it (unless you’re doing something with liquid nitrogen or whatever, in which case it’s still not worth it, because who has a ton of liquid nitrogen just lying around?). What I do plan to do is replace the stock cooler: the Wraith Prism is more than adequate for the 65 W TDP of the Ryzen 7 3700X, but the high-pitched noise when the fan ramps up annoys me a little (and since I intend to get a 3950X, I’ll need a beefy cooler anyway). As I don’t watercool my computers (I don’t like water in computers), I’m looking at a Noctua cooler (maybe the NH-D15).
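
For a rough sense of what that overclock buys: first-word latency is the CAS latency divided by the memory clock, so 3600 CL16 actually has lower latency than 3200 CL16, not just more bandwidth. A quick sketch of the arithmetic:

```c
/* First-word latency arithmetic for the memory overclock above.
 * DDR performs two transfers per clock, so the clock is half the
 * transfer rate; latency in ns = CAS cycles / clock in MHz * 1000. */
#include <stdio.h>

static double first_word_ns(double transfer_rate_mts, double cas_cycles) {
    double clock_mhz = transfer_rate_mts / 2.0;
    return cas_cycles / clock_mhz * 1000.0;
}

int main(void) {
    printf("3200 CL16: %.2f ns\n", first_word_ns(3200, 16)); /* 10.00 ns */
    printf("3600 CL16: %.2f ns\n", first_word_ns(3600, 16)); /*  8.89 ns */
    return 0;
}
```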

Also, I think everyone needs to read this. And I mean EVERYONE, no exceptions (if you don’t agree, say so in the comments for the linked post, but be civil about it; there are some rebuttals already there). There are a lot of myths, misconceptions and FUD concerning the voltages and temperatures the new Ryzen chips reach, most propagated by those who cannot understand that these chips work differently to any other CPU in terms of boost, voltage and temperature (if anything, they resemble GPU behaviour). Succinctly, Zen 2 is a high-temperature, high-voltage architecture by design, and 1.5 volts is normal given enough thermal headroom.

  • The issue of fans ramping up for only a few seconds at a time, every now and then, is most likely a temperature monitoring issue. I don’t know how the motherboard manufacturers interpret the temperature sensor readings, but much monitoring software reports only the maximum value across the temperature sensors. If the BIOS does the same, it may be ramping up fans unnecessarily on the basis of erroneous information. A possible workaround is to set a custom fan curve such that the fans do not ramp up until the CPU reaches a certain temperature, but this may instead lead to high (70 °C!) idle temperatures (a sketch of a smoothed fan curve follows this list).
  • Regarding the apparent failure to reach advertised boost speeds, I do not know if this is an inherent issue with the CPUs, BIOS issues, or a limitation of monitoring software. It is indeed possible that the chips reach the maximum boost frequency for a single core, but only for periods lasting microseconds, far too short to be reliably measured by monitoring software.
  • The 1.325 volt value thrown around… I don’t know what to make of it. No official source mentioned it, and the value came from a single YouTube video. While it may indeed be a safe limit for manual overclocking, the CPUs should, if running stock, be intelligent enough to regulate their own voltage without frying themselves, and if the CPU says a core can run at 1.5 volts, then it’s fine to run at 1.5 volts. That said, don’t manually set a voltage above 1.325 (at stock, the CPUs will undervolt slightly from the 1.4 to 1.5 volt peak if they detect high current draw), unless you’re doing extreme overclocking, in which case killing the CPU is a risk I assume you’re willing to take (I know some extreme overclockers have gone through several CPUs and motherboards in a single session, all for the sake of prize money that may barely recoup their losses).
  • Finally, the “idle” state of a computer is not true idle. True idle is when the CPU is not executing any instructions at all, which is an impossible ideal for any normally functioning general-purpose computer in the S0 state (S3 is standby, S4 is hibernate, S5 is soft-off). Even if you’re at the Windows desktop with no programs open, various background processes (such as your antivirus, or Windows Update) may still be running. Even if you’re booted into Linux or *BSD in single-user mode (in which case the only process running, if you’re at the shell, is /bin/sh, in other words the shell), other hardware can send interrupts and cause the CPU to execute instructions to handle them (as a matter of fact, if your cursor is blinking, the computer is not in true idle). True idle can only be achieved by putting your computer into a sleep state (i.e. S1 or lower), or by causing it to hang outright (i.e. BSOD or kernel panic). The Pentium F00F bug is also a good way to achieve true idle 😄.
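
On the fan behaviour in the first bullet above: the spikes can also be tamed in software by smoothing the sensor reading before mapping it to a fan duty cycle, rather than raising the whole curve. A toy sketch of the idea (the temperatures, curve points and smoothing factor are all made up for illustration, not taken from any real BIOS):

```c
/* Toy fan-curve logic: exponentially smooth the temperature reading so a
 * one-second spike cannot slam the fans to full speed, then map the
 * smoothed value onto a simple two-point curve. */
#include <stdio.h>

static double smoothed = 40.0;              /* running estimate, in °C */

static double smooth(double raw) {
    const double alpha = 0.1;               /* low alpha = slow response */
    smoothed += alpha * (raw - smoothed);
    return smoothed;
}

static int fan_duty(double temp_c) {        /* returns duty cycle in % */
    if (temp_c <= 50.0) return 30;          /* quiet floor */
    if (temp_c >= 80.0) return 100;         /* full tilt */
    return 30 + (int)((temp_c - 50.0) / 30.0 * 70.0);  /* linear ramp */
}

int main(void) {
    /* A brief load spike: the raw sensor jumps to 75 °C for three samples,
     * but the smoothed value barely moves, so the fans stay quiet. */
    double raw[] = { 42, 43, 75, 75, 75, 44, 43, 42 };
    for (int i = 0; i < 8; i++)
        printf("raw %5.1f °C -> smoothed %5.1f °C -> fan %3d%%\n",
               raw[i], smooth(raw[i]), fan_duty(smoothed));
    return 0;
}
```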

Edit (2019-08-02): While installing my new cooler, I finally found the Q/P (quiet/performance BIOS) switch on my graphics card. I switched it to P, and should see some improvements.

On the other hand, given the height of my memory, the second fan barely fits in the case, just as the graphics card barely fits in the standard layout. I knew I was cutting it close: I had only 20 millimetres of headroom inside the case beyond the minimum 165 millimetres required by the NH-D15, and the memory modules themselves are roughly five centimetres tall (the second fan, at its default position, only clears memory modules up to 32 millimetres high). The clearance arithmetic is sketched below.
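
For anyone checking the fit of a similar combination, the arithmetic goes roughly like this (the figures come from the paragraph above; the module height is approximate):

```c
/* Rough cooler/memory clearance check for the combination above.
 * Raising the front fan over tall modules adds the excess module height
 * to the effective cooler height. All figures in millimetres. */
#include <stdio.h>

int main(void) {
    double available   = 165.0 + 20.0; /* cooler clearance in the case  */
    double cooler      = 165.0;        /* NH-D15, fans at stock height  */
    double ram         = 50.0;         /* tall RGB modules, approximate */
    double fan_ram_max = 32.0;         /* max module height under fan   */

    double raise     = ram - fan_ram_max;   /* lift needed for the fan */
    double effective = cooler + raise;

    printf("fan raised by %.0f mm -> %.0f mm used of %.0f mm available (%s)\n",
           raise, effective, available,
           effective <= available ? "fits, barely" : "does not fit");
    return 0;
}
```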

Edit (2019-08-30): While I would have commented on Intel’s so-called “real world” benchmark comparisons between their processors and AMD’s when they were published, I was on a hiatus for personal reasons. However, I’ll throw in my two cents here: do NOT trust first-party benchmark results. While Intel claims that their Core i7-9700K and Core i9-9900K beat the Ryzen 9 3900X in (I quote) “real world” benchmarking, the reality is that this does not agree at all with the results obtained by independent reviewers, even when one takes out the rendering benchmarks (e.g. Cinebench, Blender, etc.). From trawling through Reddit, it turns out that the benchmark Intel uses, SYSmark 2018, is even more biased towards Intel than Cinebench is towards AMD (and apparently, this has been the case for SYSmark since 2002).
This article (in German; Google Translate to English here; short English version on Reddit) has more information. (Edit 2019-09-06: See this video for more details)

On a slightly related note, I’ve noticed that some software libraries intentionally take the final fallback pathway (i.e. the slowest one) whenever they detect a non-Intel processor, instead of actually determining which features the CPU supports (PSA: do not use Intel’s math library; it does exactly this). This is also the reason I do not recommend Intel’s C/C++ compiler; I’m not sure whether current versions still do it, but programs built with that compiler have been known to fall back to SSE2 (the minimum required by 64-bit x86) whenever they detect a non-Intel processor, even if said processor supports SSE4 or AVX. The right way to dispatch is sketched below.
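
A minimal sketch of feature-based dispatch, using the built-ins available in GCC and Clang; the point is to ask the CPU what it supports rather than checking the vendor string:

```c
/* Feature-based dispatch: select a code path from what the CPU actually
 * supports, so an AMD chip with AVX2 gets the AVX2 path just like an
 * Intel one. Uses GCC/Clang built-ins; no vendor-string check anywhere. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();   /* populate the feature flags */

    if (__builtin_cpu_supports("avx2"))
        puts("dispatch: AVX2 path");
    else if (__builtin_cpu_supports("sse4.2"))
        puts("dispatch: SSE4.2 path");
    else
        puts("dispatch: SSE2 baseline (the only case where this is correct)");
    return 0;
}
```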

