Previously I've looked at how various StarCraft II graphic presets and resolutions perform, where the bottlenecks are, and how it all fits together. What I didn't get a chance to look at was how DRAM affects StarCraft II performance. In this post I will look at several presets – stock, XMP profiles, and profiles obtained through DRAM Calculator for Ryzen (1.6.2).
| Profile | Memory speed [MT/s] | CL-RCD-RP-RAS | Command Rate |
| --- | --- | --- | --- |
| 2400_Stock | 2400 | 17-17-17-39 | 2T |
| 3000_XMP | 3000 | 16-17-17-36 | 1T |
| 3333_XMP | 3333 | 16-18-18-36 | 1T |
| 3333_Safe | 3333 | 16-19-19-36 | 1T |
| 3333_Fast | 3333 | 14-18-19-28 | 1T |
| 3600_Safe | 3600 | 16-20-20-36 | 1T |
| 3600_Fast | 3600 | 16-19-20-32 | 1T |
| 3800_Safe | 3800 | 16-21-22-38 | 1T |
| 3800_Fast | 3800 | 16-21-21-36 | 1T |
Going from stock all the way up to 3800_Fast yields a performance uplift of +32% in average FPS and +18% in lows. Keep in mind, however, that 2400_Stock is a poor baseline, with low frequency and loose timings. You will get most of these improvements even with a basic overclock, simply by turning XMP on. Beyond that, performance gains are still there, but not as big as I expected.
Do note that scaling might differ between CPUs. Faster CPUs will scale better with memory. Ryzen CPUs also scale better because the Infinity Fabric speed is tied to the memory speed. However, in this case StarCraft II is mostly running on a single core on one CCX (Core Complex). This means you won't get improvements from a faster Infinity Fabric as big as you would if a program was communicating across CCXs.
The 3800_Safe profile shows some performance regression even when tested multiple times. This is not caused by instability, since the 3800_Fast profile with tighter timings works without issues.
These AIDA64 results will vary between memory kits and systems, but they give a good general idea. Going from 2400_Stock all the way up to 3800_Fast resulted in +55% read/write/copy speeds and -31% latency.
We can again see a dip for the 3800_Safe profile; in this case only memory read and copy were affected. Upon closer investigation I found it's caused by the "tRDRD SCL" and "tWRWD SCL" subtimings. DRAM Calculator increases them from 5 to 6 in the safe preset, which is good for stability but negatively impacts performance. "tRDRD SCL" was responsible for the memory read regression and about half of the memory copy regression; "tWRWD SCL" was responsible for the rest.
On this CPU (R5 3600) memory write is limited because the CPU has only one CCD (Core Complex Die). In the Ryzen 3000 family, CPUs with two CCDs (e.g. the R9 3900X) have around double the memory write speed. But this shouldn't significantly affect gaming performance.
~ ~ ~
As for the amount of RAM used by StarCraft II, it usually stays under 4 GB, which means 8 GB of RAM is fine depending on background processes. More RAM might only help with consecutive loading times if you have a slow HDD (more data can stay cached).
There are two main tasks for the CPU in StarCraft II – game simulation and everything else (mainly pre-rendering). These tasks are very different from each other. Previously we saw up to a +32% increase in average FPS between DRAM profiles. An interesting question, then, is how much faster DRAM profiles speed up the game simulation compared to other tasks.
We see that both fast and slow frametimes are improved. However, slow frames are a combination of game simulation time and fast frames, so let's try to untangle this. First, separate fast and slow frames:
Now let's subtract the value of surrounding fast frames from slow frames. This will get us the game simulation time. We are assuming (see previous post):
slow frametime(t) = game simulation time(t) + fast frametime(t)
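The subtraction above can be sketched in Python. This is a minimal sketch under my own assumptions – the 14 ms threshold for splitting fast and slow frames and the averaging of nearest fast neighbours are illustrative choices, not the article's exact method:

```python
def extract_sim_times(frametimes, threshold_ms=14.0):
    """Estimate game simulation time for each slow frame by
    subtracting the average of its neighbouring fast frames,
    per: slow frametime(t) = game simulation time(t) + fast frametime(t).

    threshold_ms is a hypothetical cutoff separating fast from slow frames.
    """
    sim_times = []
    for i, ft in enumerate(frametimes):
        if ft <= threshold_ms:
            continue  # fast frame, nothing to extract
        # collect the adjacent frames that are fast
        neighbours = [frametimes[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(frametimes) and frametimes[j] <= threshold_ms]
        if neighbours:
            sim_times.append(ft - sum(neighbours) / len(neighbours))
    return sim_times

# e.g. a 20 ms slow frame surrounded by 9 ms and 8 ms fast frames
print(extract_sim_times([8.0, 9.0, 20.0, 8.0]))  # → [11.5]
```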
This gives us game simulation time nicely separated from the fast frames:
Finally, we can compare average lengths of frametimes and game simulation time between 2400_Stock and 3800_Fast.
| | Average fast frametime [ms] | Average slow frametime [ms] | Average game simulation time [ms] |
| --- | --- | --- | --- |
| 2400_Stock | 8.74 | 20.62 | 11.44 |
| 3800_Fast | 6.92 | 17.54 | 10.32 |
| Improvement | +26.2% | +17.5% | +10.9% |
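For clarity, the improvement row expresses how much shorter the new frametime is, i.e. the equivalent framerate gain. A quick sketch of that calculation (using the rounded table values, so the last digit may differ slightly from the figures above):

```python
def improvement(old_ms, new_ms):
    """Percent improvement when a frametime drops from old_ms to new_ms,
    expressed as the equivalent framerate gain: (old / new - 1) * 100."""
    return (old_ms / new_ms - 1) * 100

print(round(improvement(8.74, 6.92), 1))    # fast frames
print(round(improvement(20.62, 17.54), 1))  # slow frames
print(round(improvement(11.44, 10.32), 1))  # game simulation
```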
Fast frames benefit from faster DRAM the most (+26.2%). Game simulation time is improved as well, but not to the same degree (+10.9%). This means faster DRAM helps more with higher graphical presets and displaying more units on screen, but doesn't help as much with demanding game simulation tasks.
The performance of both is very important since they run on the same CPU thread. But as the game gets more bottlenecked by the game simulation, more fast frames are dropped, and the overall improvement sinks closer and closer to that +10.9%.
The previously measured +32% improvement comes from the fact that faster DRAM profiles produce a higher ratio of fast to slow frames. This is how average FPS can increase by +32% even though no individual frame type got 32% faster.
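To illustrate how the frame mix affects average FPS, here is a toy calculation. The frametimes come from the table above, but the slow-frame fractions (35% vs 25%) are purely my assumptions for the sake of the example – the real mix wasn't measured here:

```python
def avg_fps(fast_ms, slow_ms, slow_fraction):
    """Average FPS for a frame stream where slow_fraction of frames are slow."""
    avg_frametime = slow_fraction * slow_ms + (1 - slow_fraction) * fast_ms
    return 1000 / avg_frametime

# hypothetical mixes: the faster profile drops fewer fast frames,
# so a smaller share of its frames are slow
stock = avg_fps(8.74, 20.62, 0.35)
fast = avg_fps(6.92, 17.54, 0.25)
print(f"+{(fast / stock - 1) * 100:.0f}%")  # → +35% with these assumed mixes
```

With both frametimes improving by well under 30%, the shift in the mix alone pushes the average FPS gain past either per-frame number.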
- CPU: R5 3600 (6C12T @ 4.2 GHz, +200 MHz autoOC)
- CPU Cooler: Scythe Mugen 5 Rev.B
- Motherboard: B450 MSI Tomahawk (7C02v1C BIOS – 1.0.0.3 ABBA)
- RAM: 2x16 GB Kingston HyperX Predator (DDR4; Hynix CJR)
- GPU: Gigabyte RX 480 G1 Gaming 8GB (19.9.2 driver)
- Storage: ADATA SSD XPG GAMMIX S11 480GB (OS + StarCraft II)
- Display: 1920x1200 @60 Hz
- OS: Windows 10 Pro (build 18362)
- StarCraft II: 4.10.3.76114 (64bit)
- Graphic preset: Low + Ultra textures
I tested 4 minutes of the same replay (a custom Co-op map with a lot of things going on). Frametimes were captured with FRAPS – more on this testing methodology. All used timings are here.
GearDownMode was disabled in all tests. Command Rate significantly affects performance; in all tests it was set to 1T, apart from stock (2T).
Links to check out