Have you ever woken up and realized that something that felt like it happened yesterday was actually seven years ago?
That's DDR4 for me.
That's right: this stuff came out in 2014.
And now there's a new kid in town.
DDR5 desktop platforms are on the horizon, and it's time to get to know the fifth-generation memory technology that'll be boosting our speeds and capacities for at least the next few years, starting with a physical look at what's different and then diving deeper into what makes this new breed of RAM stand out.
My first experience with a full retail DDR5 memory kit.
On the surface, it doesn’t appear to be that different.
It even has the same 288 pins as DDR4 memory, but you won't be able to fit it into the same slot.
The key notch has been shifted, so only the most determined users will manage to mix up their memory generations now.
And it’s not without reason.
One of the most noticeable differences between DDR5 and DDR4 is instantly obvious on our bare board.
Take a look at this.
The power management integrated circuit, or PMIC, has been relocated from the motherboard onto the memory module itself.
The PMIC’s job is to convert one of the usual output voltages from your computer power supply, in this case five volts, to the lower 1.1 volts that the DDR5 chips on the module require.
This step was critical in achieving the signal integrity enhancements required to push DDR5 to speeds 50 percent faster than the previous generation.
And, if this alleged leaked roadmap is to be believed, even further.
One strange side effect of this: even though DDR5 runs at roughly 10 percent lower voltage than DDR4, which should reduce power draw, the onboard PMIC won't operate at 100 percent efficiency, so we may end up with a little bit of waste heat on each module.
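To put a rough number on that waste heat, here's a back-of-the-envelope sketch. The 5 W module draw and 90 percent PMIC efficiency below are made-up illustration figures, not published specs:

```python
# Rough waste-heat estimate for an on-module PMIC.
# ASSUMED numbers for illustration only: real module power
# draw and PMIC efficiency vary by design.
def pmic_waste_heat(chip_power_w, efficiency):
    """Power the PMIC pulls from the 5 V rail to feed the DRAM
    chips, minus what the chips use: dissipated as module heat."""
    input_power = chip_power_w / efficiency
    return input_power - chip_power_w

# A hypothetical module whose DRAM chips consume 5 W at 1.1 V,
# fed by a PMIC that is 90% efficient:
heat = pmic_waste_heat(5.0, 0.90)
print(f"~{heat:.2f} W of waste heat per module")
```

With those assumed numbers, each module dissipates a bit over half a watt of conversion loss, which is noticeable but hardly fan-worthy.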
G.SKILL informs me, however, that the clip-on RAM fans of the DDR2 era are unlikely to resurface.
Those were some of the worst things I’d ever seen.
They were loud, and the fans were constantly failing.
Moving the PMIC onto the modules also increases the cost of each individual module.
DDR5 modules are expected to be much more expensive than DDR4 modules of the same capacity after the more sophisticated PCB design and the early adopter fee are factored in.
In theory, some of this cost could be offset by removing power management from the motherboard, but given the ongoing global semiconductor shortage, not to mention the inclusion of PCI express gen 5 on these upcoming platforms, which comes with its own set of costly trace routing challenges, I’ll be surprised if this happens.
The good news is that DDR5 has some pretty amazing features that aren't immediately apparent on a spec sheet.
I'd understand if you thought the launch JEDEC DDR5 frequency of 4,800 mega transfers per second looked unexceptional next to something like this G.SKILL DDR4 kit on Newegg, which is rated at a blistering 5,300 mega transfers per second. That goes double when you consider that CAS latency, the number of RAM cycles required to fulfill a data request, is expected to roughly double compared to last gen.
But here's the catch.
Remember how we recently wrote a blog about how frequency alone doesn’t provide a complete picture of performance?
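One way to see why frequency alone is misleading: CAS latency is counted in memory-clock cycles, so the real-world wait in nanoseconds depends on both numbers. A quick sketch (the CL16 and CL40 kit figures below are illustrative examples, not official ratings):

```python
# First-word latency in nanoseconds. CAS latency is measured in
# memory-clock cycles, and the memory clock runs at half the
# transfer rate (it's *double* data rate).
def cas_latency_ns(cl_cycles, mega_transfers):
    clock_mhz = mega_transfers / 2           # e.g. 3200 MT/s -> 1600 MHz
    return cl_cycles / clock_mhz * 1000      # cycles / MHz -> nanoseconds

# Illustrative kit numbers:
print(cas_latency_ns(16, 3200))   # a DDR4-3200 CL16 kit -> 10.0 ns
print(cas_latency_ns(40, 4800))   # a DDR5-4800 CL40 kit -> ~16.7 ns
```

So a doubled CL number is partly offset by the higher transfer rate, and the gap in actual nanoseconds is much smaller than the raw cycle counts suggest.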
For starters, the memory controller in your DDR4 compatible CPU was not meant to handle such high rates.
As with any sort of overclocking, it's a bit of a gamble whether or not it'll work with super-fast modules like those.
And, at a certain point, there are inherent bottlenecks in the memory ICs, the chips on the module, that prevent them from fully taking advantage of any higher speed.
This section is a little tricky, but stick with me.
Each IC contains 2D grids of bits, or zeros and ones, which are referred to as banks.
These banks are grouped together into bank groups, and every time a bank group fires off the data needed by the CPU, that bank group requires some time to recover.
Other bank groups fire one after the other to fill a burst buffer during that time.
You can think of it as a minigun: each barrel is a bank group, and the bullets are data bits firing into the buffer. But what if the module is running at such a fast speed that we roll back around to our first bank group before it's recovered?
That would be a problem, and that's exactly the potential bottleneck.
As a result, DDR5 increased the number of bank groups from four to eight in order to remedy the problem.
That gives each bank group a lot more time to cool down, and it all but guarantees that we'll be able to take advantage of speeds far beyond the 6,000 mega transfers per second of first-wave kits like this Trident Z5 here. And it gets even more interesting if you're into this sort of thing, which you obviously are if you've read this far.
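The minigun rotation can be sketched as a toy model: sustained fire works only if a full rotation through all the bank groups takes at least as long as one group's recovery time. Every timing number below is invented purely for illustration:

```python
# Toy model of the bank-group "minigun": we cycle through N bank
# groups, each needing `recovery_ns` before it can fire again.
# Sustained speed works only if a full rotation through all groups
# takes at least as long as one group's recovery time.
def can_sustain(bank_groups, burst_interval_ns, recovery_ns):
    rotation_time_ns = bank_groups * burst_interval_ns
    return rotation_time_ns >= recovery_ns

# With a (made-up) 20 ns recovery and a burst fired every 3 ns:
print(can_sustain(4, 3.0, 20.0))   # 4 groups: 12 ns rotation -> False
print(can_sustain(8, 3.0, 20.0))   # 8 groups: 24 ns rotation -> True
```

In this made-up scenario, four bank groups can't keep up, but doubling to eight gives each group enough slack to recover before its turn comes around again, which is the whole point of the change.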
The thing is, while the minigun analogy helps us grasp bank group cooldowns, in the real world, transferring ones and zeros to the CPU individually would be extremely wasteful.
Instead, imagine that our minigun is firing all of these bits into an intermediary burst buffer.
Then we can picture the buffer's contents as a single shotgun round full of bits, fired at the CPU all at once.
Isn’t that a little more powerful?
DDR4 modules have an eight-bit burst length and are connected to the CPU through a single 64-bit bus, or communication channel.
So, with an eight-bit burst length, we could say that our fully automatic DDR4 shotgun fires 64-pellet rounds.
Bang, bang, bang, bang, bang, bang, bang, bang.
So, if we multiply 64 bits by eight rounds, we get 512 bits, or 64 bytes of data per burst, before our bank groups need to reload.
Have you been following along thus far?
DDR5 modules change this significantly.
Instead of a single 64-bit channel, we have two 32-bit sub-channels that can run separately.
So, let’s get back to our shotgun.
We fire smaller shells with only 32 bits each, but our burst length, or magazine capacity, is doubled to 16.
So, if we do the math, 32 bits times a burst length of 16 gives us 512 bits, or 64 bytes per burst, the same as DDR4.
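The arithmetic above can be checked in a couple of lines:

```python
# Burst payload: bus width (bits) x burst length / 8 -> bytes.
def burst_bytes(bus_width_bits, burst_length):
    return bus_width_bits * burst_length // 8

ddr4 = burst_bytes(64, 8)    # one 64-bit channel, burst length 8
ddr5 = burst_bytes(32, 16)   # one 32-bit sub-channel, burst length 16
print(ddr4, ddr5)            # 64 64 -- both match a 64-byte CPU cache line
```

That 64-byte figure is no accident: it matches the cache line size of mainstream x86 CPUs, so one burst fills exactly one cache line.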
Except now we have two barrels, each with its own 16-round magazine, that can fire independently.
But don’t go overboard.
This isn't dual-channel memory, and you can't simply add up the total potential capacity.
In the workstation and server arena, you'll still want to run multiple DDR5 modules in dual-channel mode, or with even more channels, to increase your memory bandwidth.
Efficiency and latency are the true benefits of these distinct sub channels.
In DDR4, if you only have 32 bits of useful data for a burst, the rest of the buffer just gets filled with garbage before it's sent to the CPU.
That takes time, which means the CPU has to wait.
Not anymore. With DDR5, if 32 bits is all that's needed right now, a sub-channel can send just those 32 bits, and the CPU doesn't have to wait. And there's more.
DDR5 integrated circuits also get a reliability upgrade: individual memory chips now have a simple form of ECC, or error-correcting code, that is completely transparent to the end user.
It can't be turned off, and it helps each IC maintain stability during high-speed data storage and transfers.
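To see the principle behind ECC, here's a minimal classic Hamming(7,4) single-error-correcting code. This is purely illustrative: DDR5's actual on-die scheme is vendor-internal and different from this textbook example.

```python
# Minimal Hamming(7,4) code: 4 data bits protected by 3 parity
# bits, able to locate and fix any single flipped bit.
def encode(d):                       # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def decode(c):                       # c: 7-bit codeword, maybe corrupted
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

word = [1, 0, 1, 1]
cw = encode(word)
cw[3] ^= 1                           # simulate one flipped bit in storage
print(decode(cw) == word)            # True: the error was corrected
```

The parity bits are the overhead the DRAM vendor pays so that a single flipped cell, increasingly likely at these densities and speeds, never reaches the CPU.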
In my opinion, this was long overdue, and I'm glad we're finally getting it, especially considering that unregistered DDR5 DIMMs, the kind that go in your regular desktop computer, are expected to hit capacities of 128 gigabytes on a single stick in the coming years, while load-reduced server DIMMs could go as high as four terabytes per module with a combination of improved density and die stacking.
But let’s take a step back for a moment.
Despite its advantages, DDR5 isn't a miracle silver bullet.
Overclocked-spec DDR4 is projected to outperform base-spec DDR5 at the same frequency, say 4,800 mega transfers per second.
You might assume this isn't a problem.
Can't you just overclock your DDR5 and go faster?
It might not be as straightforward as that.
Remember that on-module power management IC?
It turns out that there are two distinct varieties of them.
One is not intended to exceed the default voltage range of 1.1 to 1.435 volts.
The other type, which must be deliberately built into your module at manufacturing time, is programmable and can go as high as it wants; there doesn't appear to be a hard limit on that one.
As a result, expect to see some fairly unique modules in the future, as well as some pretty exotic cooling.
Since DDR5 is also getting an SPD, or serial presence detect, chip facelift, even non-XMP modules should end up being quite intriguing.
Instead of only holding default frequency and latency settings, which normally include both a stock and an overclocked XMP profile, the SPD chip now also handles signaling to the power management IC and any other microcontrollers on the module, such as RGB lighting controllers.
As a result, I expect to see more innovative lighting applications than we've ever seen before.
And, believe it or not, that is exactly what the industry requires.
There’s more RGB.
Thank you for taking the time to read this.