FB-DIMMs still alive and kicking
But Nvidia may see it off
IT STANDS ACCUSED of having the latency of a turtle, and it is said to eat power like a hungry hog - that, at least, is how certain memory buffs describe the Fully Buffered DIMM these days.
There are also persistent rumours that Intel will can the whole FB-DIMM thing in its server chipsets from next year, but Intel itself vehemently denies them.
Indeed, Intel says the FB-DIMM bus provides for simultaneous read and write transactions (not unlike, say, PCI-Express), plus reads from multiple DIMMs at once, meaning there is no dead time between data transfers.
With their point-to-point links, FB-DIMMs were also supposed to solve the impedance problem, allowing more DRAMs on the same bus - remember the Iwill (now Flextronics) dual Woodcrest mainboard with 16 FB-DIMM sockets? With 8GB FB-DIMMs, that's 128GB of RAM in a deskside workstation - very nice for EDA simulations of next-generation north bridges, for instance. And, of course, it needs only around a quarter of the pins per channel.
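For the napkin-maths crowd, a quick Python sketch of those sums - note the per-channel pin counts (roughly 240 pins for a parallel DDR2 channel against roughly 69 for a serial FB-DIMM channel) are our assumptions for illustration, not anything off Intel's spec sheet:

SOCKETS = 16          # FB-DIMM sockets on that Iwill board
DIMM_GB = 8           # capacity per FB-DIMM

DDR2_PINS = 240       # assumed pins per parallel DDR2 channel
FBD_PINS = 69         # assumed signal pins per serial FB-DIMM channel

print(f"Total RAM: {SOCKETS * DIMM_GB} GB")        # 128 GB
print(f"Pin ratio: {FBD_PINS / DDR2_PINS:.0%}")    # ~29%, about a quarter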
The problem is still the heat generated by those 16 FB-DIMMs: at the recent Computex, I nearly burned my fingers brushing the top (not to mention the even hotter side!) of an edge DIMM in that 16-DIMM board setup. Each AMB memory buffer chip on a 667MHz FB-DIMM draws over 6W of extra power, and piles that heat on top of what the DRAM dies already produce. After all, this is not just simple buffering, but a major bus conversion too.
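The maths on the heat is equally simple; a minimal sketch, taking the article's 6W-plus per-AMB figure at face value:

SOCKETS = 16          # fully populated board, as seen at Computex
AMB_WATTS = 6.0       # extra draw per AMB on a 667MHz FB-DIMM, per the figure above

# 96 W of buffer-chip heat before a single DRAM die is counted
print(f"AMB overhead alone: {SOCKETS * AMB_WATTS:.0f} W")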
Intel still vigorously defends FB-DIMMs in this space. And, after all, the Tigerton MP CPUs with their quad-FSB Caneland chipset will also use FB-DIMMs. The most interesting question, however, is what kind of memory the upcoming Nvidia Nforce chipset for Intel's dual-FSB Woodcrest and Clovertown chips will use. If that ends up being standard DDR2 and/or DDR3, it could accelerate the (probably inevitable) end of the fully buffered DIMM saga.
The INQuirer