Data centers are often characterized by the total amount of DRAM they contain.
Recent trends have opened a gap between processor performance and memory performance.
Processors have scaled through the addition of more cores per socket. Memory bandwidth has not kept up, and neither has the amount of memory you can add to a server. In fact the effect is the OPPOSITE of what you would want - as you add more memory, the speed goes down (we are not talking about the 3 DIMM memory modules you put in your home computer, but 12 or 24 DIMMs per server).
As you put the power of 4 computers into one computer box, you ALSO need to put in 4 times the memory (to keep the ratio of memory to processing power the same).
That has become harder and harder to do - when you load up a server with that much memory, the memory runs SLOWER (because of electrical loading effects from having so many DIMMs hanging off the same processor's memory bus).
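The loading tradeoff can be sketched with some back-of-the-envelope numbers. The figures below are only representative of DDR3-era RDIMM platforms (the per-DIMM capacity, channel count, and downclocking steps are all assumptions for illustration, not any specific server's datasheet):

```python
# Illustrative sketch: as more DIMMs share a memory channel, electrical
# loading typically forces the channel down to a lower clock.

DIMM_GB = 8                      # hypothetical capacity per DIMM
CHANNELS = 3                     # assumed memory channels per socket

# speed (MT/s) the channel typically drops to at each DIMMs-per-channel load
speed_at_load = {1: 1333, 2: 1066, 3: 800}

for dimms_per_channel, speed in speed_at_load.items():
    capacity = DIMM_GB * CHANNELS * dimms_per_channel
    print(f"{capacity:3d} GB per socket -> memory runs at {speed} MT/s")
```

So under these assumed figures, tripling the memory per socket costs you roughly 40% of your memory clock - exactly the capacity-versus-speed squeeze described above.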
This is why there was a COMPULSION to develop an LRDIMM-type solution for Romley. And this is why Intel was encouraging Inphi to infringe NLST IP.
Inphi failed to deliver - IDTI and TXN are out. There is currently no other chipset supplier available (or willing) to do this. TXN is probably out because of its settlement in NLST vs. TXN a couple of years ago. Some years back there was an outfit run by former AMD exec Fred Weber - that company was being pushed by Intel - however MetaRAM conceded to NLST and went out of business.
NLST HyperCloud lets you ADD the memory you want to add - to the super-powerful processors you are using or are about to bring out.
It allows the memory to run at full speed, 1333MHz (the top speed for server memory - not counting consumer home PCs that have been overclocked).
So you can see that with access to a technology like NLST HyperCloud, you can scale the memory to match the scaling of processor power - and actually achieve the purpose that the increase in processor power was supposed to deliver.
How does that help? As moneybaloon said - you replace 4 servers with ONE server and you cut down plant size.
Obviously if you still have the same memory in 1 server that you had in 4, the memory power consumption may be the same (it can actually be lower if you are using newer memory) - but the cost of powering 4 computer boxes is reduced.
When power per server is reduced, you need less backup generating capacity (or alternatively you can put in more servers).
When power requirements are reduced, you need less UPS capacity.
So you can see that for every watt of power, the memory itself is only a small part of the cost - the rest goes into the server boxes, the UPS and the backup generators.
If you can do with one server what you did with 4 servers - you cut plant power consumption to roughly 1/4 (or slightly more than that) and you cut the building cost for the data center to roughly 1/4 (or slightly more than that).
Basically the cost of a data center per GB (gigabyte) of DRAM goes down as you are able to cram more memory into (more powerful) server boxes.
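The consolidation arithmetic above can be worked through with placeholder numbers. Every figure here (per-server power, memory power, the facility overhead multiplier, and the plant cost per watt) is a hypothetical assumption chosen only to show the shape of the calculation:

```python
# Back-of-the-envelope math for replacing 4 servers with 1 equally
# provisioned server, keeping the total DRAM the same, and seeing what
# happens to facility power and to plant cost per GB of DRAM.
# All numbers are hypothetical placeholders, not measured figures.

SERVERS_BEFORE, SERVERS_AFTER = 4, 1
BASE_WATTS_PER_SERVER = 250     # board, CPUs, fans, PSU losses (assumed)
MEMORY_WATTS_TOTAL = 160        # same total DRAM either way (assumed)
PUE = 1.8                       # facility overhead: UPS, cooling, generators (assumed)
TOTAL_DRAM_GB = 384
COST_PER_WATT = 10.0            # $ of plant build-out per watt of load (assumed)

def facility_watts(servers):
    """Total facility draw: IT load scaled by the overhead multiplier."""
    it_load = servers * BASE_WATTS_PER_SERVER + MEMORY_WATTS_TOTAL
    return it_load * PUE

before = facility_watts(SERVERS_BEFORE)
after = facility_watts(SERVERS_AFTER)
print(f"facility power: {before:.0f} W -> {after:.0f} W")
print(f"plant cost per GB of DRAM: "
      f"${before * COST_PER_WATT / TOTAL_DRAM_GB:.2f} -> "
      f"${after * COST_PER_WATT / TOTAL_DRAM_GB:.2f}")
```

Note the result lands a bit above a clean 1/4 reduction - the memory draw is unchanged, so only the per-box overhead shrinks 4-to-1. That is the "(or slightly more than that)" qualifier in the argument above.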