Monday, November 21, 2011

Can Netlist Get Facebook?

HyperCloud: Facebook perf improvement 30% or more... 19-Nov-11 09:56 pm
1) Facebook's estimated number of servers is over 100,000. They grew from 30,000 to 60,000 servers in the six months leading up to June 2010.

2) Facebook released its "server configuration" specification for 6 different apps:

http://semimd.com/blog/2011/11/08/facebo...

For Database applications, Facebook specified it is looking for 144GB of memory.

3) NLST released a white paper demonstrating a performance improvement of more than 30%, even when servers were populated with 144GB of HyperCloud memory, compared to regular RDIMM.

http://www.netlist.com/products/ppt/Netl...

Some concurrent queries ran as much as 10x faster(!).

BTW, HP is also a Sybase partner.

4) Many of Facebook's servers are HP ProLiant DL380 G7s, capable of being populated with up to 384GB of regular RDIMM. It is one of the most popular servers in the world.

http://h10010.www1.hp.com/wwpc/us/en/sm/...

5) NLST already demonstrated HyperCloud on the HP ProLiant DL380 at the SuperComputing conference back in 2009.


Conclusion:
1) The recent EXCLUSIVE collaboration agreement between NLST and HP will wind up targeting Facebook's performance needs.

2) Facebook could use 18 HyperCloud 8GB modules (retail price $192 each at Memory4Less) to populate a 144GB server, for a total of approximately $3,500 per server.

3) Facebook alone could represent a revenue potential of over $300M for NLST.
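Putting the numbers from points 1-4 together (a quick sanity check in Python; all prices and counts are the post's estimates, not confirmed figures - note that 144GB at 8GB per module takes 18 DIMMs):

```python
# Back-of-the-envelope Facebook revenue math (all inputs are the
# post's estimates, not confirmed figures).

MODULE_PRICE = 192        # retail price of one 8GB HyperCloud DIMM (Memory4Less)
MODULE_GB = 8
TARGET_GB = 144           # Facebook's database-server memory spec
SERVERS = 100_000         # estimated Facebook server count

modules_per_server = TARGET_GB // MODULE_GB          # 18 modules
cost_per_server = modules_per_server * MODULE_PRICE  # $3,456 per server
revenue_potential = cost_per_server * SERVERS        # ~$345M total

print(modules_per_server, cost_per_server, revenue_potential)
```

Even with rounding, the order of magnitude - a few hundred million dollars from one customer - is what matters against a $90M market cap.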


NLST's $90M market cap is ridiculous!!!

Netlist at UBS Conference

The UBS conference presentation goes over the Romley market, but also has some estimates for the DDR4 market (3 years from now) - Chris Lopes extrapolates from the conservative adoption estimates up to a $7.5B figure.


quote:
----
at the 02:25 minute mark:

The market for cloud server units is expected to grow at about a 20% clip over the next 4 years.

So it's an exciting market for us.

If we take a very quick snapshot of the size of the market for us.

Industry analysts are estimating about 20% of the newest and latest Intel (INTC) family of servers - Romley family - would use a "load reduced" or a "rank multiplied" memory.

And that's what we call our "HyperCloud" memory.


at the 02:55 minute mark:

Whether you agree with that or not, let's just use a ONE percent (1%) of that number - to keep the math really simple.

You get an idea of how big and how fast this market can grow.

So if we use a 1% estimate - there's about 9M servers sold in the world this year - so let's take 1%, let's call it a 100,000 servers for next year.

And each server that uses high density memory typically fully loads that memory - and that can be anywhere from 12 to 24 sockets (DIMM sockets/slots) in each of these servers.

Let's just use 10 to 12 (sockets/slots) to keep the numbers easy again.

So we take a 100,000 servers - we take 10 DIMMs per .. 10 memory modules in each one - you've got a million (1M) units.

Well millions of anything is not a great market .. USUALLY. Because we are talking semiconductors in the conference here today .. uh .. chips are $10 to $20, $30 .. but in our case we are selling subsystems.

And our subsystems average between the 16GB and 32GB around $500 each.


at the 03:50 minute mark:

So even at a very very conservative estimate .. 1% of the servers, only 10 DIMMs per server, we are looking at a $500 ASP (average selling price) or $500M in revenue for next year.

Now that's a pretty significant growth from where we are today .. so .. hence the excitement about the opportunity in this market.
----
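Lopes's "keep the math simple" estimate restates cleanly (all figures are his, including the deliberate rounding):

```python
# Lopes's conservative next-year estimate, restated in integer math.

servers_2012 = 9_000_000     # ~9M servers sold worldwide this year
share_pct = 1                # a very conservative 1% adoption rate
dimms_per_server = 10        # low end of the 12-24 socket range, rounded down
asp = 500                    # average selling price for 16GB-32GB modules

units = servers_2012 * share_pct // 100 * dimms_per_server  # 900k, "call it a million"
revenue = units * asp                                       # $450M, "call it $500M"

print(units, revenue)
```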



quote:
----
at the 15:10 minute mark:

So now we .. remember we talked about what the market looked like for next year and we just said .. "well if it was just 10%" .. uh .. or 1% rather .. 1% of the market.

We had a $500M market opportunity.

Now let's look out for 2014 .. because as we increase to DDR4 speeds, the frequency goes WAY up .. and when the frequency goes up, the effect of the bus .. the memory bus is huge.


at the 15:30 minute mark:

So the industry's estimating they'll need 50% of all servers .. and it will be about 13M to 14M units, up from 9M today .. uh .. will require some kind of "load reduction" (technology) .. HyperCloud-type technology (or the LRDIMM which is infringing NLST IP - though LRDIMMs have latency issues).

If we take 10% of that, let's call it 1.3M servers and let's use 12 DIMMs per server, as an average .. now the densities move up .. so instead of 16GB and 32GB today, we'll talk 32GB and 64GB .. 3 years from now .. we're looking at ABOUT a $7.5B market size.


at the 16:05 minute mark:

So .. significant growth .. we think we are well positioned for where the industry NEEDS to go, where it wants to go, and how to get there.

And our technology scales very well .. along that.
----

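The 2014 figure can be reconstructed the same way (inputs are from the talk; the per-module ASP is backed out, since Lopes never states it explicitly):

```python
# Reconstructing the $7.5B 2014 estimate from the talk's inputs.
# The implied ASP is backed out - Lopes doesn't state it directly.

servers_2014 = 13_000_000    # 13M-14M units needing load reduction by 2014
share_pct = 10               # "take 10% of that"
dimms_per_server = 12

units = servers_2014 * share_pct // 100 * dimms_per_server  # 15.6M modules
implied_asp = 7_500_000_000 / units                         # ~$480 per module

print(units, round(implied_asp))
```

The implied ~$480 ASP is in the same ballpark as today's $500, so the growth in his model comes almost entirely from unit volume, not pricing.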

I hope we can see more applications using the future NVvault, for example, as a standard non-volatile memory removing the need for a UPS (Uninterruptible Power Supply) altogether.

NetVault/NVvault is used for RAID storage because, in case of a power outage, sensitive RAID status information still sitting in memory would otherwise be lost.

If you use NetVault/NVvault you can be sure that that info will not be lost.

It is like an automatic "Hibernate" feature (as on Windows computers) - there it hibernates to the hard disk, while with NVvault it happens on the memory module itself (which carries flash memory to match the DRAM capacity), and a backup is made to that flash.

This is for servers etc. - however if you talk about RAID products for the home user, you can see how valuable this can be.

The replacement of battery backup with "supercapacitor" backup (as in the more recent NVvault vs. the earlier battery-backed NetVault-BB) means you don't need periodic maintenance to replace the battery.

While this saves money in server applications, for home appliances or home RAID storage it could make viable many products which were earlier not feasible. If the memory module relies on its own battery (so-called "battery-backed" non-volatile memory), the home user would have to periodically replace that battery - and that type of intervention is not practical for products that tout plug-and-play use (esp. products which run on mains AC supply and don't have their own battery).

In general NVvault may be useful for any appliance that runs on mains AC (which can be prone to glitches or outages).
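The backup-and-restore behavior described above can be sketched as a toy model (purely illustrative - NVvault's actual firmware and interfaces are not public at this level of detail):

```python
# Toy model of an NVDIMM-style module: DRAM serves normal traffic,
# flash (plus supercapacitor energy) preserves contents across a
# power loss. Illustrative only - not Netlist's actual design.

class NVDIMM:
    def __init__(self):
        self.dram = {}     # volatile working memory
        self.flash = {}    # non-volatile backup store

    def write(self, addr, value):
        self.dram[addr] = value    # normal writes hit DRAM only

    def power_loss(self):
        # The supercapacitor keeps the module alive just long enough
        # to copy DRAM into flash; then the DRAM contents vanish.
        self.flash = dict(self.dram)
        self.dram = {}

    def power_restore(self):
        # On restore, the saved image is copied back into DRAM.
        self.dram = dict(self.flash)

raid = NVDIMM()
raid.write(0x10, "stripe-metadata")
raid.power_loss()
raid.power_restore()
print(raid.dram[0x10])   # the RAID metadata survives the outage
```

This is exactly the "hibernate to the module itself" idea: the host never sees the flash, it just finds its data intact after power returns.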

Netlist

Data centers are often categorized by the total DRAM storage in that data center.

Recent trends have created a gap between processor performance and memory performance (which has not kept up).

Processors have scaled with the addition of multiple cores. However, memory bandwidth has not kept up, and neither has the amount of memory you can add to a server. In fact the effect is the OPPOSITE of what you would want - as you add memory, the speed goes down (we are not talking about the 3 DIMMs you put in your home computer, but 12 or 24 DIMMs per server).

As you put the power of 4 computers into one computer box, you ALSO need to put 4 times the memory (to keep the ratio of memory to processing power the same).

That has currently become harder and harder to do - when they do that, the memory runs SLOWER (because of electrical loading effects from having so much memory connected to the same processor).

This is why there was a COMPULSION to develop an LRDIMM-type solution for Romley. This is why Intel was encouraging Inphi to infringe NLST IP.

Inphi failed to deliver - IDTI and TXN are out. There is currently no other chipset supplier available (or willing) to do this. TXN is probably out because of its settlement in NLST vs. TXN a couple of years ago. Some years back there was an outfit run by a former AMD exec, Fred Weber - that company (MetaRAM) was being pushed by Intel - however MetaRAM conceded to NLST and went out of business.

NLST HyperCloud allows you to ADD the memory you want to add - to the super powerful processors you are going to be bringing out or are using.

It allows the memory to run at full speed 1333MHz (that is the top speed for server memory - not talking about consumer home PCs that have been overclocked).
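The loading effect can be sketched with typical DDR3 numbers (the derating values below are illustrative industry-typical figures, not from this post - exact speeds depend on CPU, voltage, and ranks):

```python
# Typical DDR3 RDIMM bus derating as DIMMs-per-channel (DPC) rises.
# Illustrative values only - exact numbers vary by platform.
RDIMM_SPEED = {1: 1333, 2: 1066, 3: 800}   # MT/s at 1, 2, 3 DPC

def bus_speed(dimms_per_channel, load_reduced=False):
    """A load-reduced / rank-multiplied module presents a single
    electrical load per slot, so the bus can stay at full speed
    even when every slot is populated."""
    if load_reduced:
        return 1333
    return RDIMM_SPEED[dimms_per_channel]

# Fully populated channel: plain RDIMMs drop to 800 MT/s, while a
# HyperCloud-style load-reduced module holds 1333 MT/s.
print(bus_speed(3), bus_speed(3, load_reduced=True))
```

That gap - 800 vs. 1333 at full population - is the whole value proposition: maximum capacity without giving up speed.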

So you can see that with access to a technology like NLST HyperCloud, you can scale the memory to fit the scaling of processor power - and achieve the purpose that the increase in processor power was meant to deliver.

How does that help? As moneybaloon said - you replace 4 servers with ONE server and you cut down plant size.

Obviously if you still have the same memory in 1 server that you had in 4, the memory power consumption may be the same (actually it can be lower if you are using newer memory) - but the cost of powering 4 computer boxes is reduced.

When power to server is reduced - you need less backup generating power (or alternatively you can put in more servers).

When power requirements are reduced - you need less UPS power.

So you can see that for every watt of power - the cost of memory power requirements is a small part of it - the other parts are the server boxes, the UPS and the backup generators.

If you can do with one server what you did with 4 servers - you cut down plant power consumption to 1/4 (or slightly more than that) and you cut down the building costs for the data center by 1/4 (or slightly more than that).

Basically the cost of a data center per GB (gigabyte) of DRAM goes down as you are able to cram more memory into (more powerful processor) server boxes.
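The consolidation argument above can be made concrete with rough arithmetic (all inputs below are hypothetical round numbers chosen to illustrate the shape of the savings, not measured figures):

```python
# Rough consolidation arithmetic: 4 boxes replaced by 1 box holding
# the same total DRAM. All inputs are hypothetical round numbers.

BOX_OVERHEAD_W = 250     # chassis, fans, CPUs, PSU losses per server box
DRAM_W_PER_GB = 0.5      # memory power scales with capacity either way
TOTAL_GB = 576           # e.g. 4 x 144GB consolidated into one server

def plant_watts(boxes, total_gb):
    # Per-box overhead plus capacity-proportional DRAM power.
    return boxes * BOX_OVERHEAD_W + total_gb * DRAM_W_PER_GB

before = plant_watts(4, TOTAL_GB)   # 1288 W across four servers
after = plant_watts(1, TOTAL_GB)    # 538 W in one dense server

# UPS capacity, generator capacity and floor space all scale with
# plant watts, so the same savings ratio flows through to each.
print(before, after, round(after / before, 2))
```

The memory power itself doesn't shrink - what shrinks is everything wrapped around it, which is exactly why cost per GB of DRAM falls as servers get denser.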

Monday, March 14, 2011

Programmable Processing

What is programmable processing changing? (video, white paper)

"SoC FPGAs - semiconductor devices that integrate an FPGA fabric, a hard-core CPU subsystem, and other hard IP - have reached a 'tipping point' and will see broad adoption over the next decade, giving system designers more choices. For systems developed on FPGAs, these SoC FPGAs complement more than a decade of soft-core CPUs and other soft IP. A combination of technology, business, and market factors has driven this tipping point, and vendors such as Altera, Cypress Semiconductor, Intel, and Xilinx have all announced or begun shipping SoC FPGA devices."

Friday, February 18, 2011