News and Press

Portfolio company achievements and cleantech industry news.

Diablo Technologies Makes Memory Cheap With Memory1

Diablo Technologies is spruiking a new kind of memory.

Diablo’s new product, Memory1, is a larger, cheaper form of main memory that uses flash storage instead of DRAM to provide a cheap-and-deep option for in-memory applications.

The explosion in interest in data and analytics in recent years has spurred the creation of a host of in-memory options for performing calculations, such as Apache Spark, Redis, Hadoop and others. Instead of constantly fetching data from persistent storage into a limited amount of memory, entire datasets can be loaded into memory for processing, which provides dramatic improvements in speed.

But there’s a catch.

DRAM is still relatively expensive, and even with large budgets, there’s only so much memory that can be physically attached to a server. Getting large working sets into memory is still a tricky proposition.


Memory1 uses an operating system driver to mediate access to RAM and Memory1 (Source: Supplied)

Enter Memory1.

Memory1 puts the same kind of flash that you’d find in SSDs into DDR4-compatible DIMMs, just like sticks of memory. Unlike Diablo’s ULLtraDIMM product, Memory1 is not persistent, so the flash doesn’t store state, again just like memory.
And unlike DRAM, you can fit a lot more flash onto a single DIMM: 256GB, which is four times the largest DRAM module, according to Diablo. Flash itself is cheaper than DRAM, and getting cheaper every day, so putting a lot of memory into a server becomes much more affordable.

There is a trade-off: flash isn’t as fast as DRAM, so some DRAM is still required to achieve performance. Diablo have an operating system driver that acts as a cache manager, moving data from Memory1 into and out of DRAM as required.
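The idea of a driver shuttling hot data into a small fast tier and cold data out to a large slow tier is essentially a cache manager. Diablo hasn’t published its driver internals, so the following is only an illustrative sketch of that general pattern, using a simple LRU policy and invented names (`TieredMemory`, `dram`, `flash`) to stand in for the two tiers.

```python
from collections import OrderedDict

class TieredMemory:
    """Toy two-tier cache: a small fast tier backed by a large slow tier.

    This is a conceptual sketch of the DRAM-over-flash idea, NOT Diablo's
    actual Memory1 driver, which operates on memory pages in the kernel.
    """

    def __init__(self, dram_pages):
        self.dram_pages = dram_pages   # capacity of the fast tier (in pages)
        self.dram = OrderedDict()      # page -> data, maintained in LRU order
        self.flash = {}                # backing tier: larger but slower

    def read(self, page):
        if page in self.dram:                  # fast-tier hit
            self.dram.move_to_end(page)        # mark as most recently used
            return self.dram[page]
        data = self.flash.pop(page)            # miss: promote from slow tier
        self._install(page, data)
        return data

    def write(self, page, data):
        if page in self.dram:
            self.dram.move_to_end(page)
            self.dram[page] = data
        else:
            self.flash.pop(page, None)         # page moves up on write
            self._install(page, data)

    def _install(self, page, data):
        if len(self.dram) >= self.dram_pages:  # fast tier full:
            cold, cold_data = self.dram.popitem(last=False)
            self.flash[cold] = cold_data       # demote least-recently-used
        self.dram[page] = data
```

The hot working set stays in the fast tier while rarely touched pages sit in the cheap, deep tier, which is the trade-off Memory1 is making at hardware scale.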

Memory1 isn’t as fast as DRAM, but then Diablo point out that DRAM isn’t as fast as you think it is. If you’re running something on more than a single core, which all massively parallel processing applications do, then the way Non-Uniform Memory Access (NUMA) works means you end up having to fetch data from memory closer to another CPU. In practice, you don’t get the theoretical maximum bandwidth and low latency of single core memory access anyway.

As with all trade-offs, it’s more about what you can do for the equivalent amount of effort. Having storage of any type is a trade-off, because ideally we’d like to keep all data in memory all the time. Storage is just slow memory, and Memory1 is the fastest slow memory available thus far.

There are a lot of intriguing possibilities for affordable amounts of memory in the terabyte range, and Diablo might just do well by making them, well, possible.

Contact Us

We are always on the lookout for entrepreneurial teams with a vision of how to make a significant impact on a large market. Whether you are already on your way or just getting started, we want to hear from you.


10 Tozeret Ha'aretz
Second Floor
Tel Aviv, Israel

T: +972-3-644-6611