Burbank, Calif.-based Condusiv Technologies has released version 6.0 of its V-locity software, which improves application performance in virtualized environments. The key enhancement is to the server-side DRAM read caching engine, which is now three times faster than in the previous version.
“V-locity is now our crème de la crème product on this,” said Brian Morin, Condusiv’s SVP of Global Marketing. “It shares much the same technology as Diskeeper, but V-locity has a DRAM read caching engine, which Diskeeper does not. So the two of them working together gives a very significant reduction in I/O.”
As recently as the end of 2012, Diskeeper was by far the more significant product in terms of contribution to the company’s revenues, with V-locity accounting for around 10 per cent. However, the dramatic rise of virtualization and flash has driven a correspondingly dramatic rise in V-locity’s significance.
“Diskeeper was our flagship offering for years and years, but today, V-locity has exceeded Diskeeper in terms of sales, because virtualization and flash have made it much more important,” Morin said.
V-locity is typically used for I/O-intensive applications, where I/O is much more likely to be random than sequential.
“V-locity is not for every application,” Morin said. “I/O intensive ones like SQL, Oracle and SAP which have I/O in small random blocks is where we are a perfect fit. For applications which have more large sequential I/O, we aren’t as good a fit.”
V-locity 6.0’s secret sauce comprises two key technologies that reduce I/O from VMs to storage. The first, which it shares with Diskeeper, is Condusiv’s proprietary IntelliWrite engine, which adds a layer of intelligence to the Windows OS to eliminate I/O fracturing, so writes (and subsequent reads) are processed in a more contiguous and sequential manner. The second, which is unique to V-locity, is IntelliMemory DRAM read caching. The big performance boost in 6.0 comes from enhancements to this read caching.
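To make the fragmentation point concrete, here is a toy sketch (not Condusiv’s code) of why contiguity matters: under a simple model where each discontiguous extent on disk costs one I/O operation, the same data written as scattered extents requires several times more I/Os than the same data written as one contiguous run.

```python
# Toy illustration, not Condusiv's implementation: each (start, length)
# extent that is not adjacent to the previous one costs a separate I/O.

def count_io_ops(extents):
    """Count I/Os needed, merging extents that are contiguous on disk."""
    ops = 0
    prev_end = None
    for start, length in sorted(extents):
        if start != prev_end:      # gap on disk -> a new I/O must be issued
            ops += 1
        prev_end = start + length
    return ops

fragmented = [(0, 4), (100, 4), (200, 4), (300, 4)]  # 4 scattered extents
contiguous = [(0, 4), (4, 4), (8, 4), (12, 4)]       # same data, one run

print(count_io_ops(fragmented))  # 4 I/Os
print(count_io_ops(contiguous))  # 1 I/O
```

The same logic applies to subsequent reads: data laid down contiguously can be read back sequentially, which is exactly the behavior IntelliWrite is described as promoting.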
“The DRAM read caching has been greatly enhanced in V-locity 6.0 by focusing on serving only the smallest, random I/O,” Morin said. “While that engine has been in V-locity for three years, it was a massive engineering breakthrough with the caching algorithms that created the 3x performance increase.”
Morin said this involved rethinking how best to use DRAM, a cache that is limited in capacity but enormously fast. By focusing on cache effectiveness rather than simply the number of cache hits, V-locity determines the best use of DRAM for caching purposes. It collects a wide range of data points – storage access, frequency, I/O priority, process priority, types of I/O, nature of I/O (sequential or random), and time between I/Os – and then leverages its analytics engine to identify which storage blocks will benefit most from caching.
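Condusiv has not published its algorithm, but the idea of scoring blocks by expected benefit rather than raw hit counts can be sketched with a hypothetical heuristic: weight each block by access frequency and randomness, penalize large I/O sizes (since DRAM capacity is scarce), and greedily fill the cache with the highest-scoring blocks. The weights and field names here are illustrative assumptions, not Condusiv’s.

```python
# Hypothetical sketch (not Condusiv's algorithm): rank blocks for a
# capacity-limited DRAM cache by expected benefit per byte of DRAM.
from dataclasses import dataclass

@dataclass
class BlockStats:
    accesses: int     # how often the block is read
    io_size: int      # bytes per read
    is_random: bool   # random I/O benefits more from caching than sequential

def benefit_score(b: BlockStats) -> float:
    # Small random reads are costliest on backend storage, so they earn
    # the highest score per byte of DRAM consumed. Weights are assumed.
    randomness_weight = 2.0 if b.is_random else 0.5
    return b.accesses * randomness_weight / b.io_size

def pick_blocks_to_cache(stats: dict, dram_bytes: int) -> list:
    """Greedily admit the highest-scoring blocks until DRAM is full."""
    chosen, used = [], 0
    ranked = sorted(stats, key=lambda k: benefit_score(stats[k]), reverse=True)
    for block in ranked:
        if used + stats[block].io_size <= dram_bytes:
            chosen.append(block)
            used += stats[block].io_size
    return chosen

stats = {
    "a": BlockStats(accesses=100, io_size=4096, is_random=True),    # hot 4K random
    "b": BlockStats(accesses=100, io_size=65536, is_random=False),  # 64K sequential
    "c": BlockStats(accesses=10, io_size=4096, is_random=True),     # cool 4K random
}
print(pick_blocks_to_cache(stats, dram_bytes=8192))  # ['a', 'c']
```

Note how the hot sequential 64K block loses out to even a lightly used 4K random block: per byte of cache, the small random I/O removes more load from the storage path, which matches Morin’s point about targeting “the problem I/O.”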
“People think they need a big pool of caching to service as many cache hits as possible close to the server,” he said. “DRAM isn’t for servicing ALL I/O. At 3-4 GB per VM, it isn’t big enough. But it is big enough to service the problem I/O – that small random I/O. In 6.0 we use the DRAM to target the problem I/O, by using your DRAM, the fastest storage media, to target the I/O that steals your bandwidth and creates a lot of noise. That’s what gives the performance gains.” Iometer testing indicates that these gains are 3.6X when processing 4K blocks and 2.0X when processing 64K blocks.
“We completely started over and rewrote the algorithm to target the kind of I/O that causes the most noise and churn,” said Rick Cadruvi, Condusiv’s chief software architect. “We were hoping doing this would produce a 50 per cent performance increase, and instead we got a 300 per cent increase. Sometimes you get lucky.”
V-locity 6.0 will ship in the first week of July.