Yes, it’s that time of the year again… time to think about submitting a session proposal for one or both of the VMworlds. Do you like to present? Are you active in your local VMUG? Do you want to share your story, your experiences, or your case study with the community? Have you integrated VMware solutions and technologies in an innovative or unconventional way? Do you have a compelling case study to share with VMware customers? If so, the Call for Papers is now open and we invite you to share your submission with the greater VMware community!
Before you begin, please review the Abstract Submission Guidelines for tips on preparing your titles and descriptions; this will help you avoid the common pitfalls that lead to abstract rejections. The competition is stiff, so please be thoughtful and thorough when crafting your proposal. When ready, submit your VMworld 2015 session abstract here. All proposals must be submitted by April 28, 2015. Sorry, no exceptions. If your session is chosen, you’ll earn our gratitude, respect from your peers, and a discounted or free conference pass.
Not sure where to start? Browse these key areas of the 2015 agenda.
- Software-Defined Data Center
- Software-Defined Data Center General
- Cloud Infrastructure
- Software-Defined Storage and Business Continuity
- Networking and Security
- Operations Transformation
- Virtualizing Applications
- Hybrid Cloud
- vCloud Air
- vCloud Air Network
- End-User Computing
- Desktop and Applications
- The Mobile Enterprise
- Mobility Perspectives and Solutions
PernixData FVP 2.5 was released about a week ago: a new version of a very interesting tool that can be used to offload IO from your storage device to your vSphere hypervisor. So far, so good… or is it? What does it do for you, and when should you use this technique? What pain points can it solve? Which business requirements can be met by deploying FVP 2.5?

First of all, Frank Denneman has written a series of brilliant articles about memory, explaining UMA, NUMA, memory subsystems, DDR4, and how to optimize memory for performance (http://frankdenneman.nl/2015/03/02/memory-deep-dive-summary/). I mention this because of the importance of memory in offloading IO from your storage array to your hypervisor. He has also written in detail about the new developments in FVP 2.5, so I am not going to blindly repeat that; just read what he wrote. As a senior employee of PernixData, he can tell you more, and better, about the new 2.5 version than I can. I’ll just state some highlights here.

With 2.5, PernixData introduces Distributed Fault Tolerant Memory (DFTM): the ability to store replicas of write data on the flash or RAM acceleration resources of other hosts in the cluster. DFTM allows for seamless hot-add and hot-shrink of the FVP cluster with RAM resources. Furthermore, FVP 2.5 offers intelligent IO profiling and role-based access control (RBAC).
So what does PernixData FVP basically do? By taking IO processing away from the storage array and moving it to the hypervisor, it lets the storage array do what it does best… offer storage. At the same time, it brings IO to where it is needed most: the virtual machine. Why else would we insert flash read/write cache cards into vSphere hosts running VMs with a high IO profile? With FVP 2.5 you can use local SSDs, PCIe flash cards such as SanDisk Fusion-io cards, and local memory. As you can see in the graph below, memory is far faster, so it can handle more IO, at least theoretically.
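To give a feel for why memory can theoretically handle so much more IO, here is a quick back-of-envelope sketch. The latency figures are rough ballpark assumptions of mine (not vendor specs, and not from the graph): if each IO had to complete before the next one starts, the upper bound on IOPS is simply one divided by the latency.

```python
# Rough, assumed per-IO latencies for different device classes.
# These are illustrative ballpark figures only.
LATENCY_SECONDS = {
    "spinning disk": 5e-3,    # ~5 ms (seek + rotational delay)
    "SATA SSD":      100e-6,  # ~100 microseconds
    "PCIe flash":    20e-6,   # ~20 microseconds
    "DRAM":          100e-9,  # ~100 nanoseconds
}

def serial_iops(latency_s: float) -> int:
    """Theoretical upper bound on IOs per second if IOs are strictly serial."""
    return round(1 / latency_s)

for device, latency in LATENCY_SECONDS.items():
    print(f"{device:>13}: ~{serial_iops(latency):,} IOPS")
```

Real arrays and flash devices parallelize heavily, so absolute numbers will differ, but the orders-of-magnitude gap between spindles, flash, and RAM is the point.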
With this added technology, which integrates seamlessly with the vSphere hypervisor, I can now use cheaper disks in my array without losing performance. I am less dependent on the amount of IO my spindles can handle, because IO is managed in the hypervisor. And with FVP 2.5 the cached IO does not only live on the host where it is generated; it is also cached on one or two other servers in the same FVP-enabled cluster. So if my host goes down for whatever reason, the cached IO is not lost. VMware HA will restart the VMs, and FVP will get the IO to the corresponding VMs on their new hosts as fast as possible, increasing high availability and reducing the risk of data loss.
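The general idea behind that fault tolerance can be sketched in a few lines. This is emphatically not PernixData code; FVP is proprietary, so every class and method name below is invented for illustration. The pattern it shows is the generic one the paragraph describes: a write is acknowledged to the VM only after it also lands on the caches of one or more peer hosts, so a single host failure cannot lose an acknowledged-but-unflushed write.

```python
# Conceptual sketch of fault-tolerant write-back caching with peer replicas.
# All names (Host, WriteBackCluster, etc.) are hypothetical, for illustration.

class Host:
    """A cluster host with a local acceleration cache (RAM or flash)."""
    def __init__(self, name: str):
        self.name = name
        self.cache: dict[str, bytes] = {}

class WriteBackCluster:
    """Acknowledge a write only after it exists locally plus on a
    configurable number of peer replicas."""
    def __init__(self, hosts: list[Host], replicas: int = 1):
        self.hosts = hosts
        self.replicas = replicas

    def write(self, local: Host, block_id: str, data: bytes) -> bool:
        local.cache[block_id] = data
        peers = [h for h in self.hosts if h is not local][: self.replicas]
        if len(peers) < self.replicas:
            return False  # policy cannot be met; do not acknowledge
        for peer in peers:
            peer.cache[block_id] = data  # replicate before acknowledging
        return True  # safe to acknowledge to the VM; destage to array later

    def recover(self, failed: Host) -> dict[str, bytes]:
        """After a host failure, surviving replicas still hold the data."""
        surviving: dict[str, bytes] = {}
        for h in self.hosts:
            if h is not failed:
                surviving.update(h.cache)
        return surviving
```

With three hosts and one replica, a write accepted on host one survives that host's failure because host two still holds the block, which mirrors how HA-restarted VMs can be served their cached IO on a new host.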