5 Questions I Ask Every Customer about their VMware Backup Strategy

I can’t think of a technology platform that has provided better APIs than VMware’s VADP framework. While it has had the occasional annoying bug, overall the APIs made VMware VM backup simple, easy, and efficient. It’s no wonder a huge number of backup vendors all claim features like application-consistent, incremental-forever backup (using VMware Changed Block Tracking (CBT)) and instant recovery of VMs. There really is very little competitive difference between these products now: all 25+ vendors are calling the same library for CBT capture. What really matters, in terms of cost and RTO, is where and how you store that backup data and metadata. In this cloud era, it’s hard not to consider the cloud as a target for backups, to reduce both data centre footprint and the cost of backup and DR. Here are 5 simple questions I ask all of my current and prospective customers to consider when thinking about their VMware backup strategy.

1. Why have local on-premises backup copies? Why not back up directly to the cloud?

Not all data is born equal. Multiple studies have shown that it’s important to tier your VMs and then apply backup and retention policies accordingly. So for all the Tier-2 VMs, which typically constitute anywhere between 40% and 70% of an estate, what if you could eliminate the local copy and back up directly to cloud object storage like AWS S3, S3-IA, Azure Blob, Google Nearline, or IBM COS? They all offer 11 nines of durability across three availability zones. It costs less, and there is no capacity management: you don’t have to scramble for storage when you add the next 100 Tier-2 VMs. There is zero operational burden with this approach. Obviously, this is not an ‘all or nothing’ choice. For your Tier-1 VMs you might still have a requirement, or a bandwidth constraint, that calls for a local cache/backup copy in your data centre, alongside a second backup copy in cloud object storage.

2.
How long are your restores taking today? Is that acceptable as your data grows?

If you like the approach of leveraging cloud object storage, your next thought will obviously be: “What about the recovery time objective (RTO)?” Most backup products, unfortunately, take a long time to recover from their deduplicated backups stored in cloud object storage. For some reason the catch cry is still about the “backup industry”, but I’ve been calling out that the priority is wrong: it should be called the “recovery industry”. We back up so we can recover! That is what a business is really after when it invests in a backup solution, and more often than not it can’t afford to wait hours or days to get its critical data back from a dedup engine or tape. Object storage can solve two pain points here: it is effectively infinite in scale, yet very quick to mount data back from. Couple it with a next-generation backup product that writes the data in its native application format, and you start to fix a lot of the legacy issues that come from a backup mindset rather than a recovery mindset. Recovering that 10 TB VM or SQL/Oracle database is just a few minutes away now. It’s a game changer, people… seriously!

3. If you are restoring VMs from the cloud, are you concerned about the egress costs?

Optimise every bit that moves. One of the concerns enterprises raise is the egress charge for moving data from the cloud back to on-premises. Let’s explore, with an example, how much this would cost on a monthly basis. Assume you have 1,000 VMs being protected, and that on average 20 file/folder restore jobs are performed per week, i.e. roughly 80 restore jobs a month. Assume an average of 100 MB of files is restored in each job. This translates to 80 x 100 MB = 8 GB of total data restored from the cloud. Assuming you use AWS S3-IA (Infrequent Access), retrieval is charged at $0.01 per GB, so the data retrieval charges come to $0.08 per month.
AWS also charges for data that leaves its cloud, at a rate of $0.09 per GB, which translates to $0.72 per month. The total cost is therefore $0.08 + $0.72 = $0.80 per month, which is obviously very low. Now let’s look at a scenario where 20 whole VMs are recovered from cloud object storage back to on-premises. Assume an average VM size of 200 GB: the total data transferred is 20 x 200 GB = 4,000 GB, and the total transfer charge is ($0.01 + $0.09) x 4,000 GB = $400 for the entire month. The good news is that even this modest amount can be reduced further. Consider a next-generation approach, where data that already exists on-premises is not recovered at all. A “delta block differencing” feature looks up its metadata to work out which blocks already exist in the local backup cache on-premises, and transfers from the cloud only the blocks that don’t. In the example above, if 40% of the blocks already exist on-premises, only 2,400 GB is copied from the cloud, reducing the transfer cost to $240.

4. Do you require any data immutability capability at the software and cloud storage layers?

Was it a fat finger or a rogue internal user? One legitimate concern enterprises have is that of a rogue or malicious user who could delete backups. What if, at the software layer, an admin could apply a data immutability lock on the backups of specific VMs? Once applied, even an admin cannot expire or purge the backups for those VMs. You can still manage the TCO for disk by setting an expiration date for the backup data as per the original required policy, or elect to never expire it, without worrying about a rogue admin or a fat finger.

5. Do you like buying hardware appliances? Why not software only, or a SaaS platform?

“The world we have created is a process of our thinking.
It cannot be changed without changing our thinking.” — Einstein

Many enterprises are getting used to the “as-a-Service” consumption model through exposure to G Suite, Office 365, Salesforce, GitHub, AWS RDS and Redshift, and the likes of VMware Cloud on AWS. So they are also questioning the need to purchase hardware appliances for everything: not only backup appliances, but their production compute platforms too. Why not consider a VMware backup SaaS platform? Why not consume VMware backup and recovery to the cloud under a simple per-VM subscription pricing model?
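The arithmetic in question 3 can be wrapped up in a few lines of code. The sketch below uses the per-GB rates quoted above ($0.01/GB S3-IA retrieval, $0.09/GB egress) as assumptions; real AWS pricing varies by region and changes over time, and the `restore_cost` helper and its `local_hit_ratio` parameter are illustrative names for this post, not any vendor’s API.

```python
# Illustrative restore-cost model using the per-GB rates quoted in the post.
# These rates are assumptions for the example, not authoritative AWS pricing.
RETRIEVAL_PER_GB = 0.01   # S3-IA data retrieval
EGRESS_PER_GB = 0.09      # data transfer out of AWS

def restore_cost(gb_restored, local_hit_ratio=0.0):
    """Cost of pulling `gb_restored` GB back on-premises.

    `local_hit_ratio` models delta block differencing: the fraction of
    blocks already present in the local cache that need not be transferred.
    """
    gb_moved = gb_restored * (1.0 - local_hit_ratio)
    return gb_moved * (RETRIEVAL_PER_GB + EGRESS_PER_GB)

# Scenario 1: 80 file/folder restore jobs x 100 MB = 8 GB
print(round(restore_cost(8), 2))           # 0.8
# Scenario 2: 20 VMs x 200 GB = 4,000 GB
print(round(restore_cost(4000), 2))        # 400.0
# Scenario 2 with 40% of blocks already on-premises
print(round(restore_cost(4000, 0.4), 2))   # 240.0
```

The point of the `local_hit_ratio` knob is that egress cost scales with bytes actually moved, not bytes logically restored, which is exactly the lever delta block differencing pulls.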

Updated Post – vSphere ESXi 6.0 CBT (VADP) bug that affects incremental backups / snapshots.

VMware recently posted a new KB article, KB 2136854, to publicise the issue. It’s great that this has finally been accepted and advertised to customers and partners. It’s important to note that this is not the same issue as the one also recently reported for ESXi 6.0 (KB 2114076), which is now fixed in a re-issued build of ESXi 6.0 (Build 2715440), but it is very similar to KB 2090639 from a historical perspective.

The Issue

If you are using a product that leverages VMware’s VADP for backup, then chances are you are using it not just for initial fulls but for regular incremental snapshots (for backup purposes). Numerous products on the market leverage this API; it is virtually the industry standard, as it results in much faster backups. When the incremental changes are requested through the API (QueryChangedDiskAreas), the API is asked for the changed blocks, but unfortunately some of the changed blocks are not being reported correctly in the first place, so backup data is essentially missing. Backups based on this can be inconsistent when recovered and result in all sorts of problems.

The Challenge

Currently there is no resolution or hotfix for the issue from VMware. I hope we will see something in the coming days, given the wide-ranging impact on customers and the partner products affected.

The Workarounds

The KB suggests the following workarounds:

1. Do a full backup every time. That will certainly work, but it’s not really a viable fix for most customers.
2. Downgrade to ESXi 5.5 and the virtual hardware back to version 10 (ouch!).
3. Shut down the VM before doing an incremental.

From the testing we have done at Actifio, option 3 doesn’t actually provide a workaround either, and options 1 and 2 aren’t exactly ideal.

The Discovery

When Actifio engineers discovered the issue, we contacted VMware and proved the problem using just API calls to demonstrate where it was. How did we discover the issue, I hear you ask?
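Before getting to the answer, it helps to see why under-reported changed areas are so dangerous. The toy model below is not VMware code (the real mechanism is the QueryChangedDiskAreas API operating on disk extents); it simply simulates an incremental capture driven by a reported change list, and shows how a missing extent silently leaves stale data in the backup copy.

```python
# Toy model of CBT-style incremental backup (illustrative only; the real
# mechanism is VMware's QueryChangedDiskAreas API, not this code).

def incremental_backup(backup, source, reported_changes):
    """Copy only the (offset, length) extents the API reported as changed."""
    for offset, length in reported_changes:
        backup[offset:offset + length] = source[offset:offset + length]

# Initial full backup of a tiny "disk" of 16 blocks.
source = bytearray(b"A" * 16)
backup = bytearray(source)

# The VM writes to blocks 4-7 and 12-13...
source[4:8] = b"BBBB"
source[12:14] = b"CC"

# ...but the buggy API reports only the first extent as changed.
incremental_backup(backup, source, reported_changes=[(4, 4)])

print(backup == source)   # False: blocks 12-13 are stale
print(backup[12:14])      # bytearray(b'AA') -- silently missing data
```

The nasty part is that nothing fails at backup time: the job completes “successfully”, and the corruption only surfaces when you try to recover.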
Well, we managed to discover the issue via our patented fingerprinting feature, which runs after every backup job. This feature has essentially learnt not to trust the data we receive (history has proven it useful many times) but to verify it against both our copy and the original source. If we see a variance in any way, we trigger an immediate full read-compare against the source and update our copy. This works like a full backup job, but doesn’t write out a complete copy again; it just brings our copy back in line with the source (as we like to save disk where we can!). We’ve seen this occur from time to time across our many different capture techniques (not just VADP), so it’s a worthy bit of code, to say the least, that our customers benefit from.

Let’s hope there’s a hotfix on the near horizon, so the many VADP/CBT vendor products that rely on it can get back to doing what we do best: protecting critical data for our customers that can be recovered without question.

Cheers

Patch Update – 24th November 2015

Our team has received a pre-release version of the patch, and it looks good from our initial testing. We expect this patch/hotfix to be released to the public on or around the 27th of November, which is good news for all those with ESXi 6.0 deployed or close to deploying it.

Patch Released – 25th November 2015

VMware have released the patch – it is available here: http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2137545
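As a footnote, the verify-then-repair idea described under The Discovery can be sketched roughly as follows. This is a simplified illustration of the concept, not Actifio’s actual implementation; the block size, helper names, and SHA-256 fingerprint are all assumptions made for the example.

```python
import hashlib

BLOCK = 4  # toy block size; real systems use much larger extents

def fingerprint(data):
    """Cheap whole-image fingerprint (SHA-256 here, as an assumption)."""
    return hashlib.sha256(bytes(data)).hexdigest()

def verify_and_repair(copy, source):
    """Post-backup check: if fingerprints differ, do a full read-compare
    against the source and patch only the blocks that are wrong, rather
    than rewriting the whole copy. Returns the number of blocks repaired."""
    if fingerprint(copy) == fingerprint(source):
        return 0  # backup verified, nothing to do
    repaired = 0
    for off in range(0, len(source), BLOCK):
        if copy[off:off + BLOCK] != source[off:off + BLOCK]:
            copy[off:off + BLOCK] = source[off:off + BLOCK]
            repaired += 1
    return repaired

source = bytearray(b"AAAABBBBCCCCDDDD")
copy = bytearray(b"AAAAXXXXCCCCDDDD")   # one stale block from a bad incremental
print(verify_and_repair(copy, source))  # 1 block repaired
print(copy == source)                   # True
```

The design choice worth noting is that the verification pass reads like a full backup but writes like an incremental: only the divergent blocks are rewritten, which is how a safety net like this stays affordable on disk.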