**Say What???**
OK, so I’ll start by saying that I’m taking a little bit of marketing liberty here… but please let me explain further. It might save you a bundle of cash.
Traditional and so-called “next-generation” VMware backup solutions, while living in a Software-Defined world, drag a heap of hardware sales with them (think storage, compute and networking). While that is still valid and required for many customers, with ubiquitous access to good communications infrastructure these days (WAN/Internet throughput), it’s becoming less of a requirement, and for many customers it’s no longer required at all.
So how is this achieved?
a) Object Storage – This could be considered infrastructure by some, but for the purpose of this blog post, I am referring to infrastructure as something you need to purchase, install into your premises or Data Centres, and then continue to manage over time until it reaches its end of life in, say, 5-7 years. Object Storage in a public cloud is only partially comparable to the on-premises infrastructure most people purchase today: it can be classified as storage (infrastructure-like), but you don’t need upfront capital (CapEx) budget to consume it, you pay as you go, it doesn’t suffer a lifecycle refresh cost every 5-7 years, and if you cease your use of it or shrink down, you can just pay less, or stop paying, full stop.
b) Good communication links – In my discussions with many enterprise and mid-sized businesses today, I am hearing a lot more about building zero-trust networks for new office spaces, along with a rapid shrinking of compute resources in the on-premises comms cabinet or local Data Centre. What is a zero-trust network, I hear you ask? Think of an office location with just a link to the internet: no local servers or complex infrastructure, a couple of printers maybe, and all end-user devices simply jump on the internet link in that office (most likely Wi-Fi) and consume enterprise SaaS and hosted applications over the zero-trust network. Nothing in that site is assumed to be behind a corporate firewall; all devices are effectively isolated and act as if they are on a public network. Zero-trust networks help reduce onsite complexity and shrink costs to some degree, as there are fewer requirements for local infrastructure, while end users can work from the office or home with equal access to corporate resources, all secured and protected using VPNs, two-factor authentication, etc. If you are on a path towards zero-trust networks, then considering infrastructure-less backup should also be an important stepping stone on that path to reducing your on-premises infrastructure.
Reliable and low-latency communications links are now readily available in most metro locations, at very reasonable cost in most countries and major cities. In my region of Australia/New Zealand, where enterprise internet costs were extremely excessive years back, links are now very reliable, fast and reasonably priced.
So, when mixing the benefits of good on-premises communications with object storage in a public cloud, you now have a very reliable offsite backup target where incremental-forever backup data can be stored (make sure you are leveraging encryption in flight and at rest).
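To illustrate why incremental-forever works so well against a remote object-storage target, here is a toy sketch of the idea: fingerprint each block of a disk image, then ship only the blocks whose fingerprints changed since the last run. This is purely illustrative (real changed-block tracking works on MB-sized extents inside the hypervisor, not 4-byte blocks), and all names here are my own invention, not any vendor’s API.

```python
import hashlib

BLOCK = 4  # toy block size in bytes; real changed-block tracking uses MB-sized extents


def block_hashes(data: bytes) -> list:
    """Fingerprint each fixed-size block of a disk image."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def changed_blocks(prev_hashes: list, data: bytes) -> list:
    """Return indices of blocks that differ from the previous backup."""
    current = block_hashes(data)
    return [i for i, h in enumerate(current)
            if i >= len(prev_hashes) or prev_hashes[i] != h]


baseline = b"AAAABBBBCCCC"          # initial full backup: every block goes offsite
prev = block_hashes(baseline)
today = b"AAAAXXXXCCCC"             # only the middle block changed overnight
print(changed_blocks(prev, today))  # → [1]: ship just block 1 to object storage
```

After the first full, each nightly run only needs to push the changed blocks over the WAN, which is why a modest internet link is often enough.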
With some of the most innovative data protection products on the market, there are now very few reasons that actually warrant the purchase of large swathes of on-premises storage and compute for your data resiliency requirements. It’s the perfect time to ask the following questions when reviewing your existing or future strategy to reduce costs.
Do you have a reliable and fast internet or direct-connect link already? If not, some on-premises disk might still be of value to locally stage your backups, but check whether the cost of local disk, rack, power and cooling is higher than the cost of a faster internet link plus object storage over time.
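That cost check is simple back-of-envelope arithmetic. Here is a minimal sketch of the comparison; every figure below is a placeholder assumption of mine, not a quote — substitute your own pricing before drawing any conclusion.

```python
# Back-of-envelope five-year cost comparison.
# All numbers are illustrative assumptions -- replace with your own quotes.

YEARS = 5
capacity_tb = 100

# On-premises backup target (CapEx plus running costs)
array_capex = 120_000           # assumed purchase price of the disk array
rack_power_cooling_py = 6_000   # assumed per-year rack/power/cooling overhead
onprem_total = array_capex + rack_power_cooling_py * YEARS

# Cloud object storage over a faster link (pure OpEx)
storage_per_tb_month = 10       # assumed object-storage rate per TB per month
link_uplift_pm = 500            # assumed extra per-month cost of a faster link
cloud_total = (capacity_tb * storage_per_tb_month * 12 * YEARS
               + link_uplift_pm * 12 * YEARS)

print(f"on-prem: ${onprem_total:,}  cloud: ${cloud_total:,}")
# → on-prem: $150,000  cloud: $90,000 (with these illustrative inputs)
```

The point isn’t the exact numbers — it’s that the cloud side has no refresh cliff at year 5-7, and shrinks if your dataset does.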
Do you hate spending CapEx budget on disk arrays? If you already have good links in place, do you really need that backup target disk array from your preferred storage provider, or worse, from a software-defined backup vendor selling you a heap of their expensive disk? When you go down this path, it’s a lay-up to proprietary lock-in (basketball term), which means you will have to manage this disk out in ~5-7 years’ time. What if the dedup calculations are wrong (have they ever got it right?), and you end up needing far more disk than expected? This is an all-too-familiar story I hear from disgruntled customers.
Do you hate the recovery speeds from proprietary dedup appliances that are expensive to purchase? Dedup does not deliver a great RTO, or RPO for that matter. It’s ideal for short-to-medium-term retention, and dangerous for long-term retention (lock-in to a vendor’s proprietary system). If you need rapid RTO from a historical dataset, consider looking at native-format backup solutions that leverage object storage as the target. The high spindle counts and often flash-backed object storage deliver some really fantastic RTOs that need to be seen to be believed. How long does it take you to retrieve a 5TB dataset from 2 years ago? Can all your VMs, filesystems and data be accessible in a matter of minutes? On a dedup appliance the RTO will not please you, as there’s a good chance you need to wait for a full rehydration of the dataset, and don’t forget the hidden egress charges if you want to recover that data back on-premises.
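Before committing to any recovery strategy, it’s worth doing the raw transfer arithmetic for that 5TB example. The sketch below is pure link-speed maths — it assumes the network is the bottleneck and ignores rehydration time, protocol overhead and egress throttling, all of which only make things slower.

```python
# How long to pull a 5 TB restore over various link speeds?
# Best case: assumes the link is the bottleneck and ignores protocol
# overhead, dedup rehydration time and any cloud egress throttling.

dataset_tb = 5
dataset_bits = dataset_tb * 10**12 * 8  # decimal terabytes converted to bits

for mbps in (100, 1_000, 10_000):
    hours = dataset_bits / (mbps * 10**6) / 3600
    print(f"{mbps:>6} Mbps link: {hours:6.1f} hours")
```

Even at 10 Gbps a full 5TB pull takes over an hour of wire time alone — which is why native-format solutions that let you run or mount workloads directly against the backup copy, rather than rehydrating everything first, change the RTO conversation.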
Is your business looking at encryption options for on-premises VMs and databases? This conversation is occurring more frequently as customers get cloud-savvy and look to consume infrastructure in an agile manner. Are you starting to look at ways to consume Dev/Test in public clouds (where encryption is becoming mandatory from a security standpoint), or to enhance your Disaster Recovery plan with a public cloud (saving on a secondary Data Centre)? If these are considerations for you, then as mentioned above, a traditional or even so-called next-generation backup solution is not going to help your bottom line. Encrypted data will normally end up looking like a complete full backup of your dataset each day, so look forward to spiralling costs for your dedup pool growth, not to mention more license costs and longer recovery times. I would happily argue that many so-called next-generation backup vendors are already running you on previous-generation ideas and not keeping pace with modern requirements. So ask the hard questions now; continuing down the same path is fraught with danger.
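Why does encryption wreck dedup ratios? Properly encrypted data uses fresh key material (IVs) on every run, so the same unchanged guest data produces completely different ciphertext each day, and the dedup appliance fingerprints the ciphertext. The toy simulation below stands in for real encryption with a random one-time keystream — not real AES, just enough to show the effect.

```python
import hashlib
import os


def encrypt(block: bytes, keystream: bytes) -> bytes:
    """Toy stand-in for real encryption: XOR with a per-run random keystream.
    Real ciphers behave the same way for dedup purposes: a fresh IV means
    identical plaintext yields unrelated ciphertext."""
    return bytes(b ^ k for b, k in zip(block, keystream))


block = b"same guest-OS data, unchanged between backup runs"
day1 = encrypt(block, os.urandom(len(block)))  # Monday's run, fresh key material
day2 = encrypt(block, os.urandom(len(block)))  # Tuesday's run, fresh key material

# The dedup appliance only ever sees -- and fingerprints -- the ciphertext:
print(hashlib.sha256(day1).hexdigest() == hashlib.sha256(day2).hexdigest())  # → False
```

Every block arrives looking unique, so the dedup pool grows as if you were taking a full backup every day — exactly the spiralling cost described above.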
So how do I summarise the points I am trying to get across? It’s relatively simple: diskless (infrastructure-less) backup solutions for VMware are now available in the market. Combined with good communications links (to get the data offsite in a timely fashion), they greatly reduce your costs, improve your RTO, and most importantly avoid long-term retention and licensing lock-in pain.
When I present this challenge to customers, many are a little perplexed, assuming there has to be a hitch or hidden cost. While you do still need some software (which requires a little disk and compute) running on-premises to move the data to the cloud, there should be nothing stopping you giving this a go for yourself, as it is all upside in the longer term.
You can be up and running in about 1-2 hours, and benefiting from a true next-generation solution by the end of the day. Take a look around at the market: the game is changing, and most vendors are still playing checkers while others are winning at chess. Feel free to drop me a line or hit me up on Twitter if you’d like to know more.
Cheers and Beers,