Are people really repatriating from the Public Cloud and why?
Unfortunately, this is a reality. There has been a lot of buzz recently about organisations repatriating from the Public Cloud back to their on-premises data centres. This article looks at why this happens and how you can avoid it. There are two reasons companies move workloads back from the cloud: cost and value. The scenario plays out like this: they went to the Public Cloud, costs are rising far above expectations, and it's becoming a resume-generating event, which means someone's about to get fired. Wasn't the cloud supposed to be cheaper!? This usually results from a lift and shift where the cloud bill has rapidly doubled beyond control and expectations. They expected to save money, but they haven't. Organisations adopting a hybrid approach are also still paying for their data centre, so they can bring workloads back to save money.
Money vs Value
The issue here is that money, or cost savings, is the main metric of cloud success for most companies. It really should be value. The challenge is: did you take something that has a fixed cost and put it somewhere with unlimited spend but not unlimited value? This is especially true for an internal application. For the bill, the sky is the limit, but the value is fixed. The cost climbs too high, and the value isn't there. The public cloud needs significant analysis and planning because it doesn't work for every workload.
Like an episode of Shark Tank
During the assessment phase we need to think about who our real customers are. That is, who is going to use the service and get benefit from it? Imagine role playing it like an episode of "Shark Tank", where you get asked questions like "how much does it cost you to make?" and "how much do you sell it for?". Unfortunately, most of us simply don't know the answers to these questions about our application portfolios. That is, "what is the real cost to run it?" and "how much money do we make from this application?" If you can't answer these, you won't know the value you're getting (or not getting) in the cloud. In general, internal applications don't gain value from unlimited scale.
How do you quantify value for an internal application?
This is a hard question to answer because it is different for each organisation. For some organisations it is about security; for others it is customer base or bottom-line impact; for others it is the cost to run, or the revenue generated. Most people simply don't know, so we usually default to the application with the most servers or the one with the most technical debt. A common example is the finance system. The issue is that the way we normally determine value in the enterprise is: if we turn it off, how loud are the screams?
What about higher level services?
The higher-level services are very glamorous, but most architects can't articulate the value they're getting from them, whether it's PaaS, CaaS or FaaS. What most public cloud vendors are really selling is CaaS: Confusion as a Service. There are so many new technologies constantly coming out (containers, functions, Kubernetes and on and on) that they don't know what to pick. So they do what's popular or cool and wing it. A lot of architects say their strategy is "anything but IaaS", but it's not based on data or value; it's based on what's cool and what looks good on their CV. There's a perception that higher up the stack is better. It may be, but we still need to work out whether it adds more value.
Is it just VM’s that are coming back to the Data Centre?
Both lift-and-shift style virtual machines and cloud native workloads are being repatriated. This happens because the two most popular misconceptions about the Public Cloud are that it will be less secure and that it will be cheaper; in reality, the Public Cloud is more secure and also more expensive. Unfortunately, too much Kool-Aid has been consumed in Las Vegas! The cloud can be cheaper, but organisations need to change their behaviour. If you run something like you did in the data centre, 24 hours a day, and never turn it off or scale it down, it will be a lot more expensive. If you size it for peak, it will be more expensive. You have to right-size it.
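As a rough illustration of why always-on running erases the savings, here is a minimal sketch; the hourly rate and the business-hours schedule are hypothetical, not real vendor pricing:

```python
# Hypothetical hourly rate for an on-demand instance (not real pricing).
HOURLY_RATE = 0.20

# Running 24/7, exactly like in the data centre.
always_on_hours = 24 * 30            # 720 hours per month
always_on_cost = always_on_hours * HOURLY_RATE

# Same instance, shut down outside business hours (10 h/day, ~22 weekdays).
business_hours = 10 * 22             # 220 hours per month
business_cost = business_hours * HOURLY_RATE

print(f"Always-on:      ${always_on_cost:.2f}/month")   # $144.00/month
print(f"Business hours: ${business_cost:.2f}/month")    # $44.00/month
```

The instance and the work done are identical; only the behaviour changed, and the bill drops by roughly two thirds.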
What about elasticity?
A lot of people got burnt during COVID-19 because elasticity at the bottom is not the same as elasticity at the top. In theory your cost should go down as your usage goes down, but that's not always the reality: committed spend, reserved capacity and minimum footprints put a floor under the bill.
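A small sketch of that asymmetry, using a hypothetical committed-spend model (the figures are illustrative, not any vendor's pricing):

```python
# Hypothetical: a one-year commitment covers 500 usage units per month
# and sets a cost floor, with on-demand rates above it (not real pricing).
COMMITTED_MONTHLY = 1000.0
COMMITTED_UNITS = 500
ON_DEMAND_RATE = 2.0

def monthly_bill(usage_units: float) -> float:
    """Bill scales up freely with usage, but never below the commitment."""
    overage = max(0, usage_units - COMMITTED_UNITS) * ON_DEMAND_RATE
    return COMMITTED_MONTHLY + overage

print(monthly_bill(800))  # usage above the commitment: bill rises to 1600.0
print(monthly_bill(100))  # usage collapses: bill stays at the 1000.0 floor
```

When demand spiked, the bill followed it up immediately; when demand collapsed, the bill only fell back to the committed floor.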
How do we actually save money in the Public Cloud?
Firstly, there's a lot of specialisation in the cloud, with a huge array of instance types designed for specific workloads: CPU-optimised chipsets, memory optimised, storage optimised, GPUs, machine learning and many more. Time needs to be spent identifying the right one for your application. If you take the time before migrating, you can really optimise your footprint. Don't just pick M5s. To save money you have to either downsize from on-premises or run it for less time. It's that simple. Consider whether you can achieve what you need in half the time, or using half the resources, because the instance is so much more efficient or optimised for your workload. If your application can scale out, you can auto scale up or down according to utilisation. This requires right scaling, right sizing and right timing. If we can automate a lot of this, the outcome will be even better.
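The difference between sizing for peak and scaling with utilisation can be sketched in a few lines; the demand profile and per-instance rate below are made up for illustration:

```python
# Hypothetical instances needed for each hour of a day (24 values).
demand = [2, 2, 2, 2, 2, 3, 5, 8, 10, 10, 9, 9,
          9, 10, 10, 9, 8, 6, 4, 3, 2, 2, 2, 2]

HOURLY_RATE = 0.10  # hypothetical per-instance rate, not real pricing

# Sized for peak: run the maximum instance count all day, data-centre style.
peak_cost = max(demand) * len(demand) * HOURLY_RATE

# Auto-scaled: run only what each hour's utilisation actually requires.
scaled_cost = sum(demand) * HOURLY_RATE

print(f"Sized for peak: ${peak_cost:.2f}/day")   # $24.00/day
print(f"Auto-scaled:    ${scaled_cost:.2f}/day")  # $13.10/day
```

The workload served is identical in both cases; autoscaling simply stops paying for the idle headroom outside the peak.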
PaaS vs IaaS
There are tipping points: PaaS (Platform as a Service) is better than IaaS at the lower end of usage, but it's totally different at the top end, where it reverses. In some situations it can cost as much as 10 times what it would on-premises for things like database as a service. The public cloud vendors practically give it away, or make it really cheap, at the bottom end but really sting you as it grows; by then your data is in there, it's locked in, and you can't get out easily.
If you are interested in our other Cloud, Infrastructure and Security related Blogs please click here.