Both developers and IT Ops know that application performance comes at a price, paid for with either developer time or bigger infrastructure. The CliQr “App 1st” approach makes it fast and easy to make the right choice AND avoid lock-in.
Application performance can be improved by tuning code. For example, tweaking SQL queries can have a big impact on how the application performs in production. But that tuning work takes time and effort, and at some point hits diminishing returns.
Performance can also be improved by deploying more infrastructure resources. And every cloud service provider is more than happy to help move you up to the next bigger, more expensive instance.
You really need hard data about price and performance tradeoffs to effectively evaluate options and make good decisions. If you can read cloud rate cards (like this and this) and figure out which environment and which configuration is your application’s sweet spot – you have truly advanced cloud skills!
It isn’t always clear how much performance is improved by increasing the size of the underlying infrastructure resources. CPU, memory, network latency, and storage I/O speed all come into play. And the impact of bigger cloud resources on a content delivery network with web-tier content cached globally will be different than the impact of a larger instance size on a computation-intensive modeling app that needs massive compute resources for a two-day job.
Is spending the time needed to tune SQL worth it? Or is it better to bite the bullet and pay more to achieve the same result?
Is the price-performance sweet spot found by using fewer bigger instances, or by adding more smaller instances?
The “Model 1st” approach creates lock-in
One way to try to make informed decisions is what I’d call a “Model 1st” approach. This is where you spend time early in the project using spreadsheets or other analytical tools to create a theoretical performance model that predicts how the application will work in different clouds and with different resource configurations.
In effect, you pick your path. You code your application for a target cloud, and at the end of the project you are locked in. And what’s worse – you need to deploy to that target cloud just to find out if your model is correct. In essence – you have spent precious time on analysis which doesn’t add lines of code or accelerate time to market. And if your model is wrong, it is hard and expensive to change your mind.
There has to be a better approach
What if you could try before you buy, so to speak? What if you could avoid the theory, eliminate the non-value-added modeling and analysis effort, and simply deploy your app stack to multiple environments and see which one works best?
What if you could deploy your app stack on an AWS EC2 Compute Unit, a Microsoft Azure Core, and a Google Compute Engine – all at the same time – and compare side-by-side price performance for each?
“What?” you say. Is there a magical multi-cloud deployment tool that lives with the unicorns and mythical full-stack developers in cloudlandia?
The CliQr “App 1st” approach
No, but there is CliQr. What is the CliQr “App 1st” approach? It is where you skip the placement analysis and instead spend that precious time blueprinting the application in a drag-and-drop topology builder. Once published to the marketplace, you can deploy the application to any supported cloud. You can offer self-service on demand to users. You can change your mind and move an application if requirements or constraints change.
To optimize application performance and make data-driven placement decisions, CliQr CloudCenter has a benchmarking feature that one-click deploys your app stack to multiple clouds simultaneously and sends back a price-performance report.
You can deploy a stack to multiple cloud vendors, or to multiple zones within a single vendor, to get a price-performance curve for each. Or deploy a stack across multiple resource configurations on a single cloud.
CliQr delivers hard data – for example, that an extra-large resource configuration delivers a 20 percent performance improvement for an extra $100 per month.
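The arithmetic behind that kind of report is simple to sketch. Here is a minimal example, assuming hypothetical instance names, prices, and throughput numbers (none of these are real cloud prices or benchmark results):

```python
# Hypothetical benchmark results: monthly price (USD) and measured
# throughput (requests/sec) for two resource configurations.
# All numbers are illustrative, not real cloud pricing or benchmarks.
results = {
    "large":       {"price": 500, "throughput": 1000},
    "extra-large": {"price": 600, "throughput": 1200},
}

def price_performance(config):
    """Throughput delivered per dollar per month."""
    return config["throughput"] / config["price"]

base = results["large"]
big = results["extra-large"]

# How much more performance, and at what extra cost?
perf_gain = (big["throughput"] - base["throughput"]) / base["throughput"]
extra_cost = big["price"] - base["price"]

print(f"Performance gain: {perf_gain:.0%} for an extra ${extra_cost}/month")
print(f"large:       {price_performance(base):.2f} req/s per $/month")
print(f"extra-large: {price_performance(big):.2f} req/s per $/month")
```

With these made-up numbers the upgrade yields a 20 percent gain for an extra $100 per month – the kind of tradeoff the report surfaces. Note that both configurations deliver the same throughput per dollar here; the data tells you the cost of scaling up, but whether it is worth it is still your call.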
Have you seen the variation?
Don’t just take my word for it. GigaOM research from Paul Miller shows huge price-performance variation from cloud to cloud – and even a 70% variation zone to zone within a single cloud vendor, as seen in Figure 1.
Figure 1: Single cloud provider 70% price variation across zones
I strongly suggest you read this report if you think a theoretical modeling approach to picking the right cloud for your application, or to making performance-optimization decisions, gives you a better answer than a “try before you buy” approach.
With hard price-performance data, you still have to decide whether a 20 percent performance improvement is worth another $100 per month – or, given that tradeoff, whether spending time optimizing code to get the same improvement is worth the effort.
But CliQr makes it easy to find the sweet spot, with the data you need to make an informed decision. And it makes it easy to move your app if you change your mind.
Next week I’ll share an example using benchmarking to find the optimal home for a brownfield application.