I love buffets. (Especially when they are all you can eat.) But what I really like is the ability to choose what and how much I want. This is where I know cloud services become truly useful: when we don’t have to make an ‘either/or’ decision.
How work gets done has completely changed in just the last few years.
Fact is: business moves fast. When application developers or business leaders can’t get what they need the official way… it has never been easier to go around the IT department.
Public Clouds offer so much less friction… no PO required… so can you blame them?
Third party cloud providers are doing a great job providing attractive, easy to use services. They have earned that business.
Who cares how it splinters your own company… puts your data at risk… or makes it almost impossible to transition from development to production?
We offer a couple of options for you here at Cisco:
Get Tough. We have tools that can help you identify ‘shadow IT’, those rogue operations. Find them and put the hurt on ‘em. It’s against policy… you have the company rule book to back you up.
Address the Real Issue. Just give them what they want. They are following the path of least resistance… so make it easy.
We should all be thinking of ourselves as internal service providers. We have to compete and serve company interests viewing the world as it is, rather than as we wish it would be.
What is it my mom would always say?
“You can catch more flies with honey than with vinegar.”
So how would we go about doing that?
I suggest we look towards a few winners recently announced by The Software & Information Industry Association (SIIA):
Best Cloud Infrastructure: Cisco ONE Enterprise Cloud Suite
And for Best Cloud Management Solution: CliQr CloudCenter… now called ‘Cisco CloudCenter’ because the CliQr team is now part of Cisco, and these two winners are now integrated.
In this episode we uncover why these applications are winning awards, and what kind of pain we can help get rid of.
Thank you to TechWiseTV alum Joann Stark for bringing this one to market… and for introducing me to the smart and energetic Zach Kielich.
We will be doing a live workshop on this topic around August 18. Please subscribe to our Twitter feed (@techwisetv) and watch it for updates on where to register.
With an increased focus on exploiting a wider variety of business applications on the cloud and a broader choice of available cloud providers, enterprises need to focus on moving applications to the right cloud—not just any cloud or multi-cloud. Such a decision is often driven by factors that include the underlying cloud infrastructure’s capabilities, metrics such as availability and API reliability on the cloud, and compliance conditions including geographic location and security standards.
While these are important, a key input to this decision is the application’s price and performance across different cloud providers and cloud instance types. While the driving motivators for adopting clouds are often increased performance, scalability, and cost savings, an application’s price and performance on different clouds are the only true measures for evaluating the cause and effect of selecting the right cloud. Benchmarking clouds therefore cannot be a simple mathematical spreadsheet exercise. Any cloud benchmark must include key price, performance, and other application-centric metrics actually derived from the application being deployed and managed, in order to determine the “RIGHT” cloud for a given application.
Every cloud is built, sized and priced very differently, which means that application price and performance varies greatly on different clouds and different configurations within each cloud. Price-performance also varies by different application type, architecture, behavior and usage characteristics. The fact is, despite the market noise, until recently, the ability to easily and simultaneously benchmark price and performance of applications across disparate cloud environments did not exist.
Cloud infrastructures today do not provide application-level SLAs. Any capability, performance, and price data is limited to infrastructure components such as VMs, storage, and networking, and these do not translate directly to application price and performance.
Different clouds have very different underlying physical infrastructure components, such as CPU, network backbone, and storage types, as well as different virtualization stacks. Moreover, clouds are themselves variable environments, with significant variance in load over time. Differences in virtualization management, including variations in VM placement policies, may mean added differences in performance, not just between clouds but also over time within the same cloud. In the absence of transparency around VM instances and policies, it is not possible to accurately determine the differences in application performance on different clouds without migrating an application and testing its performance on each cloud option.
Moreover, cloud instances are “packaged” and priced very differently as well. Given the above lack of transparency about cloud instances and physical backbone, an apples-to-apples comparison based on infrastructure alone is not possible. For example, a “small” instance type on one cloud is rarely the same as a “small” instance type on another cloud. Will the vCPUs on both provide the same performance, or will an equivalently priced “medium” instance on yet another cloud provide an overall better price-performance trade-off? Or maybe it is network performance, not CPU, that matters for a particular application. Rolling up all the different cloud costs to estimate application costs is also not straightforward, as cost, performance, and instance definition and configuration are inextricably linked. Understanding these interdependent variables is what is required to understand application performance; and because of the cloud’s utility-based pricing model, better application performance may mean fewer infrastructure resources and hence lower pay-per-use costs. It is this type of empirical benchmarking that is required to make informed decisions on where to deploy an application on the cloud.
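To illustrate why rolling up cloud costs is not straightforward, here is a minimal sketch in Python. The cloud names, instance names, prices, and the simple three-part cost model (compute, storage, network egress) are all hypothetical; real cloud pricing has many more dimensions (reserved vs. on-demand capacity, tiered egress, provisioned IOPS, and so on).

```python
# Hypothetical hourly prices for two made-up clouds.
PRICING = {
    "cloud_a": {"medium": 0.10, "xlarge": 0.40, "storage_gb": 0.0001, "egress_gb": 0.09},
    "cloud_b": {"medium": 0.12, "xlarge": 0.35, "storage_gb": 0.00012, "egress_gb": 0.08},
}

def hourly_app_cost(cloud, tiers):
    """Roll up compute, storage, and network egress costs across all app tiers."""
    prices = PRICING[cloud]
    total = 0.0
    for tier in tiers:
        total += prices[tier["instance"]] * tier["count"]       # compute
        total += prices["storage_gb"] * tier["storage_gb"]      # storage
        total += prices["egress_gb"] * tier["egress_gb_hour"]   # network out
    return total

# A hypothetical two-tier web app: two app-server VMs plus one database VM.
web_app = [
    {"instance": "medium", "count": 2, "storage_gb": 50, "egress_gb_hour": 5},
    {"instance": "xlarge", "count": 1, "storage_gb": 500, "egress_gb_hour": 1},
]
```

With these made-up numbers, Cloud B’s medium instances are pricier per hour, yet the rolled-up application cost on Cloud B comes out lower than on Cloud A, which is exactly why comparing instance price lists in isolation misleads.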
Given all this, a plain infrastructure-to-infrastructure comparison is not an effective means to benchmark clouds for application price-performance. As an example, consider a multi-tier web application with a highly transactional database component and high I/O requirements between the application server and the database tier. Additionally, the application tier may be elastically scalable. A useful performance metric for such an application may be the number of requests it can handle per second, while a useful cost metric would be the total cost of all tiers combined, including storage, compute, and network costs. Moreover, one may want to test these metrics at different load settings to see how they change as the application scales. A cloud with a high-I/O network backbone, an SSD instance type for the database tier, and low VM spin-up times may provide better performance for such an application but at a high cost, while a different cloud with “standard” options might provide only slightly degraded performance at lower cost, for a better overall trade-off.
As a different example, consider a highly compute-intensive gene-sequencing application where gene-sequencing jobs may be processed by an elastic cluster. A useful performance metric for such an application may be the time to complete a gene-sequencing job while a useful cost-metric would be the total pay-per-run job cost.
Accordingly, here are four examples of real-world applications, each with a different architecture type and different infrastructure needs. While benchmarks can be run against any public or private cloud, for this study these applications were benchmarked across the following clouds, with different configurations of instance type and cluster size on each:
HP–HPCS standard.small and standard.2xlarge configurations.
Amazon–AWS m1.medium and m3.2xlarge configurations.
Google–GCE n1-standard-1 and n1-standard-8 configurations.
The findings of the benchmark study are described below for each application type. The charts on the left show application price on the x-axis and performance on the y-axis; the performance criterion can be throughput (number of requests per second) or the total time to complete a workload. The charts on the right show a price-performance index, a single normalized metric that shows which cloud and configuration option provides the best “bang for your buck”.
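A price-performance index of the kind shown in the charts can be computed very simply: divide measured performance by measured cost, then normalize so the best option scores 1.0. The sketch below assumes throughput (requests per second) as the performance metric; the cloud names and numbers are made up for illustration, not taken from the study.

```python
def price_performance_index(results):
    """Normalize throughput-per-dollar so the best option scores 1.0.

    results maps option name -> (throughput in req/s, cost in $/hour).
    """
    ratio = {name: perf / cost for name, (perf, cost) in results.items()}
    best = max(ratio.values())
    return {name: r / best for name, r in ratio.items()}

# Hypothetical benchmark measurements for one application:
measured = {
    "Cloud A xlarge": (900, 0.40),   # fastest, but expensive
    "Cloud B medium": (450, 0.12),   # half the speed at under a third the price
    "Cloud C medium": (500, 0.20),
}
index = price_performance_index(measured)
```

In this made-up example the fastest option does not win: Cloud B’s cheap medium instance delivers the most requests per dollar, which is the “bang for your buck” the index is meant to surface.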
Chart #1: Benchmark for three-tier Java Web Application with each tier running on a separate VM.
Chart #2: Benchmark for compute-intensive application run in parallel on a cluster.
Chart #3: Benchmark results for Hadoop job running on four nodes.
Chart #4: Benchmark results for high performance cluster computing job.
To summarize, the benchmark results for the four applications yielded the following recommended clouds based on application price-performance trade-off. Clearly, there is no single cloud instance that performs best for all types of applications.
Java Web App: Cloud A Extra Large Configuration
Parallel Processing Job: Cloud C Medium Config
Hadoop Job: Cloud C Medium with More Nodes
High Performance Cluster Computing Job: Cloud A/Cloud B (Cloud A Extra Large or Cloud B Medium with More Nodes)
As may be clear from these examples, real-world complex enterprise applications need more than a simple spreadsheet-based, back-of-the-envelope cost estimate and infrastructure-based performance analysis.
No wonder many enterprises today find themselves having migrated to a cloud environment only to discover significant variations in spending and performance from what was estimated.
Let’s get back to what matters: finding the right cloud. And yes, clouds do indeed matter. For many reasons, application price and performance vary greatly across cloud environments. What’s needed is an efficient way to find the right cloud for the application, and to ensure complete portability so that the application can continue to move to the right cloud, with no additional migration effort, based on the latest performance and price changes across clouds.
Check out this contributed piece on WIRED from our CEO…
If I’ve heard it once, I’ve heard it a thousand times: a CIO’s main concerns with the cloud are security and the cost of running applications 24/7. Well, there is ONE use case that makes complete sense to run in the cloud, and that is development and test. I could argue that security is actually better in the cloud than on-premises, but I’ll save that for another blog article, since security is not a primary concern when doing development and test.
As we’ve all been hearing for the last few years, “The Cloud” is going to revolutionize our lives with scalable, on-demand computing and storage resources to suit all our software requirements. Soon we will need only a simple device to access our digital needs from anywhere in the world, and the ugly beige box under our desks can disappear. As the first generation of applications made its way to the cloud, it seemed as if the promise of the technology might already be a reality – witness Netflix hosting its entire video-on-demand service on Amazon’s cloud. However, as user demand grows, and as the next generation of software that runs mission-critical operations in business and healthcare begins its cloud deployments, the situation becomes more complex. Each cloud provider is beginning to differentiate itself to stand out from the crowd, and the choice of which cloud to host your service on is becoming a significant component of both the technical and business decision-making process.
At Transformatix Technologies Inc. and our subsidiary BioLinQ, we encountered this issue early in the process of porting our software to the cloud. As providers of digital healthcare services, we felt that offering our Continuum suite as cloud-based Software as a Service (SaaS) was a straightforward business decision: rapidly scalable, on-demand, and with no hardware installation or support issues. As we began selecting our cloud providers, however, we saw that the picture was not so clear. With four major components of the Continuum suite to port, we quickly found that each component had its own unique requirements, and that no single cloud provider could deliver optimum performance for all of them.
The Transformatix Continuum components had the following needs:
A biobank specimen tracking package—24/7 access, fast interface response, but low bandwidth, storage, and computation needs
A medical data warehousing and collaboration tool—huge storage requirements, high bandwidth for data in, low computation needs
A medical image sharing and annotation tool—large storage requirement, high bandwidth for data out, global access, moderate computation
A bioinformatics toolkit—huge storage requirements, intense computation, high bi-directional bandwidth
Apart from the need for security (HIPAA compliance), each application had widely varying requirements, and in our estimation we would need at best two, and likely three, different cloud providers to achieve the best results for each component. Making a decision today to hard-wire our apps to any one cloud could significantly reduce the flexibility we will need in the future. How, then, to direct our development resources to best deliver the Continuum suite as SaaS?
It was during this dilemma that we encountered CliQr, with their ability to ‘lift and shift’ applications quickly into a cloud, and between clouds. Working with their team, we at Transformatix imported one of our tools to their platform to test, and found that we achieved performance comparable to any dedicated single-cloud implementation, with less effort than performing that implementation ourselves. While the ease of implementation was a benefit, it was the ability to shift between clouds quickly that rapidly became the most useful feature for us.
With CliQr, the cost of moving to and testing on each cloud was effectively zero, so we began to test each cloud for its applicability to our tools, and within a few weeks had narrowed down the top performers for each tool. We could set up each tool to work with the best provider and then, in the event of an outage, fail over to the next choice.
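A per-tool ranking with failover of the kind described here can be kept very simply once each cloud has been benchmarked. The sketch below is illustrative only: the tool names, provider names, and scores are hypothetical, not Transformatix’s actual data.

```python
# Hypothetical price-performance scores per tool, one per candidate cloud,
# as produced by benchmarking each tool on each provider.
scores = {
    "biobank_tracker": {"cloud_a": 0.95, "cloud_b": 0.80, "cloud_c": 0.60},
    "image_sharing":   {"cloud_a": 0.70, "cloud_b": 0.90, "cloud_c": 0.85},
}

def failover_order(tool):
    """Providers for a tool, best price-performance first."""
    return sorted(scores[tool], key=scores[tool].get, reverse=True)

def deploy_target(tool, unavailable=()):
    """Best available provider; falls back down the ranking during an outage."""
    for cloud in failover_order(tool):
        if cloud not in unavailable:
            return cloud
    raise RuntimeError("no provider available for " + tool)
```

Because each tool carries its own ranking, an outage at one provider only shifts the tools that were actually running there; everything else stays on its best-scoring cloud.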
The capability to rapidly test multiple clouds’ performance, and to deploy accordingly, is invaluable to Transformatix in ensuring the best performance for the customer, the lowest cost to us, and, most important of all, maximum uptime.
It’s a rapidly changing world, and for Transformatix, it’s clear we need to be moving just as fast to provide the needed capabilities to our customers.