
Tag Archive: cloud benchmarking

  1. Cloud: How Did We Get Here and What’s Next?

    It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

    It All Starts with Server Utilization

    In the mid-1990s, when HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, it launched an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high availability needs you might have.

    That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired when the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

    But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications sit side by side and use 30 percent of their hardware most days when you could have them both run on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capitalization spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it brought down the other application with it, giving rise to the noisy neighbor problem.

    What if there was another way to separate physical resources in some sort of isolation technique so that you could reduce the chances that applications could bring each other down?

    The Birth of Virtualization and Pets vs. Cattle

    The answer turned out to be the hypervisor, which could isolate resources from one another on a physical machine to create a virtual machine. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

    But in the early 2000s, developers started to think about their architectures differently. In a physical-server-only world, resources are scarce and take months to expand. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets: you give them great care and feeding, oftentimes you give them names, and you go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly, because any change to the system increased the chances that something would fail.

    What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently and not plan for peak hardware if the pooled resources are large enough (remember that, it’ll be important later), but you think about deployments differently too.

    Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the new patch and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you now had the ability to introduce changes or roll them back more easily. That led line-of-business teams to turn to software developers as change agents for increased revenue.

    Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

    If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

    While there is a ton of innovative work going on in the public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady-state demand are attractive candidates for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

    Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving their constituents in the line of business teams the self-service, on-demand provisioning they demand with cattle-style thinking.

    What’s Next: Chickens and Feathers (Containers and FaaS)

    Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treated application components as disposable entities that can be created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter weight resource isolation technique than hypervisors do, can be created in seconds—a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.

    What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. Sometimes this technology is known as Serverless, which is a bit of a misnomer since there is indeed a server providing the compute services. What differentiates it from containers is that developers need to know nothing about the hot standby container within which their code runs. That means a unit of compute can sit on disk when not in use instead of taking up memory waiting for a transaction to come in. While the ramifications of this technology aren’t yet fully understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.

    Conclusion

    Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to run coincident with siblings they might not have anything to do with, but which they could bring crumbling to their knees. Virtualization allowed us to separate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter weight resource isolation techniques make even more rapid innovations possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

    Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

    This blog originally appeared on Cloud Computing Magazine.

  2. Top 7 Myths About Moving to the Cloud

    If you make your living by selling computer hardware, you’ve probably noticed that the world has changed. It used to be that the people who managed IT hardware in a big company—your buyer—had all the purchasing power, and their constituents in the line-of-business teams had no place else to get IT services. You’d fill up your quota showing up with a newer version of the same hardware you sold the same people three to five years ago and everybody was happy.

    Then AWS happened in 2006. Salesforce.com already happened in 1999. Lots of other IaaS and SaaS vendors started springing up all over the place, and all of a sudden, those constituents your IT buyer had a monopoly on had another place to go for IT services—places that enabled them to move faster—and the three to five-year hardware refresh cycle started to suffer.

    But selling is still selling, and there are a lot of myths out there about why someone like you can’t sell cloud. Let’s debunk a few of them now.

    Myth #1: My customers aren’t moving to cloud.

    If your IT buyer customers haven’t seen decreased budgets for on-premises hardware, consider yourself lucky. In their cloud research survey issued last November, IDG Enterprise reported, “Cloud technology is becoming a staple to organization’s infrastructure as 70% have at least one application in the cloud. This is not the end, as 56% of organizations are still identifying IT operations that are candidates for cloud hosting.” If your IT buyer isn’t at least considering spending on cloud, it’s almost guaranteed that there is Shadow IT going on in their line-of-business teams.

    Myth #2: I don’t see any Shadow IT.

    Hence, the “Shadow” part. As far back as May of 2015, a Brocade survey of 200 CIOs found that over 80 percent had seen some unauthorized provisioning of cloud services. This doesn’t happen because line-of-business teams are evil. It happens because they have a need for speed and IT is great at efficiency and security but typically lousy at doing things quickly.

    Myth #3: My customer workloads have steady demand.

    While it is true that the cloud consumption model works best with varying demand, when you zoom in on almost any workload and examine utilization on nights and weekends, there is almost always a case to be made that any particular workload is variable.

    Myth #4: Only new workloads run in cloud.

    Any new technology follows a familiar pattern in which organizations try new, less risky projects that don’t necessarily impact the bottom line, so they have a safe way to experiment. Cloud computing has been no different, but the tide has started to shift to legacy workloads. This past November, ZK Research compared cloud adoption to virtualization adoption by pointing out, “The initial wave of adoption then was about companies trying new things, not mission-critical workloads. Once organizations trusted the technology, major apps migrated. Today, virtualization is a no brainer because it’s a known technology with well-defined best practices. Cloud computing is going through the same trend today.”

    Myth #5: Cloud is too hard to learn.

    Granted, selling cloud computing is different from selling a hardware refresh, but as a sales executive it’s still about building relationships. The biggest changes relate to whom you build those relationships with (which now includes line-of-business teams instead of just IT, but that’s who has the money anyway) and to the subscription-based financial model of cloud consumption. (Gartner has a great article explaining the basics.)

    Myth #6: I don’t have relationships with business teams.

    Certainly, there is some overhead involved in building new relationships as opposed to leveraging your existing ones, but increasingly the line-of-business teams are retaining more of their IT budgets so investing that time will pay off. Even CIO Magazine admits that CMOs will spend more than CIOs on technology in 2017. Go to where the money is—it’ll be worth your time.

    Myth #7: I don’t get paid on cloud.

    This one is, admittedly, the trickiest on this list because part of the solution is in the hands of your company rather than within your power directly. If you work for a value-added reseller, public cloud vendors have plenty of programs that do pay you on cloud. Even if that isn’t the case, though, educating yourself on public/private cloud differences and building those relationships with business teams can help preserve sales of the hardware a customer’s private cloud ultimately runs on.

    Another step would be to get acquainted with a Cloud Management Platform, which is software that helps an Enterprise manage workloads across both public and private clouds from a single pane of glass. Moving up the stack to this control point can help you stay involved in key decisions your customer makes and put you in the position of a trusted advisor.

    Selling is fundamentally about building relationships, understanding problems and providing solutions. Regardless of the technology encased in the solution, that will always be true. There is a learning curve involved with cloud adoption, meeting new people you haven’t worked with before and potentially adapting to a subscription-based model, but the fundamentals remain the same of providing value to someone who has some roadblock they need help getting around.

    This blog originally appeared on Salesforce Blog.

  3. Why Cloud? Justification for Non-techies

    Cloud computing is all the rage today, to the point that it feels like you can’t fill out your “buzzword bingo” card at any meeting without using the phrase. There are all kinds of technical reasons why cloud has the market momentum it does, but what if you aren’t swayed by such things? If you’ve seen technology trends come and go, you need non-technical justification for moving your business in any direction, and cloud computing is no different for you.

    So, what is the main justification for business owners to use cloud that doesn’t involve a lot of technical jargon? Let’s get to the bottom line and talk ROI and payback instead.

    Asset Procurement Financials Before Cloud

    If you go back to a simpler time, before virtualization was even a thing — let alone cloud computing — the financial justification process for IT or any other kind of capital asset was pretty much the same:

    1. Spend a lot of money up front on equipment.
    2. Wait for that equipment to be installed and configured correctly.
    3. Reap gains to your own business from this new equipment for years to come.

    In this model, it is common to estimate the expected annual gains and to calculate a payback period. In other words, how long will it take you to recoup the investment made in Year 0, before the equipment is even installed? When weighing options against one another, those with shorter payback are more attractive than those with longer payback.

    Another way to judge different choices against one another is with an ROI calculation: take the total anticipated returns, subtract the total investment, divide that difference by the total investment, and multiply by 100.
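    Both calculations are simple enough to sketch in a few lines of Python. Here is a minimal illustration; the dollar figures are entirely made up for the example:

```python
# Illustrative payback-period and ROI math, as described above.
# All dollar figures are hypothetical, chosen only for a clean example.

def roi_percent(total_returns: float, total_investment: float) -> float:
    """ROI = (total returns - total investment) / total investment * 100."""
    return (total_returns - total_investment) / total_investment * 100

def payback_years(investment: float, annual_gain: float) -> float:
    """Years of estimated annual gains needed to recoup the Year 0 investment."""
    return investment / annual_gain

# A hypothetical up-front purchase: $500k in Year 0, $200k/year gains over 5 years.
investment, annual_gain, years = 500_000, 200_000, 5
roi = roi_percent(annual_gain * years, investment)   # 100.0 percent
payback = payback_years(investment, annual_gain)     # 2.5 years
```

    Notice that both numbers hinge entirely on the estimated annual gain, which is precisely the estimation problem with this model.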

    The difficulty with either the payback or the ROI approach is that you are left to estimate the total returns. In other words, you don’t really know what benefits you’ll receive with your large, up-front purchase; you have to estimate them. And since this kind of analysis is typically made over a three-to-five-year period, it can be a while before you figure out whether or not you’re wrong. And if you are, it can be a very expensive proposition to correct.

    Enter Cloud: Getting More At-Bats More Quickly

    Instead of having to wait two years to figure out if your estimated returns are correct, wouldn’t it be better to find out sooner? And instead of estimating the returns in the first place, wouldn’t it be better if you could find out the actual benefits sooner? Also, I bet you’d rather pay as you go instead of investing all that money up front and hoping the return comes at all, right?

    These are the real business benefits of cloud. In baseball terminology, it’s about getting more at-bats, or put another way, more cycles with your technology investment by trying options that don’t require the long installation lead times. That allows you to quickly evaluate the benefits of the investment with a smaller up-front investment and either celebrate the genius of your choice or admit defeat and move on to an alternative.

    Think about what that means for the ROI calculation. The lone value in the denominator of that equation is the total investment. Lower that, and whatever number is in the numerator will look better.

    For payback, the cloud model means a business no longer has to estimate returns based on spreadsheet mechanics influenced by the sales team trying to win your investment; instead, payback can be based on your actual use of the technology as soon as possible, with an option to stop without financial penalties. That lets you gather more detailed data on your financial returns sooner.

    Cloud Is Not About Tech, It’s About Speedy Investments

    This is the real takeaway here: speed. In the modern economy, it is more efficient to try technology investments, quickly determine if they deliver the benefits they promise, and move on than it is to go through a long sales cycle followed by an even longer installation process, only to find out whether the equipment you purchased was a huge waste of resources. It is OK to fail; just do so quickly, so you can cross that wrong answer off your list and move closer to whatever the right solution is. Doing that over and over again with solutions that you pay for as you go is a far better use of your budget.

    This piece originally appeared on betanews.

  4. Setting the Standard—Benchmarking Price-Performance on the Cloud

    With an increased focus on exploiting a wider variety of business applications on the cloud and a broader choice of available cloud providers, enterprises need to focus on moving applications to the right cloud—not just any cloud or multi-cloud. Such a decision is often driven by factors that include the underlying cloud infrastructure’s capabilities, metrics such as availability and API reliability on the cloud, and compliance conditions including geographic location and security standards.

    While these are important, a key metric for this decision-making is the application’s price and performance across different cloud providers and cloud instance types. While the driving motivators for adopting clouds are often increased performance, scalability and cost savings, an application’s price and performance on different clouds are the only true measure for evaluating the cause and effect of selecting the right cloud. Benchmarking clouds cannot, therefore, be a simple mathematical spreadsheet exercise. Any cloud benchmarking must include key price, performance and other application-centric metrics actually derived from the application being deployed and managed, in order to determine the “right” cloud for a given application.

    Every cloud is built, sized and priced very differently, which means that application price and performance varies greatly on different clouds and different configurations within each cloud. Price-performance also varies by different application type, architecture, behavior and usage characteristics. The fact is, despite the market noise, until recently, the ability to easily and simultaneously benchmark price and performance of applications across disparate cloud environments did not exist.

    Cloud infrastructures today do not provide application-level SLAs. Any capability, performance and price data is limited to infrastructure components such as VMs, storage, and networking. These do not translate directly to application price and performance.

    Different clouds have very different underlying physical infrastructure components such as CPU, network backbone, and storage types, as well as different virtualization stacks. Moreover, clouds are themselves variable environments, with significant variance in load over time. Differences in virtualization management, including VM placement policies, may mean added differences in performance, not just between clouds but also over time within the same cloud. In the absence of transparency around VM instances and policies, it is not possible to accurately determine the differences in application performance on different clouds without migrating an application and testing its performance on each cloud option.

    Moreover, cloud instances are “packaged” and priced very differently as well. Given the above lack of transparency about cloud instances and the physical backbone, an apples-to-apples comparison based on infrastructure alone is not possible. For example, a “small” instance type on one cloud is rarely the same as a “small” instance type on another: will the vCPUs on both provide the same performance, or will an equivalently priced “medium” instance on yet another cloud provide an overall better price-performance trade-off? Or maybe it is network performance, not CPU, that matters for a particular application. Rolling up all the different cloud costs to estimate application costs is also not straightforward, as cost, performance, and instance definition and configuration are inextricably linked. Understanding these dependent variables is what is required to understand application performance, and because of the cloud’s utility-based pricing model, better application performance may mean fewer infrastructure resources and hence lower pay-per-use costs. It is this type of empirical benchmarking that is required to make informed decisions on where to deploy an application in the cloud.

    Given all this, a plain infrastructure-to-infrastructure comparison is not an effective means to benchmark clouds for application price-performance. As an example, consider a multi-tier web application with a highly transactional database component and high I/O requirements between the application server and the database tier. Additionally, the application tier may be elastically scalable. A useful performance metric for such an application may be the number of requests it can handle per second, while a useful cost metric would be the total costs of all tiers combined, including storage, compute and network costs. Moreover, one may want to test these metrics under different load settings to see how they change as the application scales. A cloud with a high-I/O network backbone, an SSD instance type for the database tier and low VM spin-up times may provide better performance for such an application but at a high cost, while a different cloud with “standard” options but lower load might provide only slightly degraded performance at lower cost, for a better overall trade-off.

    As a different example, consider a highly compute-intensive gene-sequencing application where gene-sequencing jobs may be processed by an elastic cluster. A useful performance metric for such an application may be the time to complete a gene-sequencing job while a useful cost-metric would be the total pay-per-run job cost.

    Accordingly, here are four examples of real-world applications, each with a different architecture type and different infrastructure needs. While benchmarks can be run against any public or private clouds, for this study these applications were benchmarked across the following clouds, with different configurations in terms of instance types and cluster size on each:

    1. HP–HPCS standard.small and standard.2xlarge configuration.
    2. Amazon–AWS m1.medium and m3.2xlarge configuration.
    3. Google–GCE n1-standard-1 and n1-standard-8 configuration.

    The findings of the benchmark study are described below with each application type. The charts on the left show application price on the x-axis and performance on the y-axis. The performance criterion can be throughput (number of requests per second) or the total time to complete a workload. The charts on the right show a price-performance index, a single normalized metric that indicates which cloud and configuration option provides the best “bang for your buck”.
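    As a sketch of how such an index can be computed, the snippet below normalizes performance-per-dollar so that the best cloud/configuration option scores 1.0. The cloud names, hourly prices and throughput numbers are hypothetical; the actual study measured deployed applications rather than invented figures like these:

```python
# Hypothetical price-performance index: performance per dollar, normalized
# so the best option scores 1.0. Inputs are (hourly price in $, throughput
# in requests/sec) per cloud/instance option -- all numbers invented.

def price_performance_index(results: dict) -> dict:
    """Map each option to its performance-per-dollar relative to the best."""
    raw = {name: perf / price for name, (price, perf) in results.items()}
    best = max(raw.values())
    return {name: score / best for name, score in raw.items()}

results = {
    "Cloud A medium": (0.12, 180.0),  # pricier, fast
    "Cloud B medium": (0.10, 140.0),  # cheap, slower
    "Cloud C medium": (0.09, 150.0),  # cheapest, decent throughput
}
index = price_performance_index(results)
# Higher is better; the best "bang for your buck" option scores exactly 1.0.
```

    A single normalized number like this makes otherwise incomparable instance types directly comparable on one axis.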

    Chart #1: Benchmark for three-tier Java Web Application with each tier running on a separate VM.

    Chart #2: Benchmark for compute-intensive application run in parallel on a cluster.

    Chart #3: Benchmark results for Hadoop job running on four nodes.

    Chart #4: Benchmark results for high performance cluster computing job.

    To summarize, the benchmark results for the four different applications yielded the following recommended clouds based on application price-performance trade-offs. Clearly, there is no single cloud instance that performs best for all types of applications.

    Application Type                          Medium Config     Extra Large Config   Recommended Cloud
    Java Web App                              Cloud C           Cloud B              Cloud C, Medium Config
    Parallel Processing Job                   Cloud C           Cloud B              Cloud C, Medium with More Nodes
    Hadoop App                                Cloud A           Cloud A              Cloud A, Extra Large
    High Performance Cluster Computing Job    Cloud A/Cloud B   Cloud B              Cloud B, Medium with More Nodes

    As may be clear from such examples, real-world, complex enterprise applications need more than a simple spreadsheet-based, back-of-the-envelope cost estimate and infrastructure-based performance analysis.

    No wonder that many enterprises today find themselves having migrated to a cloud environment only to discover significant variations in spending and performance from what they estimated.

    Let’s get back to what matters: finding the right cloud. And yes, clouds do indeed matter; for many reasons, application price and performance vary greatly across cloud environments. What’s needed is an efficient way to find the right cloud for the application while ensuring complete portability, so that the application can keep moving to the right cloud as performance and prices change across clouds, with no additional migration effort.

CliQr Technologies, CliQr CloudCenter and CloudBlades are trademarks of CliQr Technologies, Inc. All other registered or unregistered trademarks are the sole property of their respective owners.