Tag Archive: AWS

  1. Cloud Agnosticism: Does the Survival of Current Cloud Providers Matter?

    Remember that time when S3 went down? How about when SSL certificates in every Azure data centre expired at the same time, bringing every region down at once? Did you enjoy working with an independent public cloud vendor like SoftLayer, only to have them get acquired by IBM?

    Whether you’re dealing with downtime, an acquisition, or an exit from the market entirely, how much does the survival of your cloud provider matter? A lot, actually. And that’s why a cloud agnostic approach to your data and applications is best.

    Settling of the public cloud market

    In recent years, there has been a settling in the public cloud market. Azure and AWS are the big two in that space with Google and IBM close behind, but sometimes organisations explore other options.

    Regional service providers have grown adept at hosting VMware-based clouds, and their intimate knowledge of their customer base enables them to customise the experience in a way that the big guys can’t scale down to quite as well.

    Similarly, Digital Ocean continues to be successful across multiple geographic regions by catering to the developer market.

    In fairness to the big guys mentioned above, AWS and Azure have done an excellent job of combatting the availability issues that caused them to suffer highly publicised downtime.

    The truth of the matter is, they possess resources and expertise on a scale that is difficult to match elsewhere, and they now provide great guidance on how to best use their platforms to ensure your own data protection and security.

    But, fool me once, shame on you; fool me twice, shame on me. It doesn’t happen a lot, but it’s not unprecedented for an entire public cloud infrastructure across multiple locations to go down.

    You’ll need to find a way to operate if it happens again or if some unforeseen acquisition misaligns with your enterprise agreements.

    The services trade-off

    One aspect of examining a cloud agnostic approach is the appeal of the extended services that the larger public cloud providers offer, such as managed databases, load balancers, queues, and notification services.

    These services can dramatically accelerate time to market for applications and allow developers to focus on the business logic that solves the problems specific to your needs.

    The trade-off is that their use also tends to lock you into a particular provider, given that there is little to no commonality between such services across different providers.

    Cloud agnostic approaches

    So, how can you take a more cloud agnostic approach to protect yourself from one of these scenarios where your cloud provider goes away, goes down, or gets acquired by someone you aren’t as in sync with? Here are a few approaches to consider:

    1. Multi-cloud backups

    The simplest approach is to back up your data to a different provider than the one you use to collect that data in the first place. This also allows you to take advantage of that data by spinning up the applications that depend on it on the second cloud.

    In other words, treat cloud providers like you treat individual private data centres. Run production on one, but back up and have a cold standby on another.
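
    To make that concrete, here is a minimal sketch in Python of copying a single object from a primary cloud (AWS S3) to a second provider (Google Cloud Storage), assuming the boto3 and google-cloud-storage client libraries; the bucket and object names are hypothetical placeholders, and a real backup job would loop over objects on a schedule.

    ```python
    import boto3
    from google.cloud import storage

    def backup_object_to_gcs(s3_bucket: str, key: str, gcs_bucket: str) -> None:
        """Copy one object from S3 (production cloud) to GCS (backup cloud)."""
        local_path = "/tmp/" + key.replace("/", "_")

        # Pull the object down from the primary cloud.
        s3 = boto3.client("s3")
        s3.download_file(s3_bucket, key, local_path)

        # Push the same bytes to the secondary cloud.
        gcs = storage.Client()
        gcs.bucket(gcs_bucket).blob(key).upload_from_filename(local_path)

    # Hypothetical bucket and object names.
    backup_object_to_gcs("prod-data", "orders/2016-01.csv", "prod-data-backup")
    ```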

    2. Hybrid cloud applications

    A more complicated but more robust approach is to build your applications in such a way that they have a global load balancer on top of application stacks that run on different clouds.

    Imagine one set of web and database servers running on one cloud, a second set running on another, and a global load balancer that either actively sends traffic to both stacks (in which case the trick is keeping application state synchronised between them) or sends all traffic to one stack but treats the other as a warm standby.

    This approach takes more effort, but shrinks turnaround time to get up and running on the alternate cloud.
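
    One common way to implement that global load balancer is DNS-based failover. As a sketch only, here is what the primary half of such a setup might look like using Amazon Route 53’s failover routing via boto3; the hosted zone ID, health check ID and IP address are hypothetical, and a matching SECONDARY record would point at the warm-standby stack on the other cloud. (Note that the DNS layer itself then lives with one vendor; an independent DNS provider works the same way.)

    ```python
    import boto3

    route53 = boto3.client("route53")

    # Route all traffic to the primary stack; Route 53 fails over to the
    # SECONDARY record when the attached health check reports unhealthy.
    route53.change_resource_record_sets(
        HostedZoneId="Z_HYPOTHETICAL",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "stack-on-cloud-1",
                "Failover": "PRIMARY",
                "TTL": 60,
                "HealthCheckId": "hc-hypothetical",
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )
    ```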

    3. Cloud management platforms

    Instead of trying to manage all of this yourself—or to make it easier to choose which of the two approaches discussed here to use on an application-by-application basis—consider enlisting the help of a cloud management platform (CMP).

    While still an emerging product family for which Gartner has a Market Guide but not yet a Magic Quadrant, these tools provide a single view of application deployments across different cloud providers and tend to provide an abstraction layer to make it easy to migrate an application from one vendor to another.

    Some provide governance and metering/billing tools so that system administrators can dictate who is allowed to deploy applications to which cloud and put some guide rails on spending. Benchmarking tools within a CMP can be useful as well, so that more direct price/performance comparisons can be made among different vendors.

    Next steps

    There are several ways you can proceed toward a world where you aren’t locked into one cloud vendor and subject to problems that can occur if that vendor has downtime, gets acquired, or disappears.

    Weighing the time-to-market speed of utilising cloud-specific services against the increased lock-in they bring is among the most difficult decisions to make when trying to build a cloud agnostic solution.

    That’s where CMPs can help by adding abstraction on top of multiple clouds. Regardless of your approach, giving yourself options as cloud models mature is key to being nimble enough to take advantage of future benefits as they unfold.

    This piece originally appeared on Information Age.

  2. Shadow IT: How Big Is the Problem and Why Is ITaaS the Antidote?

    How big is your Shadow IT problem?

    Don’t have one, you say?

    Are you sure? Have you checked those corporate AMEX records lately? They probably have entries on them for places like Amazon Web Services and Heroku. In other words, you probably have a Shadow IT problem even if you think you don’t.

    I worked in HP IT when Mark Hurd (now at Oracle) was CEO of Hewlett-Packard and Randy Mott (now at General Motors) was CIO. The official company line was that Shadow IT was punishable by termination. It happened anyway. Back then, things like rogue WiFi devices or SaaS accounts at destinations like Salesforce.com or WordPress.com were the big culprits. If the threat of losing your job isn’t enough to keep it at bay, what is?

    And, while we’re at it, what’s the antidote to Shadow IT? It’s something called IT-as-a-Service, but before getting into how this solves the problem of Shadow IT, it is important to understand why Shadow IT exists in the first place.

    Speed Kills, and It Created Shadow IT

    If you built a time machine and went back to 2006, before the Amazon Web Services beta started, you’d find a very different business environment than what we have today. Company functions like sales, product development and marketing were still responsible for bringing in company revenue. Other functions, like HR, legal, finance and IT were necessary evils that kept the business running but didn’t contribute to the revenue stream. In “cost centers” like this, the only way to optimize contributions to the company bottom line was to run them on as little budget as possible.

    For IT, the biggest contributor to budget was the capital expense used to populate a data center. It had to be utilized as efficiently as possible, and that often meant ruthless standardization, down to which languages or even which specific relational databases application teams could use. It also often meant imposing rigorous project selection processes that line of business teams had to go through in order to get new functionality out of IT, since strict budgeting was a requirement to keep costs under control.

    As an example from my HP IT days, every year Hurd’s executive team would estimate the company revenue for the next financial year company-wide, and Mott would be tasked with running HP IT on 1.8 percent of whatever that number was. No more, no less. Business teams had to submit project proposals as much as 18 months in advance of when they would get executed, each having to project an ROI. Projects were ordinally ranked by ROI and funded in that order until that 1.8 percent of company revenue budget was exhausted. There was an exception process, but most projects not meeting that standard didn’t get funded, end of story.

    Fast forward to 2016, and every company is a software company. By that I mean that every line of business in every company on the planet relies on software innovation in some way to gain market share or increase profits. In every competitive environment, where agile software development has proven to be the best way to nurture breakthrough change, an 18-month software cycle means you go out of business.

    So what do line of business teams do? When they can’t get the speed they need to compete in their respective marketplaces out of their IT department, they go outside IT, where they can get all kinds of assets quickly and easily in an environment that isn’t optimized for cost reduction because of its corporate placement as a cost center. This need for speed created Shadow IT, but it is also the fundamental key to solving it.

    IT-as-a-Service: The Antidote for Shadow IT
    What line of business teams crave, demand even, is simple: self-service, on-demand provisioning of resources. Why should they wait three weeks’ worth of ticket approvals in order to get a new virtual machine provisioned when they can get one in 10 minutes on AWS? They shouldn’t and they don’t.

    So the answer is equally simple: Give them self-service, on-demand provisioning of resources. The hard part is to do so in a way that aligns with IT security and licensing policies, tracks their usage over time, and bills that usage back to them. Do that and you move IT away from being a cost center and instead turn it into an active participant in the revenue streams of the line of business teams.
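
    As a small illustration of what that self-service call boils down to on an IaaS like AWS, here is a hedged sketch using boto3 to provision a virtual machine tagged at creation time with the requesting team, which is what makes usage tracking and chargeback possible later; the image ID and tag values are hypothetical.

    ```python
    import boto3

    ec2 = boto3.client("ec2")

    # Self-service provisioning: one API call instead of weeks of tickets.
    # The tags attribute every hour this VM runs to a line of business.
    response = ec2.run_instances(
        ImageId="ami-12345678",      # hypothetical IT-approved, hardened image
        InstanceType="t2.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "CostCenter", "Value": "marketing"},
                {"Key": "RequestedBy", "Value": "jane.doe"},
            ],
        }],
    )
    print(response["Instances"][0]["InstanceId"])
    ```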

    Fortunately, three key toolsets let an IT team easily build a structure that enables exactly that. Infrastructure-as-a-Service (IaaS) offerings like AWS and Microsoft Azure on the public cloud side, or OpenStack and vCenter on the private side, make it easy to provision virtual machines in minutes. Cloud Management Platforms (CMPs) then provide a mechanism to create application templates on top of multiple IaaS offerings.

    Within those application templates, IT can encode things like monitoring, security, and licensing policies to ensure that all applications automatically adhere to the strict standards that make for an efficiently run IT deployment, regardless of which back-end IaaS is used. Those application templates can then be published upstream into IT Service Management (ITSM) tools that provide a shopping cart-like experience for line of business constituents, enforcing rules regarding who is allowed to deploy which applications and where.

    With a solution like this in place, line of business users can browse a catalog of applications and choose what IaaS they get deployed on. The ITSM sends these requests to the CMP, which then automates the application deployments on the IaaS of choice. The CMP monitors the usage of resources on the IaaS and provides usage data back to the IT staff, which can then send that back to the line of business teams for chargebacks.
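
    To illustrate the chargeback step at the end of that loop, here is a minimal sketch of rolling usage records up into a per-team bill. The record format is hypothetical (a real CMP would export something richer), but the idea is the same: tags applied at provisioning time make every VM-hour attributable.

    ```python
    from collections import defaultdict

    # Hypothetical usage export: one row per instance type per team per month.
    usage_records = [
        {"cost_center": "marketing", "instance_hours": 720,  "rate_per_hour": 0.052},
        {"cost_center": "marketing", "instance_hours": 96,   "rate_per_hour": 0.208},
        {"cost_center": "sales",     "instance_hours": 1440, "rate_per_hour": 0.052},
    ]

    def chargeback(records):
        """Aggregate metered usage into a bill per line of business."""
        bill = defaultdict(float)
        for r in records:
            bill[r["cost_center"]] += r["instance_hours"] * r["rate_per_hour"]
        return dict(bill)

    print(chargeback(usage_records))  # -> {'marketing': ~57.41, 'sales': ~74.88}
    ```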

    When put together in this way, IT gets the control it needs, with the ability to dictate the content of the application components and how they behave at runtime. The line of business teams get the self-service, on-demand provisioning that is so critical to their success. And, perhaps most importantly, IT is no longer a cost center but an innovation enabler that can charge precise usage back to its constituents and participate in revenue success instead of being forced to drive down costs.

    This piece originally appeared on Computer Technology Review.

  3. Setting the Standard—Benchmarking Price-Performance on the Cloud

    With an increased focus on exploiting a wider variety of business applications on the cloud and a broader choice of available cloud providers, enterprises need to focus on moving applications to the right cloud—not just any cloud or multi-cloud. Such a decision is often driven by factors that include the underlying cloud infrastructure’s capabilities, metrics such as availability and API reliability on the cloud, and compliance conditions including geographic location and security standards.

    While these are important, a key metric in this decision-making is the application’s price and performance across different cloud providers and cloud instance types. While the driving motivators for adopting clouds are often increased performance, scalability and cost savings, an application’s price and performance on different clouds are the only true measure for evaluating the cause and effect of selecting the right cloud. Benchmarking clouds therefore cannot be a simple mathematical spreadsheet exercise. Any cloud benchmarking must include key price, performance and other application-centric metrics actually derived from the application being deployed and managed, in order to determine the “RIGHT” cloud for a given application.

    Every cloud is built, sized and priced very differently, which means that application price and performance vary greatly across different clouds and across different configurations within each cloud. Price-performance also varies with application type, architecture, behavior and usage characteristics. The fact is, despite the market noise, until recently the ability to easily and simultaneously benchmark price and performance of applications across disparate cloud environments did not exist.

    Cloud infrastructures today do not provide application-level SLAs. Any capability, performance and price data is limited purely to infrastructure components such as VMs, storage and networking. These do not translate directly to application price and performance.

    Different clouds have very different underlying physical infrastructure components such as CPU, network backbone and storage types, as well as different virtualization stacks. Moreover, clouds are themselves variable environments, with significant variance in load over time. Differences in virtualization management, including variations in VM placement policies, may mean added differences in performance, not just between clouds but also over time within the same cloud. In the absence of transparency around VM instances and placement policies, it is not possible to accurately determine the differences in application performance on different clouds without migrating an application and testing its performance on each cloud option.

    Moreover, cloud instances are “packaged” and priced very differently as well. Given the above lack of transparency about cloud instances and the physical backbone, an apples-to-apples comparison based on infrastructure alone is not possible. For example, what is a “small” instance type on one cloud is rarely the same as a “small” instance type on another cloud. Will the vCPUs on both provide the same performance, or will an equivalently priced “medium” instance on yet another cloud provide an overall better price-performance trade-off? Or maybe it is network performance, not CPU, that matters for a particular application. Also, rolling up all the different cloud costs to estimate application costs is not straightforward, since cost, performance, and instance definition and configuration are inextricably linked. Understanding these dependent variables is what is required to understand application performance; and because of the cloud’s utility-based pricing model, better application performance may mean fewer infrastructure resources are needed, and hence lower pay-per-use costs. It is this type of empirical benchmarking that is required to make informed decisions on where to deploy an application on the cloud.

    Given all this, a plain infrastructure-to-infrastructure comparison is not an effective means of benchmarking clouds for application price-performance. As an example, consider a multi-tier web application with a highly transactional database component and high I/O requirements between the application server and the database tier. Additionally, the application tier may be elastically scalable. A useful performance metric for such an application may be the number of requests it can handle per second, while a useful cost metric would be the total costs of all tiers combined, including storage, compute and network costs. Moreover, one may want to test these metrics under different load settings to see how they change as the application scales. A cloud with a high-I/O network backbone, an SSD instance type for the database tier and low VM spin-up times may provide better performance for such an application but at a higher cost, while a different cloud with “standard” options but lower load might provide only modestly degraded performance at lower cost, for a better overall trade-off.
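
    To make the cost side of that example concrete, here is a toy rollup in Python for such a three-tier application. All rates and the measured throughput are hypothetical; the point is that application cost is a sum across tiers and resource types, and dividing it by measured throughput is what ties price to performance.

    ```python
    # Hypothetical hourly rates per tier on one cloud configuration.
    tiers = {
        "web": {"compute": 0.10, "storage": 0.01, "network": 0.02},
        "app": {"compute": 0.20, "storage": 0.01, "network": 0.05},
        "db":  {"compute": 0.40, "storage": 0.08, "network": 0.05},
    }

    total_per_hr = sum(sum(costs.values()) for costs in tiers.values())  # $0.92/hr
    requests_per_second = 250  # measured under benchmark load (hypothetical)

    # Cost per million requests links the cost metric to the performance metric.
    cost_per_million = total_per_hr / (requests_per_second * 3600) * 1_000_000
    print(f"total: ${total_per_hr:.2f}/hr, ${cost_per_million:.2f} per million requests")
    ```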

    As a different example, consider a highly compute-intensive gene-sequencing application where gene-sequencing jobs may be processed by an elastic cluster. A useful performance metric for such an application may be the time to complete a gene-sequencing job while a useful cost-metric would be the total pay-per-run job cost.

    Accordingly, here are four examples of real-world applications, each with a different architecture type and different infrastructure needs. While benchmarks can be run against any public or private cloud, for this study these applications were benchmarked across the following clouds, with different configurations in terms of instance types and cluster size on each:

    1. HP–HPCS standard.small and standard.2xlarge configuration.
    2. Amazon–AWS m1.medium and m3.2xlarge configuration.
    3. Google–GCE n1-standard-1 and n1-standard-8 configuration.

    The findings of the benchmark study are described below with each application type. The charts on the left show application price on the x-axis and performance on the y-axis, where performance can be throughput (number of requests per second) or the total time to complete a workload. The charts on the right show a price-performance index, a single normalized metric to see which cloud and configuration option provides the best “bang for your buck”.
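
    The post does not define exactly how the index is computed, but one plausible construction, sketched here with hypothetical numbers, is to normalize each option’s measured performance and cost by the best values observed and take the ratio:

    ```python
    # Hypothetical benchmark results for one application on three cloud options.
    configs = {
        "Cloud A medium": {"throughput": 180, "cost_per_hr": 0.45},
        "Cloud B medium": {"throughput": 150, "cost_per_hr": 0.30},
        "Cloud C medium": {"throughput": 210, "cost_per_hr": 0.50},
    }

    best_throughput = max(c["throughput"] for c in configs.values())
    cheapest = min(c["cost_per_hr"] for c in configs.values())

    for name, c in configs.items():
        perf_norm = c["throughput"] / best_throughput   # 1.0 = fastest observed
        price_norm = c["cost_per_hr"] / cheapest        # 1.0 = cheapest observed
        print(f"{name}: price-performance index {perf_norm / price_norm:.2f}")
    ```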

    Chart #1: Benchmark for a three-tier Java web application with each tier running on a separate VM.

    Chart #2: Benchmark for a compute-intensive application run in parallel on a cluster.

    Chart #3: Benchmark results for a Hadoop job running on four nodes.

    Chart #4: Benchmark results for a high performance cluster computing job.

    To summarize, the table below shows the recommended cloud for each of the four applications, based on the app price-performance trade-off observed in the benchmarks. Clearly, there is no single cloud instance that performs best for all types of applications.

    Application Type                       | Medium Configuration | Extra Large Configuration | Recommended Cloud
    Java Web App                           | Cloud C              | Cloud B                   | Cloud C, Medium Config
    Parallel Processing Job                | Cloud C              | Cloud B                   | Cloud C, Medium with More Nodes
    Hadoop App                             | Cloud A              | Cloud A                   | Cloud A, Extra Large
    High Performance Cluster Computing Job | Cloud A/Cloud B      | Cloud B                   | Cloud B, Medium with More Nodes

    As may be clear from these examples, real-world complex enterprise applications need more than a simple spreadsheet-based, back-of-the-envelope cost estimate and infrastructure-based performance analysis.

    No wonder many enterprises today find themselves having migrated to a cloud environment only to discover that spending and performance vary significantly from what was estimated.

    Let’s get back to what matters: finding the right cloud. And yes, clouds do indeed matter. For many reasons, application price and performance vary greatly across different cloud environments. What’s needed is an efficient way to find the right cloud for an application, and to ensure complete portability so that the application can keep moving to the right cloud as performance and prices change, with no additional migration effort.
