Tag Archive: cloud application management

  1. Cloud Application Lifecycle Management

    Enterprises typically have diverse application portfolios, which is why so many are turning to a hybrid cloud strategy. Some applications have variable workloads and low data sensitivity, making them natural fits for public clouds. Others have sensitive data that not everybody is comfortable keeping outside the corporate firewall, along with steady-state demand, making them better suited for private clouds.

    Regardless of where an application lives, and even if that changes over time, all cloud applications go through a predictable lifecycle that, when optimized with a Cloud Management Platform (CMP), can deliver better value to an organization.

    Why a CMP?

    Most companies have a hybrid cloud strategy precisely because of that application portfolio diversity described above. A problem with such a strategy in a vacuum, though, is that administrators end up bouncing around among different cloud consoles to gather information about deployments. Enter a CMP, which provides a single pane of glass through which an administrator can view all the clouds where application deployments might land.

    Such tools typically provide governance so that an administrator can dictate who is allowed to deploy which applications where. Metering and billing are important as well: administrators can put up guardrails so that individuals or teams don’t deploy too many resources at once without approval.

    Gone, though, are the days when it takes three weeks and multiple trouble tickets to get a virtual machine (VM). CMPs provide end users with self-service, on-demand resource provisioning while giving administrators a degree of control. An important aspect of CMP functionality is managing the lifecycle of an individual application, which typically starts with the modeling process.

    Modeling

    The lifecycle typically starts well before an application is deployed, with some sort of modeling process. Someone with application knowledge—and in this context an “application” can be as simple as a VM with your favorite operating system on it or as complex as a 15-tier behemoth with multiple queuing systems—tells the CMP which components make up the application and how those components interact with each other.

    Image via Cisco

    Here, as an example, we have a simple three-tier Web application with a local load balancer (HAProxy), a Web server (Apache), and a database server (MySQL). Each component commonly has security, monitoring, and other details mandated by a central IT governing authority built in, so that no application modeler can easily break company-accepted standards.
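
    As a rough sketch (illustrative only, not the actual CloudCenter model format), such a model boils down to a structure naming each tier, its service, and its dependencies, with centrally mandated governance details attached to every tier:

    ```python
    # Hypothetical sketch of a three-tier application model (illustrative
    # only; not a real CMP model format).
    app_model = {
        "name": "simple-web-app",
        "tiers": [
            {"name": "lb",  "service": "haproxy", "depends_on": ["web"]},
            {"name": "web", "service": "apache",  "depends_on": ["db"]},
            {"name": "db",  "service": "mysql",   "depends_on": []},
        ],
        # Governance details baked in by central IT so that modelers cannot
        # easily break company-accepted standards.
        "policies": {"monitoring": "enabled", "hardening": "corporate-baseline"},
    }
    ```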

    When completed, an application model is ready for deployment. But first, some time should be spent determining where it runs best.

    Placement via Benchmarking

    Some applications are going to have their deployment target clouds dictated by data sensitivity or workload variability, as described at the beginning of this article. Others, though, have flexibility, and their placement can be based on comparing price and performance across different clouds. How can you figure out which cloud an application runs best on, and what does “best” even mean?

    That’s where a CMP that offers benchmarking can be helpful. With an application model complete, it is easy to deploy it multiple times with test data and run load tests against it to see which cloud, and which instance types on each cloud, offer more throughput. For example:

    Image via Cisco

    Here, an application model similar to the one discussed in the previous section was deployed across three different public clouds with 2-, 4-, 8-, and 16-CPU (where available) instance types at each of the three tiers. The Y-axis of this scatterplot shows the number of transactions per second each configuration could handle, and the X-axis shows its approximate per-hour cost. Mousing over each dot would reveal the instance types used, but even without that, you can see that as the cost rises beyond the first two instance types on each cloud, there are no significant throughput gains.

    This means that choosing anything beyond the 2- or 4-CPU instance types is, for this specific application, a waste of money. A final decision can then be made by weighing whether price or performance matters more for the business case at hand.
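
    A minimal sketch of that selection logic, assuming benchmark results have already been collected (the clouds, instance types, and numbers below are all hypothetical):

    ```python
    # Pick the cheapest configuration that meets a throughput target from
    # hypothetical benchmark results (illustrative data, not real pricing).
    results = [
        # (cloud, instance_type, cost_per_hour_usd, transactions_per_sec)
        ("cloud-a", "2-cpu", 0.40, 950),
        ("cloud-a", "4-cpu", 0.80, 1000),
        ("cloud-b", "2-cpu", 0.35, 900),
        ("cloud-b", "8-cpu", 1.40, 1020),
    ]

    TARGET_TPS = 900
    viable = [r for r in results if r[3] >= TARGET_TPS]
    best = min(viable, key=lambda r: r[2])  # cheapest config hitting the target
    print(f"Deploy to {best[0]} on {best[1]} instances at ${best[2]:.2f}/hour")
    ```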

    Deployment and Monitoring

    With the application model in place and the results of the benchmarking known, a CMP might even enforce deployment of the application to only the best cloud given the results of the last test. CMPs typically perform rudimentary monitoring of basic counters like CPU utilization but leave more sophisticated analysis to tools like AppDynamics, whose agents can be baked into the application model components for consistent usage.
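
    The kind of rudimentary counter check a CMP performs could be as simple as the following sketch (the threshold and samples are hypothetical; a real CMP would poll the cloud provider’s monitoring API):

    ```python
    # Naive CPU-utilization check of the sort a CMP's basic monitoring does.
    cpu_samples = [42.0, 55.5, 91.2, 88.7]  # percent utilization, illustrative

    ALERT_THRESHOLD = 85.0  # hypothetical alerting threshold
    average = sum(cpu_samples) / len(cpu_samples)
    if average > ALERT_THRESHOLD:
        print(f"CPU at {average:.1f}%: consider scaling out or resizing")
    ```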

    Revisiting Placement and Migrating Applications

    But wait, there’s more!

    Public clouds constantly create new instance types, demand for a specific application may wane or grow, private clouds may become more cost-effective with the latest and greatest hardware, and business needs are constantly changing. In other words, the cloud an application is initially deployed on may not be the one it stays on forever. Repeating the benchmarking exercise annually or quarterly is a good idea to detect when it might be time for a change.

    Again, should a migration be necessary, a good CMP should provide the tools to make it easy to back up data from the initial deployment, create a new deployment on a different cloud, restore the data to the new deployment, and shut down the old deployment, as sketched below.
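
    The flow reduces to four steps; every helper function in this sketch is a hypothetical stand-in that just prints what a real CMP would do, not an actual CMP API:

    ```python
    # Hedged sketch of the migration flow described above.

    def backup(deployment):
        print(f"backing up data from {deployment}")
        return f"{deployment}-snapshot"

    def deploy(app_model, cloud):
        print(f"deploying {app_model} to {cloud}")
        return f"{app_model}@{cloud}"

    def restore(deployment, snapshot):
        print(f"restoring {snapshot} into {deployment}")

    def teardown(deployment):
        print(f"shutting down {deployment}")

    def migrate(app_model, old_deployment, target_cloud):
        snapshot = backup(old_deployment)                 # 1. back up data
        new_deployment = deploy(app_model, target_cloud)  # 2. stand up the new one
        restore(new_deployment, snapshot)                 # 3. restore the data
        teardown(old_deployment)                          # 4. retire the old one
        return new_deployment

    migrate("simple-web-app", "simple-web-app@cloud-a", "cloud-b")
    ```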

    Conclusion

    Managing applications in the cloud does not have to be complicated, and given how many aspects influencing an initial deployment choice change over time, application portability is important. Homegrown scripting tools used to manage these phases can grow out of control quickly or limit cloud choice to those a specific team has expertise with. Fortunately, CMPs make it easy to model, benchmark, deploy, monitor and migrate applications as they flow through their natural lifecycle.

    This blog originally appeared on Cloud Computing Magazine.

  2. Cloud: How Did We Get Here and What’s Next?

    It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

    It All Starts with Server Utilization

    In the mid-1990s, HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, launching an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high-availability needs you might have.

    That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired when the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

    But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications sit side by side and use 30 percent of their hardware most days when you could have them both run on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capitalization spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it brought down the other application with it, giving rise to the noisy neighbor problem.

    What if there was another way to separate physical resources in some sort of isolation technique so that you could reduce the chances that applications could bring each other down?

    The Birth of Virtualization and Pets vs. Cattle

    The answer turned out to be the hypervisor, which could isolate resources from one another on a physical machine to create a virtual machine. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

    But in the early 2000s, developers started to think about their architectures differently. In a physical-server-only world, resources are scarce and take months to expand. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets: you give them great care and feeding, you often give them names, and you go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly, because any change to the system increased the chances that something would fail.

    What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently and not plan for peak hardware if the pooled resources are large enough (remember that, it’ll be important later), but you think about deployments differently too.

    Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the new patch and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you now had the ability to introduce changes or roll them back more easily. That led to line-of-business teams turning to software developers as change agents for increased revenue.
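
    A minimal sketch of that cattle-style patch, with hypothetical helpers standing in for a real provisioning API:

    ```python
    # Pets vs. cattle: replace the VM instead of patching it in place.
    # Helper names are hypothetical, not a real provisioning API.

    def create_vm(image):
        print(f"creating VM from {image}")
        return f"vm({image})"

    def route_traffic_to(vm):
        print(f"routing traffic to {vm}")

    def destroy_vm(vm):
        print(f"destroying {vm}")

    old_vm = create_vm("base-image-v1")          # the running, unpatched VM
    new_vm = create_vm("base-image-v2-patched")  # cattle: build a patched one
    route_traffic_to(new_vm)                     # cut over in minutes
    destroy_vm(old_vm)                           # then shut down the old one
    ```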

    Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

    If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

    While there is a ton of innovative work going on in public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady-state demand are attractive for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

    Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving their constituents in the line-of-business teams the self-service, on-demand provisioning they demand with cattle-style thinking.

    What’s Next: Chickens and Feathers (Containers and FaaS)

    Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treated application components as disposable entities that can be created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter-weight resource isolation technique than hypervisors do, can be created in seconds—a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.

    What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. This technology is sometimes known as Serverless, which is a bit of a misnomer since there is indeed a server providing the compute services. What differentiates it from containers is that developers need to know nothing about the hot standby container within which their code runs. That means a unit of compute can sit on disk when not in use instead of taking up memory waiting for a transaction to come in. While the ramifications of this technology aren’t yet fully understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.
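
    As an illustration of how small that unit of compute can be, a FaaS function is typically just a handler. This sketch follows the common AWS Lambda Python handler signature; the event payload field is illustrative:

    ```python
    # Minimal FaaS-style handler. The platform spins up compute only when an
    # event arrives; the code knows nothing about the container it runs in.
    def handler(event, context):
        name = event.get("name", "world")  # hypothetical event payload field
        return {"statusCode": 200, "body": f"Hello, {name}!"}

    # Local sanity check; in production the platform invokes handler().
    print(handler({"name": "cloud"}, None))
    ```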

    Conclusion

    Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to run alongside siblings they might have nothing to do with, but which they could bring to their knees. Virtualization allowed us to separate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter-weight resource isolation techniques make even more rapid innovation possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

    Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

    This blog originally appeared on Cloud Computing Magazine.

  3. Top 7 Myths About Moving to the Cloud

    If you make your living selling computer hardware, you’ve probably noticed that the world has changed. It used to be that the people who managed IT hardware in a big company—your buyer—had all the purchasing power, and their constituents in the line-of-business teams had no place else to get IT services. You’d fill your quota by showing up with a newer version of the same hardware you sold the same people three to five years ago, and everybody was happy.

    Then AWS happened in 2006. Salesforce.com had already happened in 1999. Lots of other IaaS and SaaS vendors started springing up all over the place, and all of a sudden, those constituents your IT buyer had a monopoly on had other places to go for IT services—places that enabled them to move faster—and the three-to-five-year hardware refresh cycle started to suffer.

    But selling is still selling, and there are a lot of myths out there about why someone like you can’t sell cloud. Let’s debunk a few of them now.

    Myth #1: My customers aren’t moving to cloud.

    If your IT buyer customers haven’t seen decreased budgets for on-premises hardware, consider yourself lucky. In its cloud research survey issued last November, IDG Enterprise reported, “Cloud technology is becoming a staple of organizations’ infrastructure as 70% have at least one application in the cloud. This is not the end, as 56% of organizations are still identifying IT operations that are candidates for cloud hosting.” If your IT buyer isn’t at least considering spending on cloud, it’s almost guaranteed that there is Shadow IT going on in their line-of-business teams.

    Myth #2: I don’t see any Shadow IT.

    Hence, the “Shadow” part. As far back as May of 2015, a Brocade survey of 200 CIOs found that over 80 percent had seen some unauthorized provisioning of cloud services. This doesn’t happen because line-of-business teams are evil. It happens because they have a need for speed and IT is great at efficiency and security but typically lousy at doing things quickly.

    Myth #3: My customer workloads have steady demand.

    While it is true that the cloud consumption model works best with varying demand, when you zoom in on almost any workload and examine utilization on nights and weekends, there is almost always a case to be made that any particular workload is variable.

    Myth #4: Only new workloads run in cloud.

    Any new technology takes on a familiar pattern where organizations try less risky, new projects that don’t necessarily impact the bottom line so they have a safe way to experiment. Cloud computing has been no different, but the tide has started to shift to legacy workloads. This past November, ZK Research compared cloud adoption to virtualization adoption by pointing out, “The initial wave of adoption then was about companies trying new things, not mission-critical workloads. Once organizations trusted the technology, major apps migrated. Today, virtualization is a no brainer because it’s a known technology with well-defined best practices. Cloud computing is going through the same trend today.”

    Myth #5: Cloud is too hard to learn.

    Granted, selling cloud computing is different from selling hardware refreshes, but as a sales executive, it’s still about building relationships. The biggest changes relate to whom you build those relationships with (now including line-of-business teams instead of just IT, but that’s who has the money anyway) and utilizing the subscription-based financial model of cloud consumption. (Gartner has a great article explaining the basics.)

    Myth #6: I don’t have relationships with business teams.

    Certainly, there is some overhead involved in building new relationships as opposed to leveraging your existing ones, but increasingly the line-of-business teams are retaining more of their IT budgets so investing that time will pay off. Even CIO Magazine admits that CMOs will spend more than CIOs on technology in 2017. Go to where the money is—it’ll be worth your time.

    Myth #7: I don’t get paid on cloud.

    This one is, admittedly, the trickiest on this list, because some of the solution is in the hands of your company rather than within your power directly. If you work for a value-added reseller, public cloud vendors have plenty of programs that do pay you on cloud. Even if that isn’t the case, educating yourself on public/private cloud differences and building those relationships with business teams can help preserve sales of the hardware a customer’s cloud ultimately runs on.

    Another step would be to get acquainted with a Cloud Management Platform, software that helps an enterprise manage workloads across both public and private clouds from a single pane of glass. Moving up the stack to this control point can help you stay involved in key decisions your customer makes and put you in the position of a trusted advisor.

    Selling is fundamentally about building relationships, understanding problems and providing solutions. Regardless of the technology encased in the solution, that will always be true. There is a learning curve involved with cloud adoption, meeting new people you haven’t worked with before and potentially adapting to a subscription-based model, but the fundamentals remain the same: providing value to someone who has a roadblock they need help getting around.

    This blog originally appeared on the Salesforce Blog.

  4. Managing Applications Across Hybrid Clouds

    Guest Author: Brad Casemore

    IDC Research Director, Datacenter Networks

    Whether resident in traditional datacenters or – increasingly – in the cloud, applications remain the means by which digital transformation is brought to fruition and business value is realized. Accordingly, management and orchestration of applications – and not just management of infrastructure resources – are critical to successful digital transformation initiatives.

    IDC research finds that enterprises will continue to run applications in a variety of environments, including traditional datacenters, private clouds, and public clouds. That said, cloud adoption is an expanding element of enterprise IT strategies.

    Watch Video and Read IDC Paper related to this blog!

    In 2016, enterprise adoption of cloud moved into the mainstream, with about 68% of respondents to IDC’s annual CloudView survey indicating they were currently using public or private cloud for more than one or two small applications, a 61% increase over the prior year’s survey.

    Within this context, enterprises want cloud-management solutions that allow them to get full value from their existing IT capabilities as well as from their ongoing and planned cloud initiatives. At the same time, enterprises don’t want to be locked in to a particular platform or cloud. They want the freedom to deploy and manage applications in both their datacenter and in cloud environments, and they want to be able to do so efficiently, securely, and with full control. Ideally, they want the application environment to be dictated exclusively by business requirements and technical applicability rather than by external constraints. This is why enterprises are increasingly wary of tools optimized for a single application environment, and why they are equally skeptical of automation that is hardwired to a specific cloud.

    To be sure, the greatest benefit of having an optimized cloud-application management system is strategic flexibility. In implementing a hybrid IT strategy with consistent multi-cloud application management, enterprise IT can deliver on the full promise of cloud while reducing the complexity, cost, security, governance, and lock-in risks associated with delivering services across mixed environments. As such, there’s no need to worry about cloud-specific APIs or about the threat of cloud lock-in. Instead, enterprises can focus on a service delivery strategy tailored to the needs of the organization, allowing applications to be deployed in the best possible environments.

    An additional benefit is speed and agility. In this respect, enterprises can align operations with agile development, helping accelerate the application development lifecycle. For example, enterprises can boost productivity and decrease time to market by providing developers with self-service portals to provision fully configured application stacks in any environment. Developers can remain focused on customer needs, and not on infrastructure or downstream deployment services.

    To learn more about the challenges and benefits of managing applications across hybrid clouds, and to read about how Cisco CloudCenter responds to those challenges, I invite you to read an IDC Technology Spotlight titled, “Avoiding Cloud Lock-In: Managing Applications Across Hybrid Clouds.”

    Watch Video and Read IDC Paper.

    This blog originally appeared on Cisco Blogs

  5. Why Cloud? Justification for Non-techies

    Cloud computing is all the rage today, to the point that it feels like you can’t fill out your “buzzword bingo” card at any meeting without using the phrase. There are all kinds of technical reasons why cloud has the market momentum it does, but what if you aren’t swayed by such things? If you’ve seen technology trends come and go, you need non-technical justification for moving your business in any direction, and cloud computing is no different for you.

    So, what is the main justification for business owners to use cloud that doesn’t involve a lot of technical jargon? Let’s get to the bottom line and talk ROI and payback instead.

    Asset Procurement Financials Before Cloud

    If you go back to a simpler time, before virtualization was even a thing — let alone cloud computing — the financial justification process for IT or any other kind of capital asset was pretty much the same:

    1. Spend a lot of money up front on equipment.
    2. Wait for that equipment to be installed and configured correctly.
    3. Reap gains for your business from that new equipment for years to come.

    In this model, it is common to estimate the expected annual gains and to calculate a payback period. In other words, how long will it take to recoup the investment made in Year 0, before the equipment is even installed? When weighing options against one another, those with shorter payback periods are more attractive than those with longer ones.

    Another way to judge different choices against one another is with an ROI calculation: take the total anticipated returns, subtract the total investment, divide that difference by the total investment, and multiply by 100.
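
    Written out, with a worked example (the dollar figures below are purely illustrative):

    \[
    \text{ROI} = \frac{\text{total returns} - \text{total investment}}{\text{total investment}} \times 100\%
    \]

    For instance, a $200,000 investment that returns $500,000 over the analysis period yields an ROI of (500,000 - 200,000) / 200,000 x 100 = 150%. By the same logic, if that investment produces $80,000 in annual gains, its payback period is 200,000 / 80,000 = 2.5 years.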

    The difficulty with either the payback or the ROI approach is that you are left to estimate the total returns. In other words, you don’t really know what benefits you’ll receive from your large, up-front purchase; you have to estimate them. And since this kind of analysis is typically made over a three-to-five-year period, it can be a while before you figure out whether you’re wrong. If you are, it can be a very expensive proposition to correct.

    Enter Cloud: Getting More At-Bats More Quickly

    Instead of having to wait two years to figure out if your estimated returns are correct, wouldn’t it be better to find out sooner? And instead of estimating the returns in the first place, wouldn’t it be better if you could find out the actual benefits sooner? Also, I bet you’d rather pay as you go instead of investing all that money up front and hoping the return comes at all, right?

    These are the real business benefits of cloud. In baseball terminology, it’s about getting more at-bats, or put another way, more cycles with your technology investment by trying options that don’t require the long installation lead times. That allows you to quickly evaluate the benefits of the investment with a smaller up-front investment and either celebrate the genius of your choice or admit defeat and move on to an alternative.

    Think about what that means for the ROI calculation. The total investment sits in the denominator of that equation (and is subtracted in the numerator). Lower it, and the resulting ROI improves on both counts.

    For payback, the cloud model means a business no longer has to estimate returns through spreadsheet mechanics influenced by the sales team chasing your investment. Instead, payback can be based on your actual use of the technology as soon as possible, with an option to stop without financial penalty. That lets you gather more detailed data on your financial returns sooner.

    Cloud Is Not About Tech, It’s About Speedy Investments

    This is the real takeaway here: speed. In the modern economy, it is more efficient to try technology investments, quickly determine whether they deliver the benefits they promise, and move on than it is to go through a long sales cycle followed by an even longer installation process only to find out whether the equipment you purchased was a huge waste of resources. It is OK to fail; just do so quickly, so you can cross that wrong answer off your list and move closer to whatever the right solution is. Doing that over and over again with solutions you pay for as you go is a far better use of your budget.

    This piece originally appeared on betanews.
