Tag Archive: cloud computing

  1. Cloud Application Lifecycle Management


    Enterprises typically have diverse application portfolios, which is why so many are turning to a hybrid cloud strategy. Some applications have variable workloads and low data sensitivity, making them natural fits for public clouds. Others have data that not everybody is comfortable having outside of a corporate firewall and steady state demand, making them better suited for private clouds.

    Regardless of where an application lives, and even if that changes over time, all cloud applications go through a predictable lifecycle that, when optimized with a Cloud Management Platform (CMP), can deliver better value to an organization.

    Why a CMP?

    Most companies have a hybrid cloud strategy precisely because of the application portfolio diversity described above. Executed in a vacuum, though, such a strategy forces administrators to bounce around among different cloud consoles to gather information about deployments. Enter a CMP, which provides a single pane of glass through which an administrator can view all of the clouds where application deployments might land.

    Such tools typically provide governance so that an administrator can dictate who is allowed to deploy which applications where. Metering and billing matter as well, letting administrators put up guardrails so that individuals or teams don’t deploy too many resources at once without approval.

    Gone, though, are the days when it took three weeks and multiple trouble tickets to get a virtual machine (VM). CMPs provide end users with self-service, on-demand resource provisioning while still giving administrators a degree of control. An important aspect of CMP functionality is managing the lifecycle of an individual application, which typically starts with the modeling process.

    Modeling

    The process typically starts well before an application is deployed, with some sort of modeling exercise. Someone with application knowledge—and in this context an “application” can be as simple as a VM running your favorite operating system or as complex as a 15-tier behemoth with multiple queuing systems—tells the CMP which components make up the application and how those components interact with each other.

    Image via Cisco
    Here, as an example, we have a simple three-tier Web application with a local load balancer (HAProxy), a Web server (Apache), and a database server (MySQL). Each component commonly has security, monitoring, and other details mandated by a central IT governing authority built into it, so that no application modeler can easily break company-accepted standards.
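
    A model like this is ultimately just structured data. The sketch below is a hypothetical representation (not any CMP’s actual schema), assuming each tier declares its software image and the tiers it depends on; a deployment engine can then derive the order in which tiers must start:

```python
# Hypothetical three-tier application model (illustrative names, not a
# real CMP schema): each tier declares its software image and the tiers
# it depends on.
APP_MODEL = {
    "name": "simple-web-app",
    "tiers": [
        {"name": "lb",  "image": "haproxy", "depends_on": ["web"]},
        {"name": "web", "image": "apache",  "depends_on": ["db"]},
        {"name": "db",  "image": "mysql",   "depends_on": []},
    ],
}

def deploy_order(model):
    """Return tier names in dependency order (leaves first), so the
    database starts before the web tier and the load balancer last."""
    by_name = {t["name"]: t for t in model["tiers"]}
    ordered, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in by_name[name]["depends_on"]:
            visit(dep)
        ordered.append(name)

    for tier in model["tiers"]:
        visit(tier["name"])
    return ordered

print(deploy_order(APP_MODEL))  # -> ['db', 'web', 'lb']
```

    Because the database has no dependencies it starts first, and the load balancer, which fronts everything, starts last.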

    When completed, an application model is ready for deployment. But first, some time must be spent determining where it runs best.

    Placement via Benchmarking

    Some applications will have their deployment target clouds dictated by data sensitivity or workload variability, as described at the beginning of this article. Others, though, will have the flexibility to base placement on a comparison of price and performance across different clouds. How can you figure out which cloud an application runs best on, and what does “best” even mean?

    That’s where a CMP that offers benchmarking can be helpful. With an application model complete, it is easy to deploy it multiple times with test data and execute load testing against each deployment to see which cloud, and which instance types on that cloud, offer the most throughput. For example:

    Image via Cisco

    Here, an application model similar to the one discussed in the previous section was deployed across three different public clouds with 2-, 4-, 8-, and 16-CPU instance types (where available) at each of the three tiers. The Y-axis of this scatterplot shows the number of transactions per second each configuration could handle, and the X-axis shows its approximate per-hour cost. Mousing over each dot would reveal the instance types used, but even without that, you can see that as the cost rises beyond the first two instance types on each cloud, there are no significant throughput gains.

    This means that, for this specific application, choosing anything beyond the 2- or 4-CPU instance types is a waste of money. A final decision can be made by weighing whether price or performance matters more for the business case at hand.
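
    The selection logic described here (find the throughput plateau, then optimize for cost) can be sketched in a few lines. All numbers below are invented for illustration, not the actual benchmark data behind the scatterplot:

```python
# Hypothetical benchmark results as (cloud, CPUs, transactions/sec,
# dollars/hour). The numbers are invented to mirror the pattern in the
# scatterplot: throughput plateaus beyond the smaller instance types.
RESULTS = [
    ("cloud-a", 2, 180, 0.10), ("cloud-a", 4, 240, 0.20),
    ("cloud-a", 8, 250, 0.40), ("cloud-a", 16, 252, 0.80),
    ("cloud-b", 2, 160, 0.12), ("cloud-b", 4, 230, 0.24),
    ("cloud-b", 8, 235, 0.48),
]

def best_value(results, tolerance=0.05):
    """Pick the cheapest configuration whose throughput is within
    `tolerance` (here 5%) of the best throughput observed anywhere."""
    peak = max(tps for _, _, tps, _ in results)
    good_enough = [r for r in results if r[2] >= peak * (1 - tolerance)]
    return min(good_enough, key=lambda r: r[3])

cloud, cpus, tps, cost = best_value(RESULTS)
print(f"{cloud}, {cpus} CPUs: {tps} tps at ${cost:.2f}/hr")
```

    With these illustrative numbers, the 4-CPU type on the first cloud wins: it sits on the throughput plateau at a fraction of the cost of the larger instance types.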

    Deployment and Monitoring

    With the application model in place and the results of the benchmarking known, a CMP might even restrict deployment of the application to the cloud that performed best in the latest test. CMPs typically perform rudimentary monitoring of basic counters like CPU utilization but leave more sophisticated analysis to tools like AppDynamics, whose agents can be baked into the application model components for consistent usage.

    Revisiting Placement and Migrating Applications

    But wait, there’s more!

    Public clouds constantly create new instance types, demand for a specific application may wane or grow, private clouds may become more cost-effective with the latest and greatest hardware, and business needs are constantly changing. In other words, the cloud an application is initially deployed on may not be the one it stays on forever. Repeating the benchmarking exercise annually or quarterly is a good idea to detect when it might be time for a change.

    Again, should a migration be necessary, a good CMP should provide the tools to make it easy to back up data from the initial deployment, create a new deployment on a different cloud, restore the data to the new deployment, and shut down the old deployment.
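
    The four migration steps can be sketched as a simple orchestration sequence. The function below is a placeholder outline, not a real CMP API; each descriptive step would map to whatever operation the platform actually exposes:

```python
# Placeholder outline of the migration sequence described above; the
# steps are descriptive strings standing in for real CMP operations.
def migrate(app, source_cloud, target_cloud, run=print):
    steps = [
        f"back up data from {app} on {source_cloud}",
        f"create new deployment of {app} on {target_cloud}",
        f"restore data to {app} on {target_cloud}",
        f"shut down old deployment of {app} on {source_cloud}",
    ]
    for step in steps:
        run(step)  # a real tool would invoke the CMP here
    return steps

migrate("web-app", "private-dc", "public-cloud")
```

    The ordering matters: the old deployment keeps serving until the data has been restored on the new cloud, so it is only shut down in the final step.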

    Conclusion

    Managing applications in the cloud does not have to be complicated, and given how many aspects influencing an initial deployment choice change over time, application portability is important. Homegrown scripting tools used to manage these phases can grow out of control quickly or limit cloud choice to those a specific team has expertise with. Fortunately, CMPs make it easy to model, benchmark, deploy, monitor and migrate applications as they flow through their natural lifecycle.

    This blog originally appeared on Cloud Computing Magazine.

  2. Top-down vs. Bottom-up Approaches to Moving to Cloud


    Many companies and government agencies adopt a “top-down” approach in setting IT policy, making sure technology is secure before approving it for organization-wide use. Some, conversely, employ a “bottom-up” approach, allowing individuals and offices to innovate, and then adopting those experiments that have successful results to improve the organization-wide technology infrastructure.

    Which is better? Or are both needed? How do you choose an approach to moving to cloud?

    Top-Down

    In the early 2000s, in the wake of the Internet Bubble bursting, the influence of the CIO skyrocketed as CIOs took steps to control costs. Using strategies sometimes called “ruthless standardization,” edicts would come from the C-suite about what combinations of technologies could be used in IT. We heard pronouncements like “Thou shalt not use LAMP stacks but instead Java Web applications with Oracle databases.”

    The good news about top-down technology decisions is that they tend to be cost-effective. With the CIO behind them, pricing from the major technology vendors (in this case, cloud Infrastructure-as-a-Service companies) is usually excellent. The bad news is that they typically limit innovation. If the CIO decrees that you can only use Oracle databases, for example, you may miss out on the NoSQL trend that opened up a completely new way of thinking about data management. At the beginning of the 21st century, that happened more frequently than people tend to remember.

    What lessons can we learn from the top-down technology decisions of the past when examining cloud adoption? The key to a good top-down approach is flexibility. Technology changes too quickly to rely completely on five-year licensing agreements, but without them costs can spiral out of control. A Cloud First strategy doesn’t mean Cloud Only strategy. Rather, it gives development teams a starting point they can argue out of if they can prove alternatives are more beneficial.

    Bottom-Up

    On the flip side of the coin is a bottom-up approach, where executives rely exclusively on developers to push innovations up the chain of command. In the late 1990s, agile development methodologies were a good example of this. Frustrated with waterfall methods that were standard at the time, and which relied on long release cycles with all requirements being specified before any code was written, developers flocked to a very different paradigm where minimum viable products were built and then iterated over many, much shorter releases. This eventually led to the DevOps and Continuous Integration/Continuous Delivery approaches that are commonplace today.

    Innovation can spring more easily from a bottom-up approach, but it often takes time for a winner to emerge from competing alternatives. And what works for one part of a business may not work for another, given contextual differences. For example, should a particular team use Amazon Web Services or Microsoft Azure for its public cloud hosting? Ask 10 different development teams this question and you’ll likely get a split vote, with teams basing their opinions on specific features they need that only one vendor provides, or on a data center that is geographically advantageous for one development team but not the others.

    Why Both Are Needed

    In reality, a little bit of both approaches is needed. An executive might make a declaration along the lines of the Cloud First strategy the U.S. CIO set for federal agencies. In other words, having a default position set by upper management, while defining criteria under which development teams can make a different technology choice, is the way to go.

    That might mean that the CIO selects (and gets great pricing on) one private cloud and one or two public clouds on which development teams are allowed to deploy workloads as the top-down portion of the approach. But within those clouds, let development teams use whatever derivative services each cloud vendor provides, along with whatever open source they would like, including more cutting-edge technologies like containers or Function-as-a-Service. This gives the bottom-up approach room to grow innovation, within guardrails set by a top-down edict that controls costs without stifling creativity.

    This blog originally appeared on Cloud Computing Magazine.

  3. The How-To’s of Cloud Computing: Essential Tips for Maximizing Your Cloud Usage


    Most people seem to think that, from a technology adoption lifecycle perspective, cloud adoption is in the Early Majority, perhaps even approaching the point of being ready for the Late Majority. That means most companies can learn lessons from the Innovators and Early Adopters who came before them so that cloud usage can be maximized. Here are some essential tips, covering different functions within cloud usage, to keep in mind as you embark upon your own cloud journey.

    Be SaaS-tastic with Table Stakes

    Here is something no one says: “Our competitive advantage is that we have better email than everyone else.” Similar statements can be made about Customer Relationship Management, Human Resources Management, collaboration, and many other pieces of software that any company simply has to have in order to function efficiently in a 21st-century economy. The days of an internal IT department custom-building such software, or even operationalizing off-the-shelf versions in a private data center, are probably gone, since companies already provide all of this as Software-as-a-Service (SaaS) with pay-per-use consumption.

    So, go all in on SaaS for the table stakes applications. They’ll save you money, since your expenses can match your growth, and they free you to focus on the custom application development that feeds your competitive advantage.

    Protecting Your Custom Application Investment with Cloud Portability

    Most modern custom application development ends up running on virtual machines in either a public or private cloud. From the application’s perspective, it often doesn’t matter where that hosting takes place so long as a load balancer can connect to the IP addresses of a set of web servers, which can then connect to the IP address of a database server, etc.

    From an operational perspective, it can be useful to use a Cloud Management Platform (CMP) that provides a single pane of management glass across multiple back ends. The Gartner Market Guide on CMPs suggests that reducing cloud lock-in is among the key reasons for using one. CMPs do so by making it much easier to deploy an application to one cloud, whether private or public, and migrate it elsewhere when the cloud market, your business priorities, or anything else changes in a way that requires altering your hosting strategy. Using a CMP future-proofs those decisions so you have easier-to-implement choices later.

    What Runs Where, Part 1: Data Sensitivity

    Now that you have your table stakes application needs being met by SaaS and your custom applications are deployed through a CMP, how do you know which cloud should host what application? What should run where?

    The first part of that question has to do with your data. While fears over public cloud security are mostly a distant memory, some people are still uncomfortable having key pieces of data outside walls they own. In other cases, regulatory constraints prevent certain data from resting in a public cloud. Still other situations force applications to be on-premises because of latency requirements with neighboring applications. In short, there are still legitimate data-driven reasons to deploy a particular application in your private data center.

    What Runs Where, Part 2: Workload Demand

    Another part of the question at hand has to do with the demand of the workload. If it doesn’t vary a whole lot, the case for private cloud improves since, especially at scale, private infrastructure can be cheaper over the long term. However, if your workload can be turned off or scaled down frequently, the public cloud is difficult to pass up given its per-hour or even per-minute pay-per-use model.
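
    The trade-off reduces to simple break-even arithmetic. With hypothetical prices (an always-on private VM amortizing to a fixed monthly cost versus a comparable public instance billed per hour), the crossover point falls out directly:

```python
# Illustrative break-even arithmetic; both prices are hypothetical.
PRIVATE_MONTHLY = 120.00  # amortized $/month for an always-on private VM
PUBLIC_HOURLY = 0.25      # $/hour for a comparable public cloud instance

def cheaper_option(hours_used_per_month):
    """Public cloud wins only while actual usage stays below break-even."""
    public_cost = PUBLIC_HOURLY * hours_used_per_month
    return "public" if public_cost < PRIVATE_MONTHLY else "private"

print(cheaper_option(200))  # bursty workload: $50 < $120 -> public
print(cheaper_option(730))  # 24/7 workload: $182.50 > $120 -> private
```

    At these illustrative rates the break-even point is 480 hours per month: a workload that runs around the clock (roughly 730 hours) favors private infrastructure, while one that can be shut down most of the time favors public.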

    What Runs Where, Part 3: Benchmark, Benchmark, Benchmark

    If data sensitivity and workload demand guidance don’t leave you with a clear hosting choice for a particular application, the tie-breaker may be to run a mini bake-off with a series of benchmark tests. Any top CMP should give you the ability to run application throughput testing for different instance types on a specific cloud or even across clouds so you can get a price/performance comparison to guide your hosting choice.

    For example, suppose you had a three-tier web application that used a load balancer, a web server, and a database server. Using a CMP, you could run throughput tests on different sized virtual machines on different clouds. On the following graph from one such test, throughput is shown on the Y-axis and approximate cost per hour on the X-axis:

    [Scatterplot: application throughput vs. approximate cost per hour across three clouds]

    This set of tests ran a throughput analysis on 2, 4, 8, and 16 CPU instance types across three different clouds and shows, for this specific application, that throughput peaked at the 4 CPU instance types and that AWS was both faster and cheaper than competitors.

    It is important to keep in mind that each application runs slightly differently on each cloud provider, so results per application will vary significantly, but the graph illustrates the point of how important benchmarking can be in making workload placement decisions. And when something changes, like a vendor announcing new pricing or a new instance type, a test can quickly be run from the CMP to see if it makes a difference or not.

    Conclusion

    Enough lessons have been learned by the Early Adopters of cloud that solid tips are now available for those still at the beginning of their journey. Making extensive use of SaaS for table stakes applications allows a focus on the custom applications that add value to a business’ bottom line. A CMP gives those custom applications flexibility and future portability, acting as an insurance policy against changes that will come later. Private cloud is best for applications with sensitive data or steady workload demand. Public cloud excels at hosting applications with less sensitive data and more variable workloads.

    When in doubt, benchmark using your CMP. Put all that together, and you can maximize your cloud usage to your benefit.

    This blog originally appeared on Salesforce Blogs.

  4. Cloud: How Did We Get Here and What’s Next?


    It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

    It All Starts with Server Utilization

    In the mid-1990s, when HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, it launched an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high availability needs you might have.

    That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired when the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

    But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications sit side by side and use 30 percent of their hardware most days when you could have them both run on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capitalization spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it brought down the other application with it, giving rise to the noisy neighbor problem.

    What if there was another way to separate physical resources in some sort of isolation technique so that you could reduce the chances that applications could bring each other down?

    The Birth of Virtualization and Pets vs. Cattle

    The answer turned out to be the hypervisor, which could isolate resources from one another on a physical machine to create a virtual machine. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

    But in the early 2000s, developers started to think about their architectures differently. In a physical server-only world, resources are scarce and take months to expand upon. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets, meaning, you give them great care and feeding, oftentimes you give them names, and you go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly because a change to the system increased the chances that something would fail.

    What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently and not plan for peak hardware if the pooled resources are large enough (remember that, it’ll be important later), but you think about deployments differently too.

    Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the new patch and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you now had the ability to introduce changes or roll them back more easily. That led to line-of-business teams turning to software developers as change agents for increased revenue.
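
    The contrast can be made concrete with a toy sketch: a “pet” is patched in place, while “cattle” are replaced wholesale by a machine built from a patched image. The dicts standing in for machines here are purely illustrative:

```python
# Toy contrast between the two mindsets; the dicts standing in for
# machines are purely illustrative.
def patch_pet(machine, patch):
    """Pets thinking: mutate the long-lived machine in place."""
    machine["patches"].append(patch)
    return machine

def patch_cattle(fleet, machine, patch):
    """Cattle thinking: build a replacement with the patch baked into
    its image, add it to the fleet, and terminate the old machine.
    Rolling back is just keeping the old image around."""
    replacement = {"image": machine["image"] + "+" + patch, "patches": []}
    fleet.remove(machine)
    fleet.append(replacement)
    return replacement

fleet = [{"image": "ubuntu-16.04", "patches": []}]
new_vm = patch_cattle(fleet, fleet[0], "kernel-fix")
print(new_vm["image"])  # -> ubuntu-16.04+kernel-fix
```

    Because the cattle path never mutates a running machine, every deployment is reproducible from its image, which is what makes rapid release and rollback practical.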

    Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

    If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

    While there is a ton of innovative work going on in public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady state demand are attractive candidates for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

    Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving their constituents in the line of business teams the self-service, on-demand provisioning they demand with cattle-style thinking.

    What’s Next: Chickens and Feathers (Containers and FaaS)

    Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treated application components as disposable entities that can be created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter weight resource isolation technique than hypervisors do, can be created in seconds—a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.

    What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. This technology is sometimes known as Serverless, a bit of a misnomer since there is indeed a server providing the compute services. What differentiates it from containers is that developers need to know nothing about the hot standby container within which their code runs. That means a unit of compute can sit on disk when not in use instead of occupying memory waiting for a transaction to come in. While the ramifications of this technology aren’t yet fully understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.
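
    A toy sketch of the idea: functions are registered up front but consume no compute until an event arrives, at which point the platform looks one up and invokes it on demand. The registry and function names here are invented for illustration, not any real FaaS platform’s API:

```python
# Toy sketch of the FaaS idea (all names invented): functions are
# registered but consume no compute until an event triggers them.
REGISTRY = {}

def register(name):
    """Decorator that records a function under a name, like a platform
    storing deployed function code on disk."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("thumbnail")
def make_thumbnail(event):
    # stand-in business logic; a real function might resize an image
    return f"thumbnail for {event['object']}"

def invoke(name, event):
    """The platform looks the function up and runs it only on demand."""
    return REGISTRY[name](event)

print(invoke("thumbnail", {"object": "photo.jpg"}))  # -> thumbnail for photo.jpg
```

    Between invocations nothing runs: the registry entry is the only footprint, which is the per-millisecond economics the paragraph above describes.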

    Conclusion

    Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to run coincident with siblings they might not have anything to do with, but which they could bring crumbling to their knees. Virtualization allowed us to separate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter weight resource isolation techniques make even more rapid innovations possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

    Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

    This blog originally appeared on Cloud Computing Magazine.

  5. Managing Applications Across Hybrid Clouds


    Guest Author: Brad Casemore

    IDC Research Director, Datacenter Networks

    Whether resident in traditional datacenters or – increasingly – in the cloud, applications remain the means by which digital transformation is brought to fruition and business value is realized. Accordingly, management and orchestration of applications – and not just management of infrastructure resources – are critical to successful digital transformation initiatives.

    IDC research finds that enterprises will continue to run applications in a variety of environments, including traditional datacenters, private clouds, and public clouds. That said, cloud adoption is an expanding element of enterprise IT strategies.

    In 2016, enterprise adoption of cloud moved into the mainstream, with about 68% of respondents to IDC’s annual CloudView survey indicating they were currently using public or private cloud for more than one or two small applications, a 61% increase over the prior year’s survey.

    Within this context, enterprises want cloud-management solutions that allow them to get full value from their existing IT capabilities as well as from their ongoing and planned cloud initiatives. At the same time, enterprises don’t want to be locked in to a particular platform or cloud. They want the freedom to deploy and manage applications in both their datacenter and in cloud environments, and they want to be able to do so efficiently, securely, and with full control. Ideally, they want the application environment to be dictated exclusively by business requirements and technical applicability rather than by external constraints. This is why enterprises are increasingly wary of tools optimized for a single application environment, and why they are equally skeptical of automation that is hardwired to a specific cloud.

    To be sure, the greatest benefit of having an optimized cloud-application management system is strategic flexibility. In implementing a hybrid IT strategy with consistent multi-cloud application management, enterprise IT can deliver on the full promise of cloud while reducing the complexity, cost, security, governance, and lock-in risks associated with delivering services across mixed environments. As such, there’s no need to worry about cloud-specific APIs or about the threat of cloud lock-in. Instead, enterprises can focus on a service delivery strategy tailored to the needs of the organization, allowing applications to be deployed in the best possible environments.

    An additional benefit is represented by speed and agility. In this respect, enterprises can align operations with agile development, helping accelerate the application development lifecycle. For example, enterprises can boost productivity and decrease time to market by providing developers with self-service portals to provision fully configured application stacks in any environment. Developers can remain focused on customer needs, and not on infrastructure or downstream deployment services.

    To learn more about the challenges and benefits of managing applications across hybrid clouds, and to read about how Cisco CloudCenter responds to those challenges, I invite you to read an IDC Technology Spotlight titled “Avoiding Cloud Lock-In: Managing Applications Across Hybrid Clouds.”

    This blog originally appeared on Cisco Blogs.

CliQr Technologies, CliQr CloudCenter and CloudBlades are trademarks of CliQr Technologies, Inc. All other registered or unregistered trademarks are the sole property of their respective owners.