
Tag Archive: Cloud

  1. Cloud Application Lifecycle Management


    Enterprises typically have diverse application portfolios, which is why so many are turning to a hybrid cloud strategy. Some applications have variable workloads and low data sensitivity, making them natural fits for public clouds. Others have data that not everybody is comfortable having outside of a corporate firewall and steady state demand, making them better suited for private clouds.

    Regardless of where an application lives, and even if that changes over time, all cloud applications go through a predictable lifecycle that, when optimized with a Cloud Management Platform (CMP), can deliver better value to an organization.

    Why a CMP?

    Most companies have a hybrid cloud strategy precisely because of the application portfolio diversity described above. The problem with such a strategy on its own, though, is that administrators end up bouncing around among different cloud consoles to gather information about deployments. Enter the CMP, which provides a single pane of glass through which an administrator can view all clouds where application deployments might land.

    Such tools typically provide governance so that an administrator can dictate who is allowed to deploy which applications where. Metering and billing are important as well, so that administrators can put up guardrails that keep individuals or teams from deploying too many resources at once without approval.

    Gone, though, are the days when it took three weeks and multiple trouble tickets to get a virtual machine (VM). CMPs provide end users with self-service, on-demand resource provisioning while giving administrators a degree of control. An important aspect of CMP functionality is managing the lifecycle of an individual application, which typically starts with the modeling process.

    Modeling

    The lifecycle typically starts well before an application is deployed, with a modeling process. Someone with application knowledge—and in this context an “application” can be as simple as a VM with your favorite operating system on it or as complex as a 15-tier behemoth with multiple queuing systems—tells the CMP what components make up an application and how those components interact with each other.

    Image via Cisco

    Here, as an example, we have a simple three-tier Web application with a local load balancer (HAProxy), a Web server (Apache), and a database server (MySQL). Each of the components commonly has security, monitoring, and other details mandated by a central IT governing authority built in, so that an application modeler cannot easily break company-accepted standards.
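    To make the modeling step concrete, here is a minimal sketch, in Python, of the kind of information such a model captures. The structure, field names, and tier definitions are illustrative assumptions for this article, not any CMP's actual schema.

    ```python
    # Hypothetical application model for a three-tier web application.
    # Field names and structure are illustrative, not a real CMP schema.
    from dataclasses import dataclass, field

    @dataclass
    class Tier:
        name: str                    # e.g., "web"
        image: str                   # service or OS image for this tier
        depends_on: list = field(default_factory=list)  # tiers this one connects to
        min_instances: int = 1
        max_instances: int = 1
        # Standards mandated by central IT, baked into every tier
        monitoring_agent: str = "corp-monitoring"
        security_profile: str = "corp-baseline"

    three_tier_app = [
        Tier(name="load_balancer", image="haproxy", depends_on=["web"]),
        Tier(name="web", image="apache", depends_on=["db"], max_instances=4),
        Tier(name="db", image="mysql"),
    ]

    for tier in three_tier_app:
        print(f"{tier.name}: image={tier.image}, connects to {tier.depends_on}")
    ```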

    When completed, an application model is ready for deployment. But first, some time must be spent determining where it runs best.

    Placement via Benchmarking

    Some applications are going to have their deployment target clouds dictated by data sensitivity or workload variability, as described at the beginning of this article. Others, though, will have flexibility, and their placement can be based on a comparison of price and performance across different clouds. How can you figure out which cloud an application runs best on, and what does “best” even mean?

    That’s where a CMP that offers benchmarking can be helpful. With an application model complete, it can be easy to deploy it multiple times with test data and execute load testing against it to see which cloud, and which instance types on which cloud, offer more throughput. For example:

     

    Image via Cisco

     

    Here, an application model similar to the one discussed in the previous section was deployed across three different public clouds with 2, 4, 8, and 16 CPU instance types (where available) at each of the three tiers. On the Y-axis of this scatterplot we see the number of transactions per second each configuration could handle, and on the X-axis its approximate per-hour cost. Mousing over each dot would reveal the instance types used in each, but even without that, you can see that as the cost goes up beyond the first two instance types on each cloud, there are no significant throughput gains.

    This means that choosing anything beyond the 2 or 4 CPU instance types is, for this specific application, a waste of money, and a final decision can be made by weighing whether price or performance matters more for the business case at hand.
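    As a rough illustration of how results like these might be turned into a placement decision, here is a small Python sketch that picks the cheapest configuration whose throughput is within a tolerance of the best observed result. The numbers and the cloud and instance names are invented for the example.

    ```python
    # Hypothetical benchmark results: (cloud, instance type, transactions/sec, $/hour).
    # All values are invented for illustration.
    results = [
        ("cloud-a", "2cpu",  410, 0.35),
        ("cloud-a", "4cpu",  430, 0.70),
        ("cloud-a", "8cpu",  440, 1.40),
        ("cloud-b", "2cpu",  380, 0.30),
        ("cloud-b", "4cpu",  425, 0.60),
        ("cloud-b", "8cpu",  435, 1.20),
        ("cloud-c", "4cpu",  400, 0.55),
        ("cloud-c", "16cpu", 445, 2.10),
    ]

    def pick_placement(results, tolerance=0.95):
        """Cheapest configuration within `tolerance` of the best observed throughput."""
        best_tps = max(tps for _, _, tps, _ in results)
        good_enough = [r for r in results if r[2] >= tolerance * best_tps]
        return min(good_enough, key=lambda r: r[3])  # lowest cost per hour

    cloud, instance, tps, cost = pick_placement(results)
    print(f"Deploy to {cloud} on {instance}: ~{tps} tps at ${cost:.2f}/hour")
    ```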

    Deployment and Monitoring

    With the application model in place and the results of the benchmarking known, a CMP might even restrict deployment of the application to the best cloud based on the most recent test results. CMPs typically perform rudimentary monitoring for basic counters like CPU utilization but leave more sophisticated analysis to tools like AppDynamics, whose agents can be baked into the application model components for consistent usage.

    Revisiting Placement and Migrating Applications

    But wait, there’s more!

    Public clouds constantly create new instance types, demand for a specific application may wane or grow, private clouds may become more cost-effective with the latest and greatest hardware, and business needs are constantly changing. In other words, the cloud an application is initially deployed on may not be the one it stays on forever. Repeating the benchmarking exercise annually or quarterly is a good idea to detect when it might be time for a change.

    Again, should a migration be necessary, a good CMP should provide the tools to make it easy to back up data from the initial deployment, create a new deployment on a different cloud, restore the data into the new deployment, and shut down the old one.
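    That migration is essentially a four-step workflow. The Python sketch below shows the sequence, with placeholder functions standing in for whatever backup, provisioning, and teardown mechanisms a given CMP actually exposes; none of the names correspond to a real product API.

    ```python
    # Hypothetical migration workflow; function names are placeholders, not a real CMP API.

    def backup_data(deployment_id: str) -> str:
        print(f"Backing up data from {deployment_id}")
        return f"backup-of-{deployment_id}"

    def deploy_application(model: str, cloud: str) -> str:
        print(f"Deploying {model} to {cloud}")
        return f"{model}@{cloud}"

    def restore_data(deployment_id: str, backup_id: str) -> None:
        print(f"Restoring {backup_id} into {deployment_id}")

    def terminate(deployment_id: str) -> None:
        print(f"Shutting down {deployment_id}")

    def migrate(old_deployment: str, model: str, target_cloud: str) -> str:
        backup_id = backup_data(old_deployment)                    # 1. back up the old deployment's data
        new_deployment = deploy_application(model, target_cloud)   # 2. create a deployment on the new cloud
        restore_data(new_deployment, backup_id)                    # 3. restore the data into it
        terminate(old_deployment)                                  # 4. shut down the old deployment
        return new_deployment

    migrate("three-tier-web@cloud-a", "three-tier-web", "cloud-b")
    ```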

    Conclusion

    Managing applications in the cloud does not have to be complicated, and given how many aspects influencing an initial deployment choice change over time, application portability is important. Homegrown scripting tools used to manage these phases can grow out of control quickly or limit cloud choice to those a specific team has expertise with. Fortunately, CMPs make it easy to model, benchmark, deploy, monitor and migrate applications as they flow through their natural lifecycle.

    This blog originally appeared on Cloud Computing Magazine.

  2. Top-down vs. Bottom-up Approaches to Moving to Cloud


    Many companies and government agencies adopt a “top-down” approach in setting IT policy, making sure technology is secure before approving it for organization-wide use. Some, conversely, employ a “bottom-up” approach, allowing individuals and offices to innovate, and then adopting those experiments that have successful results to improve the organization-wide technology infrastructure.

    Which is better? Or are both needed? How do you choose an approach to moving to cloud?

    Top-Down

    In the early 2000s, in the wake of the Internet Bubble bursting, the influence of CIOs skyrocketed as they took steps to control costs. Using strategies that were sometimes called “ruthless standardization,” edicts would come from the C-suite about what combinations of technologies could be used in IT. We heard pronouncements like “Thou shalt not use LAMP stacks but instead Java Web applications with Oracle databases.”

    The good news about top-down technology decisions is that they tend to be cost-effective. With the CIO behind them, pricing from the major technology vendors (in this case, cloud Infrastructure-as-a-Service providers) is usually excellent. The bad news is that they are typically limiting when it comes to innovation. If the CIO decrees that you can only use Oracle databases, for example, you may miss out on the NoSQL trend that opened up a completely new way of thinking about data management. At the beginning of the 21st century, that happened more frequently than people tend to remember.

    What lessons can we learn from the top-down technology decisions of the past when examining cloud adoption? The key to a good top-down approach is flexibility. Technology changes too quickly to rely completely on five-year licensing agreements, but without them costs can spiral out of control. A Cloud First strategy doesn’t mean a Cloud Only strategy. Rather, it gives development teams a starting point they can argue out of if they can prove alternatives are more beneficial.

    Bottom-Up

    On the flip side of the coin is a bottom-up approach, where executives rely exclusively on developers to push innovations up the chain of command. In the late 1990s, agile development methodologies were a good example of this. Frustrated with waterfall methods that were standard at the time, and which relied on long release cycles with all requirements being specified before any code was written, developers flocked to a very different paradigm where minimum viable products were built and then iterated over many, much shorter releases. This eventually led to the DevOps and Continuous Integration/Continuous Delivery approaches that are commonplace today.

    Innovation can spring more easily from a bottom-up approach, but it often takes time for a winner to emerge from what can be competing alternatives. And what works for one part of a business may not work for another, given contextual differences. For example, should a particular team use Amazon Web Services or Microsoft Azure for its public cloud hosting? Ask 10 different development teams this question and you’ll likely get a split vote, with teams basing their opinions on specific features they need that only one vendor provides, or a geographically more advantageous data center that one development team needs but others do not.

    Why Both Are Needed

    In reality, a little bit of both approaches is needed. An executive might make a declaration like the Cloud First strategy the U.S. CIO set for federal agencies. In other words, the way to go is a default position set by upper management, along with criteria under which development teams can make a different technology choice.

    That might mean that the CIO selects (and gets great pricing on) one private cloud and one or two public clouds on which development teams are allowed to deploy workloads as the top-down portion of the approach. But within those clouds, let development teams use whatever derivative services each cloud vendor might provide, along with whatever open source they would like, including more cutting-edge technologies like containers or Function-as-a-Service. This gives the bottom-up approach room to grow innovation, but with guardrails set by the top-down edict that control costs without stifling creativity.

    This blog originally appeared on Cloud Computing Magazine.

     

  3. CloudCenter 4.8 Release–Brownfield Import and Action Library


    The big news in March for cloud was the AWS S3 outage that brought down some large pieces of the Internet with it. While the world didn’t end, it definitely caused issues and illustrated the need for a cloud strategy that accounts for vendor failure. But managing multiple clouds, accounts, and capabilities–that sounds too complex, doesn’t it?

    Traditionally, you had only a couple of choices for managing hybrid cloud or multi-cloud service delivery, usually forcing you to use multiple sets of orchestration and management tools that are technology- or cloud-specific. The better approach, though, is to choose a cloud management platform like Cisco CloudCenter to manage your hybrid estate. If you choose a “One Platform” approach, you get more value by using your CMP to manage as many workload types and technologies as possible.

    That brings us to the Cisco CloudCenter 4.8 release, which ships at the end of April.

    This release showcases some great work, building upon previous release themes. Before this release, admins had the ability to view existing VMs via the inventory report, but weren’t able to do anything with them. Additionally, we had a list of “Day 2” operations, but no way to extend that list in a meaningful way.

    4.8 delivers the second phase of our three-phase plan around brownfield workloads and operations, offering new features that broaden the scope of workloads CloudCenter can manage and extend what users can do via self-service. These changes deliver on the promise of “Any Cloud, One Platform” and offer an elegant solution for reining in shadow IT while improving both user self-service capabilities and central IT control. New features include:

    • Brownfield Import VMs – Import and manage previously deployed workloads along with new CloudCenter-deployed workloads, in both data center and cloud.
    • VM or Application View – Flip between VM view and application view, for those who prefer to manage workloads at the VM level.
    • Action Library – Define and execute self-service post-deployment management actions that further enable self-service within guardrails and reduce the need for IT help tickets.
    • New Cloud Type – Alibaba Cloud support, across all regions.
    • Native Language Support – Chinese, Japanese, and French.
    • Service Provider License Agreement (SPLA) – A licensing offer that fits “fee for service” service providers.
    • Cisco Ecosystem Benefits – The combination of VM import and the action library offers improved interaction with Cisco Tetration, UCS Director, and AppDynamics.

    Watch the CloudCenter 4.8 video to learn more.

    Below are details of these new features:

    • Brownfield Import VMs – Users can now import and manage previously deployed workloads alongside new CloudCenter deployments. When admins connect CloudCenter to a cloud or datacenter, they can discover and import pre-existing virtual machines and assign those discovered VMs to the appropriate owner, enabling further action and cost reporting.

    This feature increases the ROI on your decision to use a one-platform approach to hybrid IT management. It allows IT and users to work together to clean up after shadow IT. Application owners can still access and manage their workloads, but can now also use a wide range of self-service post-deployment actions that previously required separate tools, direct cloud access, or IT ticket requests.

    • VM or Application View – Users can toggle between the application list and VM views and can now manage at the VM level through CloudCenter. Admins can see all imported VMs, whether unassigned or already assigned to a specific user. Users get simplified management of applications and VMs, all from a single interface. This feature offers a fine-grained view of virtual infrastructure for those who prefer to manage at the VM level.
    • Action Library – CloudCenter’s full lifecycle management includes definable post-deployment operations via an extensible library of actions. Users can apply these actions to either imported VMs or previously deployed applications, without logging an IT ticket and without having to learn yet another tool. Actions are contextually aware, so they only display to the end user if they are appropriate for that cloud and state, simplifying IT management and making application owners more efficient without their having to know cloud-specific tools and API calls.

    Users can scale up or down by adding or removing CPUs, memory, and disk volumes. They can perform other tasks like backing up a database or installing an agent, or even trigger vMotion or “lift and shift” moves via linked tools, all without the hassle of an IT help request. And users don’t need deep knowledge of cloud-specific APIs or multiple environment-specific management tools.

    Admins can easily define or modify actions in a central library. Post-deployment actions can include common management tasks that leverage scripts, commands, environment-specific API calls, or even API calls to other tools. Role-based access control and context-driven policy rules govern who can use various types of actions in different deployment environments. In other words, admins can provide automation guardrails by controlling who can use which actions, and how, when, and where they can be used (see the sketch after this list).

    • New Cloud Type – CloudCenter has added support for Alibaba Cloud. Users can now deploy applications to 13 regions, including China, Japan, Singapore, Australia, and Germany, as well as three in the United States. Alibaba Cloud is making inroads as a low-cost development environment, and the cloud price wars are far from over, which gives you all the more reason to maintain portability and leave your cloud options open.
    • Native Language Support – CloudCenter has expanded local language support with language packs for Chinese, Japanese, and French. The language is automatically selected based on the user’s browser settings and is applied to all features and walkthroughs.
    • Service Provider License Agreement (SPLA) – CloudCenter now has a service provider licensing option that is a better fit for “fee for service” environments. This option is limited to Cisco-qualified service providers.
    • Cisco Ecosystem Benefits – CloudCenter can now easily extend the value of other Cisco solutions. The combination of brownfield import and the action library delivers additional value to other IT operations tools. CloudCenter can now import VMs and deploy Tetration sensors and AppDynamics agents in both data center and cloud environments. Users can easily initiate Cisco UCS Director workflows to execute common infrastructure management tasks.
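    To illustrate the idea of context-aware, governed actions described above, here is a small Python sketch of an action-library entry and a filter that only surfaces actions appropriate for a user's role, cloud, and VM state. It is a generic illustration of the concept, not CloudCenter's actual data model or API.

    ```python
    # Hypothetical action-library entries; not CloudCenter's real schema or API.
    actions = [
        {"name": "resize",        "clouds": {"aws", "azure", "vmware"}, "states": {"running", "stopped"}, "roles": {"admin", "app_owner"}},
        {"name": "backup_db",     "clouds": {"aws", "vmware"},          "states": {"running"},            "roles": {"admin", "app_owner"}},
        {"name": "install_agent", "clouds": {"aws", "azure", "vmware"}, "states": {"running"},            "roles": {"admin"}},
        {"name": "vmotion",       "clouds": {"vmware"},                 "states": {"running"},            "roles": {"admin"}},
    ]

    def available_actions(role: str, cloud: str, vm_state: str):
        """Return only the actions appropriate for this user, cloud, and VM state."""
        return [
            a["name"] for a in actions
            if role in a["roles"] and cloud in a["clouds"] and vm_state in a["states"]
        ]

    # An application owner looking at a running AWS VM sees only what applies:
    print(available_actions(role="app_owner", cloud="aws", vm_state="running"))
    # -> ['resize', 'backup_db']
    ```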

    To request a demo, contact your sales team.

    This blog originally appeared on Cisco Blogs

  4. The How-To’s of Cloud Computing: Essential Tips for Maximizing Your Cloud Usage


    Most people seem to think that, from a technology adoption lifecycle perspective, cloud adoption is in the Early Majority, possibly even maturing to the point of being ready for the Late Majority. What that means for most companies is that there are lessons to be learned from the Innovators and Early Adopters who came before them, so that cloud usage can be maximized. Here are some essential tips for different aspects of cloud usage to keep in mind as you embark upon your own cloud journey.

    Be SaaS-tastic with Table Stakes

    Here is something no one says: “Our competitive advantage is that we have better email than everyone else.” Similar statements can be made about Customer Relationship Management, Human Resources Management, collaboration, and many other pieces of software that any company simply has to have in order to function efficiently in a 21st-century economy. The days of an internal IT department custom building, or even operationalizing off-the-shelf software in a private data center, are probably gone, since companies out there already provide all of this as Software-as-a-Service (SaaS) with pay-per-use consumption.

    So, go all in on SaaS for the table stakes applications. They’ll save you money, since your expenses on them can match your growth, and they allow you to focus on the custom application development that adds to your competitive advantage.

    Protecting Your Custom Application Investment with Cloud Portability

    Most modern custom application development ends up running on virtual machines in either a public or private cloud. From the application’s perspective, it often doesn’t matter where that hosting takes place so long as a load balancer can connect to the IP addresses of a set of web servers, which can then connect to the IP address of a database server, etc.

    From an operational perspective, it can be useful to use a Cloud Management Platform (CMP) that provides a single pane of management glass across multiple back ends. The Gartner Market Guide on CMPs suggests that reducing cloud lock-in is among the key reasons for using one. CMPs do this by making it much easier to deploy an application to one cloud, whether private or public, and migrate it someplace else when something in the cloud market, your business priorities, or anything else changes in a way that requires you to alter your hosting strategy. Using a CMP is a way of future-proofing those decisions so you have easier-to-implement choices later.

    What Runs Where, Part 1: Data Sensitivity

    Now that you have your table stakes application needs being met by SaaS and your custom applications are deployed through a CMP, how do you know which cloud should host what application? What should run where?

    The first part of that question has to do with your data. While fears over public cloud security are mostly a distant memory, some people are still uncomfortable having key pieces of data outside walls they own. In other cases, regulatory constraints prevent certain data from resting in a public cloud. Other situations force applications to be on-premises because of latency with other applications. In short, there are still legitimate data-related reasons that would lead you to deploy a particular application in your private data center.

    What Runs Where, Part 2: Workload Demand

    Another part of the question at hand has to do with the demand of the workload. If it doesn’t vary a whole lot, the case for private cloud improves since, especially at scale, private infrastructure can be cheaper over the long term. However, if your workload can be turned off or scaled down frequently, the public cloud is difficult to pass up given its per-hour or even per-minute pay-per-use model.
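    A rough way to quantify that trade-off is to compare the amortized cost of owning private capacity against the pay-per-use cost of public cloud for the hours the workload actually runs. The sketch below does that arithmetic in Python; all of the prices and utilization figures are invented for illustration.

    ```python
    # Back-of-the-envelope placement arithmetic; all numbers are illustrative.
    HOURS_PER_YEAR = 24 * 365

    def private_cost_per_year(capex: float, amortization_years: int, opex_per_year: float) -> float:
        """Amortized yearly cost of owning the capacity, whether or not it is busy."""
        return capex / amortization_years + opex_per_year

    def public_cost_per_year(rate_per_hour: float, utilization: float) -> float:
        """Pay-per-use cost: you only pay for the hours the workload actually runs."""
        return rate_per_hour * HOURS_PER_YEAR * utilization

    private = private_cost_per_year(capex=30_000, amortization_years=3, opex_per_year=4_000)

    for utilization in (0.25, 0.50, 1.00):  # fraction of the year the workload runs
        public = public_cost_per_year(rate_per_hour=2.00, utilization=utilization)
        cheaper = "public" if public < private else "private"
        print(f"{utilization:.0%} utilization: public ${public:,.0f}/yr vs private ${private:,.0f}/yr -> {cheaper}")
    ```

    Under these made-up numbers, the public cloud wins when the workload runs a quarter or half of the time, while steady, always-on demand tips the balance toward private infrastructure, which is exactly the intuition described above.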

    What Runs Where, Part 3: Benchmark, Benchmark, Benchmark

    If data sensitivity and workload demand guidance don’t leave you with a clear hosting choice for a particular application, the tie-breaker may be to run a mini bake-off with a series of benchmark tests. Any top CMP should give you the ability to run application throughput testing for different instance types on a specific cloud or even across clouds so you can get a price/performance comparison to guide your hosting choice.

    For example, suppose you had a three-tier web application that used a load balancer, a web server, and a database server. Using a CMP, you could run throughput tests on different sized virtual machines on different clouds. On the following graph from one such test, throughput is shown on the Y-axis and approximate cost per hour on the X-axis:

    [Scatterplot: throughput (Y-axis) vs. approximate cost per hour (X-axis) for each cloud and instance type tested]

    This set of tests ran a throughput analysis on 2, 4, 8, and 16 CPU instance types across three different clouds and shows, for this specific application, that throughput peaked at the 4 CPU instance types and that AWS was both faster and cheaper than competitors.

    It is important to keep in mind that each application runs slightly differently on each cloud provider, so results per application will vary significantly, but the graph illustrates the point of how important benchmarking can be in making workload placement decisions. And when something changes, like a vendor announcing new pricing or a new instance type, a test can quickly be run from the CMP to see if it makes a difference or not.

    Conclusion

    Enough lessons have been learned by Early Adopters of cloud that some solid tips are now available for those who are still at the beginning of their journey. Making extensive use of SaaS for table stakes applications allows a focus on custom applications that add value to a business’ bottom line. Using a CMP gives those custom applications flexibility and future portability, acting as an insurance policy against changes that will come later. Private cloud is best for applications with sensitive data or steady workload demand. Public cloud excels at hosting applications with less sensitive data and more variable workloads.

    When in doubt, benchmark using your CMP. Put all that together, and you can maximize your cloud usage to your benefit.

    This blog originally appeared on Salesforce Blogs.

  5. Cloud: How Did We Get Here and What’s Next?



    It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

    It All Starts with Server Utilization

    In the mid-1990s, when HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, it launched an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high availability needs you might have.

    That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired because the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

    But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications sit side by side and use 30 percent of their hardware most days when you could have them both run on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capitalization spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it brought down the other application with it, giving rise to the noisy neighbor problem.

    What if there was another way to separate physical resources in some sort of isolation technique so that you could reduce the chances that applications could bring each other down?

    The Birth of Virtualization and Pets vs. Cattle

    The answer turned out to be the hypervisor, which could isolate resources from one another on a physical machine to create a virtual machine. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

    But in the early 2000s, developers started to think about their architectures differently. In a physical-server-only world, resources are scarce and take months to expand. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets: you give them great care and feeding, often give them names, and go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly, because a change to the system increased the chances that something would fail.

    What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently and not plan for peak hardware if the pooled resources are large enough (remember that, it’ll be important later), but you think about deployments differently too.

    Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the new patch and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you now had the ability to introduce changes or roll them back more easily. That led to line-of-business teams turning to software developers as change agents for increased revenue.

    Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

    If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

    While there is a ton of innovative work going on in public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady state demand are attractive for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

    Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving their constituents in the line of business teams the self-service, on-demand provisioning they demand with cattle-style thinking.

    What’s Next: Chickens and Feathers (Containers and FaaS)

    Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treated application components as disposable entities that can be created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter-weight resource isolation technique than hypervisors do, can be created in seconds—a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.

    What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. Sometimes this technology is known as Serverless, which is a bit of a misnomer since there is indeed a server providing the compute services; what differentiates it from containers is that developers need to know nothing about the hot standby container within which their code runs. That means a unit of compute can sit on disk when not in use instead of consuming memory while waiting for a transaction to come in. While the ramifications of this technology aren’t yet fully understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.
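    As a concrete example of how small that unit of compute can be, here is a minimal Python function written in the handler style used by FaaS platforms such as AWS Lambda; the function body and event fields are made up for illustration.

    ```python
    # A minimal FaaS-style handler: the platform loads and invokes this function
    # per request, so nothing runs (or consumes memory) between invocations.
    # The event fields below are invented for this example.
    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": f"Hello, {name}!",
        }

    # Local test of the same function outside any FaaS platform:
    if __name__ == "__main__":
        print(handler({"name": "cloud"}, context=None))
    ```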

    Conclusion

    Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to run alongside siblings they might have had nothing to do with, but which they could bring crashing down. Virtualization allowed us to separate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter-weight resource isolation techniques make even more rapid innovation possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

    Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

    This blog originally appeared on Cloud Computing Magazine.
