
Tag Archive: cloud computing

  1. Cloud: How Did We Get Here and What’s Next?



    It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

    It All Starts with Server Utilization

    In the mid-1990s, when HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, it launched an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high availability needs you might have.

    That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired when the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

    But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications sit side by side and use 30 percent of their hardware most days when you could have them both run on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capitalization spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it brought down the other application with it, giving rise to the noisy neighbor problem.

    What if there was another way to separate physical resources in some sort of isolation technique so that you could reduce the chances that applications could bring each other down?

    The Birth of Virtualization and Pets vs. Cattle

    The answer turned out to be the hypervisor, which could isolate resources from one another on a physical machine to create a virtual machine. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

    But in the early 2000s, developers started to think about their architectures differently. In a physical server-only world, resources are scarce and take months to expand. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets: you give them great care and feeding, often give them names, and go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly, because any change to the system increased the chances that something would fail.

    What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently and not plan for peak hardware if the pooled resources are large enough (remember that, it’ll be important later), but you think about deployments differently too.

    Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the new patch and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you now had the ability to introduce changes or roll them back more easily. That led to line-of-business teams turning to software developers as change agents for increased revenues.
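    A minimal sketch of that cattle-style patch flow, with every function and machine name hypothetical: rather than patching a machine in place, create a patched replacement, verify it, and only then retire the original.

```python
def rolling_replace(fleet, create_patched, healthy):
    """Cattle-style patching: for each old machine, create a patched
    replacement, verify it, then retire the original (never patch in place)."""
    new_fleet = []
    for vm in fleet:
        replacement = create_patched(vm)
        if not healthy(replacement):
            raise RuntimeError(f"replacement for {vm} failed its health check")
        new_fleet.append(replacement)  # the old VM would be shut down here
    return new_fleet

# Simulated fleet and operations; names are made up for illustration
print(rolling_replace(["web-1", "web-2"],
                      lambda vm: vm + "-patched",
                      lambda vm: True))
# ['web-1-patched', 'web-2-patched']
```

    Because the old machine is only retired after the replacement passes its check, a bad patch is a non-event: the failed replacement is discarded and the original keeps serving, which is what makes hourly releases tolerable.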

    Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

    If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

    While there is a ton of innovative work going on in public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady-state demand are attractive candidates for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

    Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving their constituents in the line of business teams the self-service, on-demand provisioning they demand with cattle-style thinking.

    What’s Next: Chickens and Feathers (Containers and FaaS)

    Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treated application components as disposable entities that can be created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter weight resource isolation technique than hypervisors do, can be created in seconds—a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.

    What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. Sometimes this technology is known as Serverless, which is a bit of a misnomer since there is indeed a server providing the compute services; what differentiates it from containers is that developers need to know nothing about the hot standby container within which their code runs. That means that a unit of compute can sit on disk when not in use instead of consuming memory while waiting for a transaction to come in. While the ramifications of this technology aren’t quite yet understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.

    Conclusion

    Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to run coincident with siblings they might not have anything to do with, but which they could bring crumbling to their knees. Virtualization allowed us to separate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter weight resource isolation techniques make even more rapid innovations possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

    Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

    This blog originally appeared on Cloud Computing Magazine.

  2. Managing Applications Across Hybrid Clouds



    Guest Author: Brad Casemore

    IDC Research Director, Datacenter Networks

    Whether resident in traditional datacenters or – increasingly – in the cloud, applications remain the means by which digital transformation is brought to fruition and business value is realized. Accordingly, management and orchestration of applications – and not just management of infrastructure resources – are critical to successful digital transformation initiatives.

    IDC research finds that enterprises will continue to run applications in a variety of environments, including traditional datacenters, private clouds, and public clouds. That said, cloud adoption is an expanding element of enterprise IT strategies.

    Watch Video and Read IDC Paper related to this blog!

    In 2016, enterprise adoption of cloud moved into the mainstream, with about 68% of respondents to IDC’s annual CloudView survey indicating they were currently using public or private cloud for more than one or two small applications, a 61% increase over the prior year’s survey.

    Within this context, enterprises want cloud-management solutions that allow them to get full value from their existing IT capabilities as well as from their ongoing and planned cloud initiatives. At the same time, enterprises don’t want to be locked in to a particular platform or cloud. They want the freedom to deploy and manage applications in both their datacenter and in cloud environments, and they want to be able to do so efficiently, securely, and with full control. Ideally, they want the application environment to be dictated exclusively by business requirements and technical applicability rather than by external constraints. This is why enterprises are increasingly wary of tools optimized for a single application environment, and why they are equally skeptical of automation that is hardwired to a specific cloud.

    To be sure, the greatest benefit of having an optimized cloud-application management system is strategic flexibility. In implementing a hybrid IT strategy with consistent multi-cloud application management, enterprise IT can deliver on the full promise of cloud while reducing the complexity, cost, security, governance, and lock-in risks associated with delivering services across mixed environments. As such, there’s no need to worry about cloud-specific APIs or about the threat of cloud lock-in. Instead, enterprises can focus on a service delivery strategy tailored to the needs of the organization, allowing applications to be deployed in the best possible environments.

    An additional benefit is represented by speed and agility. In this respect, enterprises can align operations with agile development, helping accelerate the application development lifecycle. For example, enterprises can boost productivity and decrease time to market by providing developers with self-service portals to provision fully configured application stacks in any environment. Developers can remain focused on customer needs, and not on infrastructure or downstream deployment services.

    To learn more about the challenges and benefits of managing applications across hybrid clouds, and to read about how Cisco CloudCenter responds to those challenges, I invite you to read an IDC Technology Spotlight titled, “Avoiding Cloud Lock-In: Managing Applications Across Hybrid Clouds.”

    Watch Video and Read IDC Paper.

    This blog originally appeared on Cisco Blogs.

  3. Cloud Agnosticism: Does the Survival of Current Cloud Providers Matter?



    Remember that time when S3 went down? How about when SSL certificates in every Azure data centre expired at the same time, bringing every region down at once? Did you enjoy working with an independent public cloud vendor like SoftLayer, only to have them get acquired by IBM?

    Whether you’re dealing with downtime, an acquisition, or an exit from the market entirely, how much does the survival of your cloud provider matter? A lot, actually. And that’s why a cloud agnostic approach to your data and applications is best.

    Settling of the public cloud market

    In recent years, there has been a settling in the public cloud market. Azure and AWS are the big two in that space with Google and IBM close behind, but sometimes organisations explore other options.

    Regional service providers have grown adept at hosting VMware-based clouds, and their intimate knowledge of their customer base enables them to customise the experience in a way that the big guys can’t scale down to quite as well.


    Similarly, Digital Ocean continues to be successful across multiple geographic regions by catering to the developer market.

    In fairness to the big guys mentioned above, AWS and Azure have done an excellent job at combatting high availability issues that caused them to suffer highly publicised downtime.

    The truth of the matter is, they possess resources and expertise on a scale that is difficult to match elsewhere, and they now provide great guidance on how to best use their platforms to ensure your own data protection and security.

    But, fool me once, shame on you, fool me twice, shame on me. It doesn’t happen a lot, but it’s not unprecedented for an entire public cloud infrastructure across multiple locations to go down.

    You’ll need to find a way to operate if it happens again or if some unforeseen acquisition misaligns with your enterprise agreements.

    The services trade-off

    One aspect of examining a cloud agnostic approach is the appeal of the extended services that the larger public cloud providers offer, such as managed databases, load balancers, queues, and notification services.

    These services can dramatically accelerate time to market for applications and allow developers to focus on the business logic to solve the problems specific to your needs.


    The trade-off is that their use also tends to lock you into a particular provider, given that there is little to no commonality between such services across different providers.

    Cloud agnostic approaches

    So, how can you take a more cloud agnostic approach to protect yourself from one of these scenarios where your cloud provider goes away, goes down, or gets acquired by someone you aren’t as in sync with? Here are a few approaches to consider:

    1. Multi-cloud backups

    The simplest approach is to back up your data to a different provider than the one you use to collect that data in the first place. This also allows you to take advantage of that data by spinning up the applications that depend on it on the second cloud.

    In other words, treat cloud providers like you treat individual private data centres. Run production on one, but back up and have a cold standby on another.
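    As a toy illustration of that backup idea (stores are modeled as plain dicts; a real implementation would go through each provider's own storage API, and all names here are hypothetical):

```python
def sync_backup(primary, secondary):
    """Copy every object that is missing or stale in the secondary store.
    Stores are modeled as dicts of object name -> content."""
    copied = []
    for name, content in primary.items():
        if secondary.get(name) != content:
            secondary[name] = content
            copied.append(name)
    return copied

primary = {"orders.db": "v2", "logs.tar": "v1"}
secondary = {"orders.db": "v1"}          # stale copy, missing logs.tar
print(sync_backup(primary, secondary))   # ['orders.db', 'logs.tar']
```

    Run on a schedule against a second provider, a sync like this is what turns that provider into a usable cold standby rather than just an archive.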

    2. Hybrid cloud applications

    A more complicated but more robust approach is to build your applications in such a way that they have a global load balancer on top of application stacks that run on different clouds.

    Imagine one set of web and database servers running on one cloud, a second set running on another, and a global load balancer that either actively sends traffic to both stacks (in which case the trick is keeping application state synchronised between them) or sends all traffic to one stack but treats the other as a warm standby.

    This approach takes more effort, but shrinks turnaround time to get up and running on the alternate cloud.
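    The standby-selection logic a global load balancer applies in the warm-standby case can be sketched as follows; the stack names and the health-check hook are hypothetical stand-ins for whatever your load balancer actually probes.

```python
def pick_stack(stacks, is_healthy):
    """Return the first healthy stack in priority order (primary listed
    first), falling back to the warm standby when the primary fails."""
    for stack in stacks:
        if is_healthy(stack):
            return stack
    raise RuntimeError("no healthy stack available")

# Hypothetical stacks on two clouds; simulate the primary going down
stacks = ["cloud-a-production", "cloud-b-standby"]
down = {"cloud-a-production"}
print(pick_stack(stacks, lambda s: s not in down))  # cloud-b-standby
```

    The same priority-ordered list handles the active-active case too: weight traffic across every healthy stack instead of returning only the first.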

    3. Cloud management platforms

    Instead of trying to manage all of this yourself (or to make it easier to choose which of the two approaches discussed here to use on an application-by-application basis), consider enlisting the help of a cloud management platform (CMP).


    While still an emerging product family for which Gartner has a Market Guide but not yet a Magic Quadrant, these tools provide a single view of application deployments across different cloud providers and tend to provide an abstraction layer to make it easy to migrate an application from one vendor to another.

    Some provide governance and metering/billing tools so that system administrators can dictate who is allowed to deploy applications to which cloud and put some guardrails on spending. Benchmarking tools within a CMP can be useful as well, so that more direct price/performance comparisons can be made among different vendors.

    Next steps

    There are several ways you can proceed toward a world where you aren’t locked into one cloud vendor and subject to problems that can occur if that vendor has downtime, gets acquired, or disappears.

    Weighing the time-to-market speed of utilising cloud-specific services against the lock-in they create is among the most difficult decisions to make when trying to build a cloud agnostic solution.

    That’s where CMPs can help by adding abstraction on top of multiple clouds. Regardless of your approach, giving yourself options as cloud models mature is key to being nimble enough to take advantage of future benefits as they unfold.

    This piece originally appeared on Information Age.

  4. Why Cloud? Justification for Non-techies



    Cloud computing is all the rage today, to the point that it feels like you can’t fill out your “buzzword bingo” card at any meeting without using the phrase. There are all kinds of technical reasons why cloud has the market momentum it does, but what if you aren’t swayed by such things? If you’ve seen technology trends come and go, you need non-technical justification for moving your business in any direction, and cloud computing is no different for you.

    So, what is the main justification for business owners to use cloud that doesn’t involve a lot of technical jargon? Let’s get to the bottom line and talk ROI and payback instead.

    Asset Procurement Financials Before Cloud

    If you go back to a simpler time, before virtualization was even a thing — let alone cloud computing — the financial justification process for IT or any other kind of capital asset was pretty much the same:

    1. Spend a lot of money up front on equipment
    2. Wait for that equipment to be installed and configured correctly
    3. Reap gains to your own business based on this new equipment for years to come.

    In this model, it is common to estimate what the expected annual gains were and to calculate a payback period. In other words, how long will it take you to recoup your investment made in Year 0 before the equipment is even installed? When weighing options against one another, those with shorter payback are more attractive than those with longer payback.

    Another way to judge different choices against one another is with an ROI calculation. Take the total anticipated returns, subtract the total investment, divide that difference by the total investment, and multiply by 100.

    The difficulty with either the payback or the ROI approach is that you are left to estimate the total returns. In other words, you don’t really know what benefits you’ll receive from your large, up-front purchase; you have to estimate them. And since this kind of analysis is typically made over a three-to-five-year period, it can be a while before you figure out whether or not you’re wrong. And if you are, it can be a very expensive proposition to correct.
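    In code, the two calculations look like this (the figures are made up purely for illustration):

```python
def roi_percent(total_returns, total_investment):
    """ROI: subtract investment from returns, divide that difference by
    the investment, and multiply by 100."""
    return (total_returns - total_investment) / total_investment * 100

def payback_period_years(investment, annual_gain):
    """Years needed to recoup an up-front investment at a constant annual gain."""
    return investment / annual_gain

# Illustrative figures: $500k up front, $200k expected gain per year for 5 years
print(roi_percent(5 * 200_000, 500_000))       # 100.0 (%)
print(payback_period_years(500_000, 200_000))  # 2.5 (years)
```

    Note that both results hinge entirely on the estimated annual gain; change that one assumption and the whole business case moves with it, which is exactly the weakness described above.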

    Enter Cloud: Getting More At-Bats More Quickly

    Instead of having to wait two years to figure out if your estimated returns are correct, wouldn’t it be better to find out sooner? And instead of estimating the returns in the first place, wouldn’t it be better if you could find out the actual benefits sooner? Also, I bet you’d rather pay as you go instead of investing all that money up front and hoping the return comes at all, right?

    These are the real business benefits of cloud. In baseball terminology, it’s about getting more at-bats, or put another way, more cycles with your technology investment by trying options that don’t require the long installation lead times. That allows you to quickly evaluate the benefits of the investment with a smaller up-front investment and either celebrate the genius of your choice or admit defeat and move on to an alternative.

    Think about what that means for the ROI calculation. The lone value in the denominator of that equation is the total investment. Lower that, and whatever number is in the numerator will look better.

    For the payback, this cloud model enables a business not to estimate returns based on a lot of spreadsheet mechanics that are influenced by the sales team trying to get your investment, but instead can be based on your actual use of the technology as soon as possible with an option to stop without financial penalties. That lets you gather more detailed data on your financial returns sooner.

    Cloud Is Not About Tech, It’s About Speedy Investments

    This is the real takeaway here: Speed. In the modern economy, it is more efficient to try technology investments, quickly determine if they deliver the benefits they promise, and move on than it is to go through some long sales cycle that is followed by an even longer installation process to find out whether or not the equipment you purchased was a huge waste of resources or not. It is OK to fail, just do so quickly so you can cross that wrong answer off your list and move closer to whatever the right solution is. Doing that over and over again with solutions that you pay for as you go is a far better use of your budget.

    This piece originally appeared on BetaNews.

  5. Time flies when you are having fun – New CloudCenter Release


    Time flies when you’re having fun and building great products! Those who have been following CloudCenter (formerly CliQr) know that it’s been about six months since we were acquired by Cisco. During that time, we’ve been extremely busy. Not only were there plenty of normal day-to-day activities needed to integrate into Cisco’s processes and systems, but the team was also working to crank out another great feature-filled release. I happen to be especially proud of this release since it’s my first in the product manager role for CloudCenter.


    Image: CloudCenter combining both cloud and data center in one platform

    With my new role comes some great perks like getting to play with the engineering builds right when a new feature is completed. I’m proud to report that not only is CloudCenter 4.6 now generally available, but it’s a great, big, first release under the Cisco banner. This release delivers an even deeper integration with Cisco ACI, a more simplified networking model across clouds, and an easier deployment experience.

    Deeper ACI integration

    CloudCenter first introduced Cisco ACI integration about a year and a half ago, right before Cisco Live 2015 in San Diego. Naturally, after the acquisition in April, one of the first things we set out to do was to deepen that ACI integration and provide more value to our customers. The 4.6 release’s vision centered on increasing networking flexibility while also giving users the option to use existing Cisco ACI objects (endpoint groups, bridge domains, and virtual machine managers) or to dynamically create new ones.

    The net-net of these new and enhanced ACI features is that CloudCenter with Cisco ACI blows away any other solution, giving a seamless experience whether you’re using vSphere, OpenStack, Azure Pack, or any other on-premises IaaS cloud API. Network administrators gain flexibility in configuration, automation during deployment, and control over what end users are able to do via self-service, on-demand offerings, all without ANY coding to the ACI API. On the flip side, end users get a more consistent and expedited deployment of their application profile from an easy-to-use, self-service experience.

    Simplified Networking

    Since the acquisition, people keep asking us, “Are you going to stay cloud-agnostic?” Fortunately, the answer is “Yes,” and there are no plans for that to change. We continue to refine the list of clouds, versions, and regions we support out of the box. And we continue to add enhancements that support a multi-cloud world. The new “Simplified Networking” configuration works by letting an administrator abstract cloud networks and assign a label based on the network’s technical features.

    As an end user, all you have to do is state the business requirement for the application you’re deploying. CloudCenter then maps all the technical stuff behind the scenes. Need a “Secure” network in Metapod? CloudCenter will map the application in the background to “Network X”. If instead the application is landing on AWS, Azure, vCenter, or any of the other clouds we support, the equivalent of the “Secure” network might be “Network Y”.

    By abstracting each cloud’s networks into a label defined by business requirements, end users’ lives get SO MUCH EASIER. Gone are the days when they had to know about the underlying cloud’s network capabilities. At the same time, administrators get more control and a guarantee that applications are being deployed appropriately through policy.
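    Conceptually, the label-to-network mapping works like a per-cloud lookup table. The sketch below is illustrative only; the table contents and names are hypothetical, not CloudCenter’s actual data model.

```python
# Hypothetical table an administrator might maintain: one business-level
# label resolves to a different concrete network on each cloud.
NETWORK_MAP = {
    "Secure": {"metapod": "Network X", "aws": "Network Y", "azure": "Network Z"},
}

def resolve_network(label, cloud):
    """Map a business-requirement label to the cloud-specific network."""
    try:
        return NETWORK_MAP[label][cloud]
    except KeyError:
        raise ValueError(f"no {label!r} network defined for cloud {cloud!r}")

print(resolve_network("Secure", "metapod"))  # Network X
print(resolve_network("Secure", "aws"))      # Network Y
```

    The end user only ever supplies the label; the administrator owns the table, which is where the policy control described above comes from.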

    Deployment Experience

    For those old school CliQr users and admins, you’ll notice some slick new user interfaces in this release. Sticking with our mantra of “make life easy for users and admins”, we added the ability for admins to pre-set and hide certain fields from users on the deployment form, let application tiers be deployed across availability zones within a cloud region, and streamlined the deployment screen flow.


    Image: New deployment environment configuration screen

    Above you can see the new deployment environment configuration screen. Note the visible and non-visible settings for each field. If I’m an admin, I love this feature because it means I can lock down and hide fields that my users don’t need to worry about. Less room for error, fewer questions from users, and smoother sailing!

    There’s a ton more that made it into the CloudCenter 4.6 release and you can find it all in the release notes. In the next 6 months, you can be sure to expect more announcements of our progress, both in feature releases and as we make waves as a new product within Cisco!

    This blog originally appeared on Cisco Blogs.

     

CliQr Technologies, CliQr CloudCenter and CloudBlades are trademarks of CliQr Technologies, Inc. All other registered or unregistered trademarks are the sole property of their respective owners.