Tag Archive: Cloud

  1. Cloud: How Did We Get Here and What’s Next?

    It wasn’t too long ago that companies used on-premises solutions for all of their IT and data storage needs. Now, with the growing popularity of Cloud services, the world of IT is rapidly changing. How did we get here? And more importantly, what is the future of IT and data storage?

    It All Starts with Server Utilization

    In the mid-1990s, when HTTP found its way outside of Tim Berners-Lee’s CERN lab and client-server computing emerged as the de facto standard for application architectures, it launched an Internet Boom in which every enterprise application had its own hardware. When you ordered that hardware, you had to think about ordering enough capacity to handle your spikes in demand as well as any high availability needs you might have.

    That resulted in a lot more hardware than you really needed for some random Tuesday in March, but it also ensured that you wouldn’t get fired when the servers crashed under heavy load. Because the Internet was this new and exciting thing, nobody cared that you might be spending too much on capital expense.

    But then the Internet Bubble burst and CFO types suddenly cared a whole lot. Why have two applications sit side by side, each using 30 percent of its own hardware most days, when you could run them both on the same physical server and utilize more of it on the average day? While that reasoning looks great on a capital-expense spreadsheet, what it failed to take into account was that if one application introduced a memory leak, it brought down the other application with it, giving rise to the noisy neighbor problem.

    What if there was another way to separate physical resources in some sort of isolation technique so that you could reduce the chances that applications could bring each other down?

    The Birth of Virtualization and Pets vs. Cattle

    The answer turned out to be the hypervisor, which could isolate resources from one another on a physical machine to create a virtual machine. This technique didn’t completely eliminate the noisy neighbor problem, but it reduced it significantly. Early uses of virtualization enabled IT administrators to better utilize hardware across multiple applications and pool resources in a way that wasn’t possible before.

    But in the early 2000s, developers started to think about their architectures differently. In a physical server-only world, resources are scarce and take months to expand. Because of that scarcity, production deployments had to be treated carefully and change control was tight. This era of thinking has come to be known as treating machines as pets: you give them great care and feeding, oftentimes you give them names, and you go to great lengths to protect them. In a pets-centric world, you were lucky if you released new features quarterly, because any change to the system increased the chances that something would fail.

    What if you thought about that differently, though, given that you can create a new virtual machine in minutes as opposed to waiting months for a physical one? Not only does that cause you to think about scaling differently and not plan for peak hardware if the pooled resources are large enough (remember that, it’ll be important later), but you think about deployments differently too.

    Consider the operating system patch upgrade. With pets thinking, you patch the virtual or physical machine that already exists. With this new thinking, treating virtual machines like cattle, you create a new virtual machine with the new patch and shut down the old one. This line of thinking led to more rapid releases and agile software development methodologies. Instead of quarterly releases, you could release hourly if you wanted to, since you now had the ability to introduce changes or roll them back more easily. That led to line-of-business teams turning to software developers as change agents for increased revenue.
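
    To make the cattle approach concrete, here is a minimal "replace, don't patch" sketch using boto3 against AWS EC2. It assumes configured AWS credentials, and the image and instance IDs are hypothetical placeholders, not anything from a real environment.

    ```python
    # A minimal "cattle" rollout sketch using boto3 (assumes AWS credentials are
    # configured; the image and instance IDs below are hypothetical placeholders).
    import boto3

    ec2 = boto3.client("ec2")

    # 1. Launch a replacement instance from the freshly patched image.
    new = ec2.run_instances(
        ImageId="ami-patched-example",   # hypothetical, already-patched image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    new_id = new["Instances"][0]["InstanceId"]

    # 2. Once the new instance passes health checks (omitted here), retire the old one.
    old_id = "i-0123456789abcdef0"       # hypothetical ID of the unpatched instance
    ec2.terminate_instances(InstanceIds=[old_id])

    print(f"Replaced {old_id} with {new_id}")
    ```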

    Cloud: Virtualization Over HTTP and the Emergence of Hybrid Approaches

    If you take the virtualization model to its next logical step, the larger the shared resource pool, the better. Make it large enough and you could share resources with people outside your organization. And since you can create virtual machines in minutes, you could rent them out by the hour. Welcome to the public cloud.

    While there is a ton of innovative work going on in public cloud that takes cattle-based thinking to its extremes, larger companies in particular are noticing that a private cloud is appropriate for some of their applications. Specifically, applications with sensitive data and steady-state demand are attractive candidates for private cloud, which still offers the ability to create virtual machines in minutes even though, at the end of the day, you own the capital asset.

    Given this idea that some applications run best on a public cloud while others run best on a private cloud, the concept of a cloud management platform has become popular to help navigate this hybrid cloud world. Typically these tools offer governance, benchmarking, and metering/billing so that a central IT department can put some controls around cloud usage while still giving their constituents in the line of business teams the self-service, on-demand provisioning they demand with cattle-style thinking.

    What’s Next: Chickens and Feathers (Containers and FaaS)

    Virtualization gave us better hardware utilization and helped developers come up with new application architectures that treated application components as disposable entities that can be created and destroyed on a whim, but it doesn’t end there. Containers, which use a lighter weight resource isolation technique than hypervisors do, can be created in seconds—a huge improvement over the minutes it takes to create a virtual machine. This is encouraging developers to think about smaller, more portable components. Some would extend the analogy to call this chickens-style thinking, in the form of microservices.
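
    For a rough feel of that speed difference, the sketch below starts a container with the Docker SDK for Python and times it. It assumes a local Docker daemon and the `docker` package are available; the first run will be dominated by the image pull.

    ```python
    # Timing container start-up with the Docker SDK for Python (pip install docker).
    # Assumes a local Docker daemon is running.
    import time
    import docker

    client = docker.from_env()

    start = time.time()
    container = client.containers.run("nginx:alpine", detach=True)
    print(f"Container {container.short_id} running after {time.time() - start:.1f}s")

    container.stop()
    container.remove()
    ```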

    What’s better than creating a unit of compute in seconds? Doing so in milliseconds, which is what Function-as-a-Service (FaaS) is all about. Sometimes this technology is known as Serverless, which is a bit of a misnomer since there is indeed a server providing the compute services. What differentiates it from containers is that developers don’t have to know anything about the container within which their code runs. That means a unit of compute can sit on disk when not in use instead of occupying memory while waiting for a transaction to come in. While the ramifications of this technology aren’t yet fully understood, a nanoservice approach like this extends the pets vs. cattle vs. chickens analogy to include feathers.
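
    For a sense of just how small these units of compute are, here is a function written in the AWS Lambda Python handler style; the event fields are made up purely for illustration.

    ```python
    # A function-sized unit of compute in the AWS Lambda Python handler style.
    # The platform loads and runs this code only when a request arrives; the
    # developer manages no VM or container, and the event fields are illustrative.
    def handler(event, context):
        name = event.get("name", "world")   # 'event' carries the request payload
        return {"statusCode": 200, "body": f"Hello, {name}!"}
    ```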

    Conclusion

    Just in the last 25 years or so, our industry has come a remarkably long way. Financial pressures forced applications to run alongside siblings they had nothing to do with, but which they could bring crashing down. Virtualization allowed us to separate resources and enabled developers to think about their application architectures very differently, leading to unprecedented innovation speed. Lighter-weight resource isolation techniques make even more rapid innovation possible through containers and microservices. On the horizon, FaaS technologies show potential to push the envelope even further.

    Speed and the ability to adapt to this ever-changing landscape rule the day, and that will be true for a long time to come.

    This blog originally appeared on Cloud Computing Magazine.

  2. Top 7 Myths About Moving to the Cloud

    If you make your living by selling computer hardware, you’ve probably noticed that the world has changed. It used to be that the people who managed IT hardware in a big company—your buyer—had all the purchasing power, and their constituents in the line-of-business teams had no place else to get IT services. You’d fill your quota by showing up with a newer version of the same hardware you sold the same people three to five years earlier, and everybody was happy.

    Then AWS happened in 2006. Salesforce.com had already happened in 1999. Lots of other IaaS and SaaS vendors started springing up all over the place, and all of a sudden, those constituents your IT buyer had a monopoly on had another place to go for IT services—places that enabled them to move faster—and the three-to-five-year hardware refresh cycle started to suffer.

    But selling is still selling, and there are a lot of myths out there about why someone like you can’t sell cloud. Let’s debunk a few of them now.

    Myth #1: My customers aren’t moving to cloud.

    If your IT buyer customers haven’t seen decreased budgets for on-premises hardware, consider yourself lucky. In its cloud research survey issued last November, IDG Enterprise reported, “Cloud technology is becoming a staple to organizations’ infrastructure as 70% have at least one application in the cloud. This is not the end, as 56% of organizations are still identifying IT operations that are candidates for cloud hosting.” If your IT buyer isn’t at least considering spending on cloud, it’s almost guaranteed that there is Shadow IT going on in their line-of-business teams.

    Myth #2: I don’t see any Shadow IT.

    Hence, the “Shadow” part. As far back as May of 2015, a Brocade survey of 200 CIOs found that over 80 percent had seen some unauthorized provisioning of cloud services. This doesn’t happen because line-of-business teams are evil. It happens because they have a need for speed and IT is great at efficiency and security but typically lousy at doing things quickly.

    Myth #3: My customer workloads have steady demand.

    While it is true that the cloud consumption model works best with varying demand, zoom in on almost any workload and examine its utilization on nights and weekends, and there is almost always a case to be made that the workload is variable.
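
    A quick back-of-the-envelope check makes the point; the numbers below assume a workload that is only busy during business hours and are purely illustrative.

    ```python
    # Back-of-the-envelope utilization for a "steady" business-hours workload.
    # Assumes the servers are busy 10 hours a day, 5 days a week, and idle otherwise.
    busy_hours_per_week = 10 * 5
    total_hours_per_week = 24 * 7

    utilization = busy_hours_per_week / total_hours_per_week
    print(f"Weekly utilization: {utilization:.0%}")  # roughly 30%; the rest is paid-for idle time
    ```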

    Myth #4: Only new workloads run in cloud.

    Any new technology follows a familiar pattern: organizations first try new, less risky projects that don’t necessarily impact the bottom line, so they have a safe way to experiment. Cloud computing has been no different, but the tide has started to shift to legacy workloads. This past November, ZK Research compared cloud adoption to virtualization adoption by pointing out, “The initial wave of adoption then was about companies trying new things, not mission-critical workloads. Once organizations trusted the technology, major apps migrated. Today, virtualization is a no brainer because it’s a known technology with well-defined best practices. Cloud computing is going through the same trend today.”

    Myth #5: Cloud is too hard to learn.

    Granted, selling cloud computing is different from selling a hardware refresh, but as a sales executive it’s still about building relationships. The biggest changes relate to whom you build those relationships with (which now includes line-of-business teams instead of just IT, but that’s who has the money anyway) and utilizing the subscription-based financial model of cloud consumption. (Gartner has a great article explaining the basics.)

    Myth #6: I don’t have relationships with business teams.

    Certainly, there is some overhead involved in building new relationships as opposed to leveraging your existing ones, but increasingly the line-of-business teams are retaining more of their IT budgets so investing that time will pay off. Even CIO Magazine admits that CMOs will spend more than CIOs on technology in 2017. Go to where the money is—it’ll be worth your time.

    Myth #7: I don’t get paid on cloud.

    This one is, admittedly, the trickiest in this list because some of the solution is in the hands of your company as opposed to within your power directly. If you work for a value-added reseller, public cloud vendors have plenty of programs that do indeed pay you on cloud. Even if that isn’t the case, though, educating yourself on public/private cloud differences and building those relationships with business teams can help preserve sales of the hardware a customer’s cloud ultimately runs on.

    Another step would be to get acquainted with a Cloud Management Platform, which is software that helps an Enterprise manage workloads across both public and private clouds from a single pane of glass. Moving up the stack to this control point can help you stay involved in key decisions your customer makes and put you in the position of a trusted advisor.

    Selling is fundamentally about building relationships, understanding problems and providing solutions. Regardless of the technology encased in the solution, that will always be true. There is a learning curve involved with cloud adoption, meeting new people you haven’t worked with before and potentially adapting to a subscription-based model, but the fundamentals remain the same: providing value to someone who has a roadblock they need help getting around.

    This blog originally appeared on the Salesforce Blog.

  3. An OSI Model for Cloud

    In 1984, after years of having separate thoughts on networking standards, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT) jointly published the Open Systems Interconnection Reference Model, more commonly known as the OSI model.  In the more than three decades that have passed since its inception, the OSI model has given millions of technologists a frame of reference to work from when discussing networking, which has worked out pretty well for Cisco.

    Image: The OSI reference model

    Cloud technologies have progressed to the point that a similar model is now useful.  Different audiences have very different interests in the components that make up a cloud stack, and understanding the boundaries of those components with common terminology can go a long way toward more efficient conversations.

    Layer 1: Infrastructure

    Analogous to the Physical layer in the OSI model, Layer 1 here refers to the Infrastructure that sits in a data center to provide the foundation for the remainder of the stack.  Corporate data centers and colocation providers have been running this Infrastructure layer for years and are experts at “racking and stacking” pieces of hardware within this layer for maximum efficiency of physical space, heating/cooling, power, and networking to the outside world.

    Layer 2: Hypervisor

    Commonly installed on top of that Infrastructure layer is some sort of virtualization, commonly provided by a Hypervisor.  This enables systems administrators to chunk up use of the physical assets into Virtual Machines (VMs) that can be bin packed onto physical machines for greater efficiency.  Prior to the advent of the Hypervisor layer, components higher up the stack had to wait weeks to months for new Infrastructure to become available, but with the virtualization provided at this layer, virtualized assets become available in minutes.
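
    As a toy illustration of that bin packing, the sketch below places requested VM sizes onto hosts with a first-fit heuristic; real hypervisor schedulers are far more sophisticated, and the numbers are made up.

    ```python
    # Toy first-fit packing of requested VM sizes (GB of RAM) onto 64 GB hosts.
    # Real hypervisor schedulers are far more sophisticated; the numbers are made up.
    def first_fit(vm_sizes, host_capacity_gb):
        hosts = []  # each host is the list of VM sizes placed on it
        for vm in vm_sizes:
            for host in hosts:
                if sum(host) + vm <= host_capacity_gb:
                    host.append(vm)
                    break
            else:
                hosts.append([vm])  # no existing host fits; "power on" another one
        return hosts

    print(first_fit([16, 8, 32, 4, 24, 8], host_capacity_gb=64))
    # [[16, 8, 32, 4], [24, 8]] -- six VMs packed onto two hosts
    ```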

    Layer 3: Software-Defined Data Center (SDDC)

    Resource pooling, usage tracking, and governance on top of the Hypervisor layer give rise to the Software-Defined Data Center (SDDC).  The notion of “infrastructure as code” becomes possible at this layer through the use of REST APIs.  Users at this layer are typically agnostic to Infrastructure and Hypervisor specifics below them and have grown accustomed to thinking of compute, network, and storage resources as simply being available whenever they want.
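
    For a flavor of what “infrastructure as code” looks like at this layer, here is a sketch that requests a VM over a REST API. The endpoint, payload, and token are hypothetical—each real SDDC exposes its own schema—but the shape of the call is similar.

    ```python
    # "Infrastructure as code" in miniature: requesting a VM from an SDDC's REST API.
    # The endpoint, payload, and token are hypothetical; each real SDDC (OpenStack,
    # vSphere, and so on) exposes its own schema, but the shape is similar.
    import requests

    payload = {"name": "web-01", "cpus": 2, "memory_gb": 8, "network": "frontend"}
    resp = requests.post(
        "https://sddc.example.com/api/v1/vms",   # hypothetical endpoint
        json=payload,
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Provisioned VM:", resp.json())
    ```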

    Layer 4: Image

    Here, a bias towards compute resources (as opposed to network or storage) becomes apparent, as Image connotes use of particular operating systems and other pre-installed software components.  Format can be an issue here as not all SDDCs support the same types of Images (.OVA vs .AMI, etc.), but most operating systems can be baked into different kinds of Images to run on each popular SDDC.  Developers will sometimes get involved at this layer, but not nearly as much as in the two layers yet to come.

    Layer 5: Services

    Application architectures are typically built on top of a set of common middleware components like databases, load balancers, web servers, message queues, email services, other notification methods, etc.  This Services layer is where those are defined, on top of particular Images from the layer below.  Sometimes these Services manifest themselves as open source software installed on a VM or container, such as MySQL to give a database example.  Other times the SDDC may offer an API for accessing components from a pool of Services, such as AWS RDS, but underneath that API those components are still built upon an Image and the other layers that precede it.
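
    For example, consuming a database as a Service through the cloud’s API might look like the boto3 sketch below; the identifiers and credentials are placeholders, and production settings (networking, backups, and so on) are omitted.

    ```python
    # Consuming a database Service through the cloud's API (AWS RDS via boto3)
    # instead of installing MySQL on a VM yourself. Identifiers and credentials
    # are placeholders; production options (VPC, backups, etc.) are omitted.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="orders-db",       # placeholder name
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # placeholder credential
        AllocatedStorage=20,                    # GB
    )
    ```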

    Layer 6: Applications

    The final layer is where end users interact with the stack, through deployed Applications composed of custom code that makes use of the various Services defined below it.

    Now What?

    Whether in a technical conversation or a sales engagement, understanding at which layer in this stack a specific person has expertise is important.  Someone who implemented a Hypervisor before the SDDC layer became widely available, for example, has a very different view of the world than someone who has never known a world where the SDDC did not exist.  Experts at each layer in this stack have biases and often a lack of understanding of those working at other layers.  Admitting that, and having a framework that lets everyone understand how their part of the world makes up the whole, leads to better conversations because everyone understands everyone else’s motivations and places of intersection far better.

     

    This blog originally appeared on Cisco Blogs

  4. Shadow IT: How Big Is the Problem and Why Is ITaaS the Antidote?

    How big is your Shadow IT problem?

    Don’t have one, you say?

    Are you sure? Have you checked those corporate AMEX records lately? They probably have entries on them for places like Amazon Web Services and Heroku. In other words, you probably have a Shadow IT problem even if you think you don’t.

    I worked in HP IT when Mark Hurd (now at Oracle) was CEO of Hewlett-Packard and Randy Mott (now at General Motors) was CIO. The official company line was that Shadow IT was punishable by termination. It happened anyway. Back then, things like rogue WiFi devices or SaaS accounts at destinations like Salesforce.com or WordPress.com were the big culprits. If the threat of losing your job isn’t enough to keep it at bay, what is?

    And, while we’re at it, what’s the antidote to Shadow IT? It’s something called IT-as-a-Service, but before getting into how this solves the problem of Shadow IT, it is important to understand why it exists in the first place.

    Speed Kills, and It Created Shadow IT

    If you built a time machine and went back to 2006, before the Amazon Web Services beta started, you’d find a very different business environment than what we have today. Company functions like sales, product development and marketing were still responsible for bringing in company revenue. Other functions, like HR, legal, finance and IT were necessary evils that kept the business running but didn’t contribute to the revenue stream. In “cost centers” like this, the only way to optimize contributions to the company bottom line was to run them on as little budget as possible.

    For IT, the biggest contributor to budget was the capital expense used to populate a data center. It had to be utilized as efficiently as possible, and that often meant ruthless standardization, down to the kinds of languages or even the specific relational databases that application teams could use. It also forced rigorous project selection processes on line-of-business teams, which had to go through them in order to get new functionality out of IT, since strict budgeting was a requirement to keep costs under control.

    As an example from my HP IT days, every year Hurd’s executive team would estimate the company revenue for the next financial year company-wide, and Mott would be tasked with running HP IT on 1.8 percent of whatever that number was. No more, no less. Business teams had to submit project proposals as much as 18 months in advance of when they would get executed, each having to project an ROI. Projects were ordinally ranked by ROI and funded in that order until that 1.8 percent of company revenue budget was exhausted. There was an exception process, but most projects not meeting that standard didn’t get funded, end of story.
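
    That selection process boils down to a simple loop, sketched below with made-up figures: rank proposals by projected ROI and fund them in order until the 1.8-percent budget is exhausted.

    ```python
    # The funding process described above, in miniature: rank proposals by projected
    # ROI and fund them in order until the 1.8%-of-revenue budget runs out.
    # All figures are made up for illustration.
    projected_revenue = 100_000_000_000          # hypothetical $100B year
    budget = 0.018 * projected_revenue

    proposals = [                                # (name, cost, projected ROI)
        ("supply-chain revamp", 900_000_000, 3.1),
        ("new CRM rollout",     700_000_000, 2.4),
        ("data-center refresh", 400_000_000, 1.2),
    ]

    funded = []
    for name, cost, roi in sorted(proposals, key=lambda p: p[2], reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost

    print(funded)  # projects below the cut-off simply don't get funded
    ```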

    Fast forward to 2016, and every company is a software company. By that I mean that every line of business in every company on the planet relies on software innovation in some way to gain market share or increase profits. In every competitive environment, where agile software development has proven to be the best way to nurture breakthrough change, an 18-month software cycle means you go out of business.

    So what do line-of-business teams do? When they can’t get the speed they need to compete in their respective marketplaces out of their IT department, they turn outside IT, where they can get all kinds of assets quickly and easily from environments that aren’t optimized for cost reduction by virtue of sitting in a corporate cost center. This need for speed created Shadow IT, but it is also the fundamental key to solving for it.

    IT-as-a-Service: The Antidote for Shadow IT

    What line-of-business teams crave, demand even, is simple: self-service, on-demand provisioning of resources. Why should they wait through three weeks’ worth of ticket approvals to get a new virtual machine provisioned when they can get one in 10 minutes on AWS? They shouldn’t, and they don’t.

    So the answer is equally simple: Give them self-service, on-demand provisioning of resources. The hard part is to do so in a way that aligns with IT security and licensing policies, tracks their usage over time, and bills that usage back to them. Do that and you move IT away from being a cost center and instead turn it into an active participant in the revenue streams of the line of business teams.

    Fortunately, three key toolsets let an IT team easily build a structure that enables exactly that. Infrastructure-as-a-Service (IaaS) offerings like AWS and Microsoft Azure on the public cloud side, or OpenStack and vCenter on the private cloud side, make it easy to provision virtual machines in minutes. Cloud Management Platforms (CMPs) then provide a mechanism to create application templates on top of multiple IaaS offerings.

    Within those application templates, IT can encode things like monitoring, security, and licensing policies to ensure that all applications adhere to the strict standards that make for an efficiently run IT deployment, in an automated way, regardless of which back-end IaaS is used. Those application templates can then be published upstream into IT Service Management (ITSM) tools that provide a shopping cart-like experience for line-of-business constituents, enforcing rules about who is allowed to deploy which applications, and where.

    With a solution like this in place, line of business users can browse a catalog of applications and choose what IaaS they get deployed on. The ITSM sends these requests to the CMP, which then automates the application deployments on the IaaS of choice. The CMP monitors the usage of resources on the IaaS and provides usage data back to the IT staff, which can then send that back to the line of business teams for chargebacks.
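
    A deliberately simplified sketch of that request flow appears below; every function and driver name is hypothetical, since real ITSM and CMP products expose much richer models.

    ```python
    # A deliberately simplified ITSM -> CMP -> IaaS request flow. Every function and
    # driver name here is hypothetical; real products expose far richer models.
    IAAS_DRIVERS = {
        "aws":       lambda template: print(f"deploying '{template}' on AWS"),
        "openstack": lambda template: print(f"deploying '{template}' on OpenStack"),
    }

    def user_allowed(user, template, target_cloud):
        return True  # placeholder governance/policy check

    def record_usage(user, template, target_cloud):
        print(f"metering: {user} deployed '{template}' on {target_cloud}")  # feeds chargeback

    def handle_catalog_request(user, template, target_cloud):
        if not user_allowed(user, template, target_cloud):
            raise PermissionError("not entitled to this application/cloud")
        IAAS_DRIVERS[target_cloud](template)     # automated deployment via the CMP
        record_usage(user, template, target_cloud)

    handle_catalog_request("lob-analyst", "3-tier web app", "aws")
    ```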

    When put together in this way, IT gets the control it needs, with the ability to dictate the content of the application components and how they behave at runtime. The line-of-business teams get the self-service, on-demand provisioning that is so critical to their success. And, perhaps most importantly, IT is no longer a cost center but an innovation enabler that can charge precise usage back to its constituents and participate in revenue success instead of being forced to drive down costs.

    This piece originally appeared on Computer Technology Review.

  5. Time flies when you are having fun – New CloudCenter Release

    Time flies when you’re having fun and building great products! Those who have been following CloudCenter (formerly CliQr) know that it’s been about 6 months since we were acquired by Cisco. During that time, we’ve been extremely busy. Not only were there a lot of normal day-to-day activities needed to integrate into Cisco’s processes and systems, but the team was also working to crank out another great feature-filled release. I happen to be especially proud of this release since it’s my first in the product manager role for CloudCenter.

    Image: CloudCenter combining both cloud and data center in one platform

    With my new role come some great perks, like getting to play with the engineering builds right when a new feature is completed. I’m proud to report that not only is CloudCenter 4.6 now generally available, but it’s a great, big first release under the Cisco banner. This release delivers an even deeper integration with Cisco ACI, a simplified networking model across clouds, and an easier deployment experience.

    Deeper ACI integration

    CloudCenter first introduced Cisco ACI integration about a year and a half ago—right before Cisco Live 2015 in San Diego. Naturally, after the acquisition in April, one of the first things we set out to do was to deepen that ACI integration and provide more value to our customers. The 4.6 release’s vision centered on increasing networking flexibility overall and on giving users the option to either use existing Cisco ACI objects (endpoint groups, bridge domains, and virtual machine managers) or dynamically create new ones.

    The net-net of these new and enhanced ACI features is that CloudCenter with Cisco ACI delivers a seamless experience no other solution matches, whether you’re using vSphere, OpenStack, Azure Pack, or any other on-premises IaaS cloud API. Network administrators gain flexibility in configuration, automation during deployment, and control over what end users are able to do via self-service, on-demand offerings—all without ANY coding to the ACI API. On the flip side, end users get a more consistent and expedited deployment of their application profile from an easy-to-use, self-service experience.

    Simplified Networking

    Since the acquisition, people keep asking us, “Are you going to stay cloud-agnostic?” Fortunately, the answer is “Yes,” and there is no plan for that to change.  We continue to refine the list of clouds, versions, and regions we support out of the box, and we continue to add enhancements that support a multi-cloud world. The new “Simplified Networking” configuration works by letting an administrator abstract cloud networks and assign each a label based on the network’s technical features.

    As an end user, all you have to do is state the business requirement for the application you’re deploying; CloudCenter then maps all the technical details behind the scenes. Need a “Secure” network in Metapod? CloudCenter will map the application in the background to “Network X”. If the application is instead landing on AWS, Azure, vCenter, or any of the other clouds we support, the equivalent of the “Secure” network might be “Network Y”.
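
    Conceptually, the mapping looks something like the sketch below—an illustration only, not CloudCenter’s actual configuration format, and all of the network names are made up.

    ```python
    # Illustration only -- not CloudCenter's actual configuration format. A business-
    # level label ("Secure") resolves to a different concrete network on each cloud,
    # so end users never need to know the underlying network names (all made up here).
    NETWORK_LABELS = {
        "Secure": {
            "metapod": "Network X",
            "aws":     "Network Y",
            "azure":   "vnet-secure-01",
            "vcenter": "dvPortGroup-Secure",
        },
    }

    def resolve_network(label, cloud):
        return NETWORK_LABELS[label][cloud]

    print(resolve_network("Secure", "aws"))  # the user only ever asked for "Secure"
    ```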

    Abstracting each cloud’s networks behind a label defined by business requirements makes end users’ lives SO MUCH EASIER. Gone are the days when they had to know about the underlying cloud’s network capabilities. At the same time, administrators get more control and a policy-enforced guarantee that applications are being deployed appropriately.

    Deployment Experience

    For those old-school CliQr users and admins, you’ll notice some slick new user interfaces in this release. Sticking with our mantra of “make life easy for users and admins”, we added the ability for admins to pre-set and hide certain fields from users on the deployment form, allowed application tiers to be deployed across availability zones within a cloud region, and streamlined the deployment screen flow.

    Image: New deployment environment configuration screen

    Above you can see the new deployment environment configuration screen. Note the visible and non-visible settings for each field. As an admin, I love this feature because it means I can lock down and hide fields that my users don’t need to worry about. Less room for error, fewer questions from users, and smoother sailing!

    There’s a ton more that made it into the CloudCenter 4.6 release and you can find it all in the release notes. In the next 6 months, you can be sure to expect more announcements of our progress, both in feature releases and as we make waves as a new product within Cisco!

    This blog originally appeared on Cisco Blogs.

     
