An OSI Model for Cloud
In 1984, after years of having separate thoughts on networking standards, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT) jointly published the Open Systems Interconnection Reference Model, more commonly known as the OSI model. In the more than three decades that have passed since its inception, the OSI model has given millions of technologists a frame of reference to work from when discussing networking, which has worked out pretty well for Cisco.
Cloud technologies have progressed to the point that a similar model is now suitable: different audiences have very different interests in the components that make up a cloud stack, and understanding the boundaries of those components through common terminology can go a long way toward more efficient conversations.
Layer 1: Infrastructure
Analogous to the Physical layer in the OSI model, Layer 1 here refers to the Infrastructure that sits in a data center to provide the foundation for the remainder of the stack. Corporate data centers and colocation providers have been running this Infrastructure layer for years and are experts at “racking and stacking” pieces of hardware within this layer for maximum efficiency of physical space, heating/cooling, power, and networking to the outside world.
Layer 2: Hypervisor
Commonly installed on top of that Infrastructure layer is some form of virtualization, typically provided by a Hypervisor. This enables systems administrators to carve the physical assets into Virtual Machines (VMs) that can be bin packed onto physical machines for greater efficiency. Prior to the advent of the Hypervisor layer, components higher up the stack had to wait weeks or months for new Infrastructure to become available; with the virtualization provided at this layer, virtualized assets become available in minutes.
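To make the bin-packing idea concrete, here is a toy first-fit-decreasing sketch of how a scheduler might place VMs onto physical hosts by CPU demand. Real Hypervisor schedulers also weigh memory, storage, and affinity; the function name and numbers here are illustrative assumptions, not any vendor's algorithm:

```python
def pack(vm_cpus, host_capacity):
    """Place each VM (by CPU count) on the first host with room,
    largest VMs first. Returns the CPU load on each host used."""
    hosts = []  # CPU load already placed on each physical host
    for cpus in sorted(vm_cpus, reverse=True):  # first-fit decreasing
        for i, load in enumerate(hosts):
            if load + cpus <= host_capacity:
                hosts[i] += cpus
                break
        else:
            hosts.append(cpus)  # no host had room; use a new one
    return hosts

# Five VMs fit on three 8-CPU hosts instead of five dedicated machines.
print(pack([4, 2, 2, 8, 1], host_capacity=8))  # → [8, 8, 1]
```

The efficiency gain is exactly this consolidation: work that would once have claimed a physical machine per VM shares far fewer hosts.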
Layer 3: Software-Defined Data Center (SDDC)
Resource pooling, usage tracking, and governance on top of the Hypervisor layer give rise to the Software-Defined Data Center (SDDC). The notion of “infrastructure as code” becomes possible at this layer through the use of REST APIs. Users at this layer are typically agnostic to the Infrastructure and Hypervisor specifics below them and have grown accustomed to thinking of compute, network, and storage resources as simply being available whenever they want.
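As a sketch of what “infrastructure as code” looks like at this layer, the snippet below builds the body of a VM provisioning request for a hypothetical SDDC REST API. The endpoint, field names, and sizes are illustrative assumptions, not any particular vendor's API:

```python
import json

# Hypothetical SDDC REST endpoint -- illustrative only.
SDDC_API = "https://sddc.example.com/api/v1/vms"

def vm_request(name, cpus, memory_gb, network):
    """Build the JSON body for a VM provisioning call.

    The caller never sees the Hypervisor or Infrastructure layers
    below; it simply declares the compute and network it wants.
    """
    return {
        "name": name,
        "cpus": cpus,
        "memory_gb": memory_gb,
        "network": network,
    }

body = vm_request("web-01", cpus=2, memory_gb=8, network="dmz")
print(json.dumps(body))
# An HTTP client would POST this body to SDDC_API; the SDDC decides
# physical placement and returns a running VM in minutes.
```

Because the request is plain data, it can live in version control alongside application code, which is the essence of treating infrastructure as code.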
Layer 4: Image
Here, a bias towards compute resources (as opposed to network or storage) becomes apparent as Image connotes use of particular operating systems and other pre-installed software components. Format can be an issue here as not all SDDCs support the same types of Images (.OVA vs .AMI, etc.), but most operating systems can be baked into different kinds of Images to run on each popular SDDC. Developers will sometimes get involved at this layer, but not nearly as much as the two layers yet to come.
Layer 5: Services
Application architectures are typically built on top of a set of common middleware components: databases, load balancers, web servers, message queues, email services, other notification methods, and so on. This Service layer is where those are defined, on top of particular Images from the layer below. Sometimes these Services manifest as open source software installed on a VM or container, such as MySQL for a database. Other times the SDDC may offer an API for accessing components from a pool of Services, such as AWS RDS, but underneath that API those components are still built upon an Image and the other layers that precede it.
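One way to see why the Service boundary matters: an application that depends only on a connection string cannot tell whether the database Service behind it is MySQL that you installed on a VM or a managed pool like AWS RDS. The hostnames and environment variable names below are illustrative assumptions:

```python
import os

def database_url():
    """Assemble the database connection string from configuration.

    Swapping a self-managed MySQL VM for a managed Service like RDS
    changes only these values, never the application code itself.
    """
    host = os.environ.get("DB_HOST", "db.internal.example.com")
    port = os.environ.get("DB_PORT", "3306")
    name = os.environ.get("DB_NAME", "appdb")
    return f"mysql://{host}:{port}/{name}"

print(database_url())
```

Keeping Service endpoints in configuration rather than code is what lets teams move between the open-source-on-a-VM and managed-pool flavors of this layer without rewriting the Applications above it.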
Layer 6: Applications
The final layer is where end users interact with the stack, through deployed Applications composed of custom code that makes use of the various Services defined below it.
Whether in a technical conversation or a sales engagement, understanding at what layer in this stack a specific person has expertise is important. Someone who implemented a Hypervisor before the SDDC layer became widely available, for example, has a very different view of the world than someone who has never known a world where the SDDC did not exist. Experts at each layer in this stack have biases and often a lack of understanding of those working at other layers. Admitting that, and having a framework everyone can use to understand how their part of the world fits into the whole, leads to better conversations because everyone understands everyone else’s motivations and points of intersection far better.
This blog originally appeared on Cisco Blogs.