A little while ago I put together a short presentation intended to provide a high-level overview of the wild and wacky world of DevOps, Microservices and Containers. I present this deck both internally and externally to give folks a feel for what is happening in IT today.
For your reference, I have added the speaker notes after the deck. I’m sure everyone has a different take on the concepts and explanations here.
Digital pioneers have reset customer expectations and disrupted industries, forcing organizations to digitally transform in order to compete and ultimately survive (witness Kodak, Borders, Blockbuster, the taxi industry, etc.). And there is no time to waste: five years after the financial crisis, companies that have been in cost-cutting mode are all waking up at the same time to realize that they have lackluster product portfolios and need to innovate.
3) Digital Business = Software (and it has a shelf life)
The key enabler for digital businesses is software, and that software has a shelf life. To be competitive, that software needs to reach customers as soon as possible. To help drive this speed and customer focus, the Agile Manifesto was created in 2001 as a reaction to the long development cycles of the “waterfall” method of software development. Agile turned the focus to the customer and to quick, iterative turns of development.
4) But that’s only “half” of the equation
While agile has sped up software development and made it more responsive to customer needs, unless it is paired with greater cooperation with operations, the overall speed of software delivery to customers remains the same.
In the past, Developers have kept their distance from operations. It is not surprising that these groups have stood apart in light of how vastly different their goals and objectives have been.
Developers are tasked with driving innovation and reinvention in order to constantly improve the user experience and deliver new features that stay one step ahead of the competition.
Operations, on the other hand, is focused on providing rock-solid stability, never letting the site go down, while at the same time being able to scale at a moment’s notice.
5) Dev + Ops: A Methodology
And this is where DevOps comes in. DevOps is a methodology intended to get developers and operations working together to decrease friction and increase velocity: you want to get your “product” to customers as quickly as you can and keep shortening that time frame, while continuously improving the product via feedback.
The gap between developers and operations is often referred to as “the wall of confusion,” where code that often isn’t designed for production is lobbed over the wall. Beyond the silos themselves, the tools on each side do not fit together and there isn’t a common “toolchain.” When the site goes down, the finger-pointing begins: ops accuses devs of writing bad code, and devs accuse ops of not implementing it correctly. This friction is obviously not productive in a world where “slow is the new down.”
By tearing down the wall, the former delineation of responsibilities blurs:
Developers are asked to put “skin in the game” and, for example, carry a pager so they are notified when an application goes down.
Conversely, operations will need to learn some basic coding.
In this new world order, developers and ops folks who understand and can work with “the other side” are in high demand.
6) DevOps: What it’s all about
Double-clicking on DevOps, here is how it flows from tenets to requirements and then benefits. I should say that there are a lot of different interpretations of which components make up the key characteristics of DevOps, but in the true spirit of the methodology, you need to move forward with “good enough” (“always ready, never done”). One factor that is widely agreed upon is that culture is the most important characteristic of DevOps. Without it, you can have all the great processes and tools you want, but they will languish. All of this is underpinned by a foundation of cloud and open source software (of which the majority of the tools and platforms are composed), as well as microservices – which I will expand on in a second.
7 & 8) Tool chain
Now, while I said tools are not as important as culture, the concept of a toolchain provides a good illustration of the connected nature of DevOps. DevOps demands a linked toolchain of technologies to facilitate collaborative change. Interchangeability is key to the success of the DevOps toolchain (the tools are loosely coupled via APIs). Open source tool adoption and appetite remain strong; however, large-enterprise clients prefer commercially supported open source distributions. You will see toolchains depicted many different ways with different players and buckets, but this example gives a decent overview of the high-level linkage of processes and components. There are many different tools out in the market that fit into these buckets, but I have picked just a couple for each to act as illustrations.
It all starts with new code
Continuous integration (CI) is the software engineering practice of merging all developer working copies to a shared mainline several times a day. Changes are immediately tested and reported on as they are added to the larger code base.
Version control: these changes to the code are tracked in a central repository – the “one source of truth.”
Code deployment: installs the code across hundreds or thousands of servers.
Measurement and monitoring: continuously measures and monitors the environment to identify bottlenecks. This data is then fed back to the front of the chain to drive improvements.
Across this chain the code travels in the form of Microservices that are conveyed in containers.
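To make the flow concrete, here is a minimal sketch of those toolchain stages driven from Python with the Docker SDK (pip install docker). The image name, test command and registry steps are illustrative placeholders rather than an actual pipeline.

```python
# A toy pass through the toolchain using the Docker SDK for Python.
# The image tag and test command below are placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# New code arrives: build an image from the Dockerfile in this directory
client.images.build(path=".", tag="myapp:candidate")

# Continuous integration: run the test suite inside the freshly built image
logs = client.containers.run("myapp:candidate", "pytest", remove=True)
print(logs.decode())

# Code deployment: push the tested image so servers can pull and run it
for line in client.images.push("myapp", tag="candidate", stream=True, decode=True):
    print(line)
```

In a real chain, a CI server would run these steps on every merge, and the measurement and monitoring tools would watch the results in production.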
9) Microservices: essential to iterate, scale and speed
Let’s take a closer look at microservices, which, although they support DevOps, have developed independently over the last few years as a grassroots, developer-driven effort. Microservices is the concept of decomposing software applications into loosely coupled, recombinable, bite-sized processes, e.g. breaking a “store” component into separate order processing, fulfillment and tracking services. This decomposition greatly increases the ability to iterate and scale, and it increases speed, thereby enabling continuous delivery. Microservices and cloud go hand-in-hand, where autoscaling can help ensure no service becomes a bottleneck by adding horsepower where needed. Docker and microservices are a perfect fit.
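As a toy illustration of that decomposition, here is what one of those bite-sized services might look like as a standalone process with its own API. This is only a sketch, assuming the Flask web framework (pip install flask); the routes and in-memory datastore are made up for the example.

```python
# A minimal "order processing" microservice; fulfillment and tracking
# would be separate services with their own codebases and containers.
from flask import Flask, jsonify, request

app = Flask(__name__)
ORDERS = {}  # stand-in for the service's own datastore

@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()
    order_id = len(ORDERS) + 1
    ORDERS[order_id] = order
    # A real system would publish an event here for the (separate)
    # fulfillment and tracking services to consume.
    return jsonify({"id": order_id}), 201

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    return jsonify(ORDERS[order_id])

if __name__ == "__main__":
    app.run(port=5000)
```

Because each service stands alone like this, you can scale or redeploy the order service without touching fulfillment or tracking.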
10) Enter the modern container:
As I mentioned previously, containers fit well as the conduit to deliver microservices. Containers have been around for over a decade in the form of Solaris Zones and BSD jails, as well as at Google, which has used them to run its infrastructure (creating and blowing away 2 billion containers a week). But it is only in the last year or two that they have come to the fore, thanks to Docker, which evolved Linux containers in the context of modern applications and made containers easy to use for the general dev/ops person (Docker expertise is currently the second most sought-after skill in the tech world).
Containers serve perfectly as vehicles to convey microservices and applications across the toolchain from development through testing, staging and production, much the same way goods in shipping containers can be packaged and sent on a truck from the warehouse, loaded onto a ship, and then put on a truck waiting on the other side. Additionally, they can be used on public and private clouds as well as on bare-metal servers.
11) Containers vs VMs.
Architecturally, VMs and containers differ in that VMs sit on top of a hypervisor, and each VM contains both a guest OS and an app. A container, on the other hand, packages just an app or service and sits directly on top of the OS. Given the maturity of VMs, they are more secure than containers, but they also take much longer to spin up. Containers don’t currently have the security of a VM, but they spin up in milliseconds rather than seconds or minutes. To address the security concerns, in most cases today organizations are running containers within virtual machines.
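If you want a feel for container start-up speed yourself, a quick and admittedly unscientific timing like the sketch below works. It assumes a local Docker daemon and the Docker SDK for Python; the tiny “alpine” image is just an illustrative choice.

```python
# Time a full container lifecycle: create, start, run a command, remove.
import time
import docker

client = docker.from_env()
client.images.pull("alpine")  # pull ahead of time so we don't time the download

start = time.time()
client.containers.run("alpine", "true", remove=True)
print("full container run took %.3f seconds" % (time.time() - start))
```

Even this full create-run-remove cycle typically finishes in well under the time it takes a VM just to boot its guest OS.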
As with all new technology, containers are still rough around the edges, and if you aren’t an early-adopter kind of organization, you may want to play with or pilot them but not implement them on a large scale just yet.
12) The landscape:
At this point the container landscape is an ever-changing field populated by small and large players, and it is dominated by open source offerings.
Container engines: At the center of gravity of the landscape are the container engines themselves, made up of the 800-pound gorilla, Docker, as well as Rocket, which was created by CoreOS in response to what CoreOS felt was a lack of security in the Docker container. This summer the Open Container Initiative was kicked off to bring the two sides together and create a common spec.
Micro OSes: Sitting beneath the containers are the micro OSes, basically the size of 25 pictures on your cell phone (100 MB), or 1/20th the size of a typical OS. What makes these so small is that they have been stripped down to the bare necessities – no fax software included, for example. These began with CoreOS, and now there are offerings from Red Hat (Atomic), Microsoft (Nano), VMware (Photon) and Rancher, among others (including Intel’s Clear Linux and Ubuntu’s Snappy).
Container orchestration: Just as you can have VM or server sprawl, you can have container sprawl, and you need to be able to manage all those containers. The offering that sits at the center here is Google’s Kubernetes, built on Google’s experience running its own container management platform, and it can be combined with the other orchestration offerings. The others include Rancher, Docker Swarm, CoreOS, Mesosphere (based on the Apache Mesos project) and Flocker, a container data volume manager. (A small sketch of driving an orchestrator this way appears after this list.)
Clouds with Docker support: Most clouds are now building in Docker support, from OpenStack to Joyent’s Triton, Google’s Container Engine, Amazon’s EC2 and Microsoft Azure.
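To give a flavor of what managing container sprawl looks like in practice, here is a tiny sketch that inventories every pod a Kubernetes cluster is running, using the official Python client (pip install kubernetes). A reachable cluster and a configured kubeconfig are assumed.

```python
# List every pod Kubernetes is managing, namespace by namespace.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```

Orchestrators build scheduling, scaling and self-healing on top of exactly this kind of cluster-wide view.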
13) The DevOps equine continuum
Now, if we zoom back out and take a look at the implementation of DevOps, it can be illustrated by the analogy of an “equine continuum,” a model for classifying companies into three buckets according to their position on the DevOps journey.
In the upper right you have the “Unicorns” (not the billion-dollar-valued unicorns of the valley) such as AWS, Google and Uber, who have employed the DevOps methodology since their beginnings or soon thereafter. These tend to be cloud-based companies.
Next on the continuum are the “Race Horses,” oftentimes banks like Goldman Sachs or JP Morgan Chase, who are starting to implement DevOps to increase their agility and gain a competitive edge.
In the lower left are the “Work Horses,” who have just started looking into how they can improve their competitiveness via digital transformation and what role DevOps may play.
14) Where do I start?
If you fit into the work horse classification and you’re looking to get started, we are not advocating that you dump all your existing infrastructure and implement DevOps wholesale; for one thing, you would have a mutiny on your hands. The best place to focus is on the fast-changing, customer-facing applications and services on the front end. You would want to leave stable, transaction-oriented systems on the back end as they are.
15) What Dell is doing in this space
Offerings
Professional services: Dell’s professional services organization has an array of offerings to enable organizations to implement DevOps practices:
DevOps-focused test automation and performance-testing services
OpenShift: Working with our partner Red Hat, Dell is making the OpenShift Platform as a Service available to our customers.
Dell XPS 13 developer edition: This is an Ubuntu Linux-based developer laptop that allows developers to create applications/microservices within Docker containers on their laptops and then deploy these containers directly to the cloud.
Open Networking OS 10: This switch OS works with Kubernetes, which coordinates the hardware pieces; OS 10 programs the hardware as containers come and go.
Projects
Flocker plugin: Code that allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series has been made available on github. What this does is allow developer and operations teams to use existing storage to create portable container-level storage for Docker. Rather than coming from an internal planning process or committee, the idea for a Flocker plugin came from Dell storage coder Sean McGinnis. Sean was looking for ways to make Dell Storage an infrastructure component in an open source environment.
Containerizing an old-school application: There are also several projects going on within the company to develop a greater understanding of containers and their advantages. About a year ago, senior Linux engineer Jose De la Rosa had heard so much about Docker and container-mania that he thought he’d find out what the fuss was all about. Jose started looking around for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA). In case you’re wondering, OMSA is an in-house application used to manage and monitor Dell’s PowerEdge servers. Rather than being a microservice-based application, OMSA is an old-school legacy app. Jose succeeded in containerizing the application and learned quite a bit in the process.
CTO Lab: Dell’s CTO team has set up Joyent’s elastic container infrastructure, Triton, in our lab running Docker. The idea is to learn from this platform and then work with the Active Systems Manager team to decompose ASM into microservices and run it on the Triton platform.
Industry Consortia and Internal use of DevOps
Open Container Initiative: Dell is a member of the Open Container Initiative, which is hosted by the Linux Foundation and is chartered to create common specifications for containers to allow for interoperability and increased security.
Dell IT: Within Dell itself, DevOps is being used to support Dell.com and internal IT. Dell’s Active System Manager team employs the DevOps methodology in its product development process.
A couple weeks ago when Silicon Valley-based Darius Goodall and Cliff Wichmann made the pilgrimage out to Austin I grabbed some time with them to learn about the recently announced OS 10. Darius heads up the DevOps and tech partner ecosystem in Dell’s networking group while Cliff is the software architect for OS 10.
Take a listen as they take us through the new OS and where it’s going.
Some of the ground Darius and Cliff cover
A couple of years ago Dell disaggregated the switch hardware from the software, and now we’re disaggregating the software itself
Think of the switch itself as a Debian-based server with a bunch of ethernet ports
It will allow you to orchestrate, automate and integrate Linux-based apps into your switching environment
Timeline: Base version coming out in March – a DevOps friendly server environment
Timeline: In June/July the premium applications will be released – the switching packages that run on top of the Linux base, plus a fancy routing suite (if you want to get going beforehand, you can use Quagga on top of the base)
CPS: a programmatic interface we’ve added into the base in order to enable developers
Extra-credit reading
Dell serves up its own disaggregated OS – NetworkWorld
Dell drops next network OS on the waiting world – The Register
Dell’s OS10 aims to open up networks, then whole data centers – PCWorld
As we’ve talked about before, a few of us in Dell’s CTO group have recently been working with our friends at Joyent. This effort is a part of the consideration of platforms capable of intelligently deploying workloads to all major infrastructure flavors – bare-metal, virtual machine, and container.
Today’s post on this topic comes to us compliments of Glen Campbell — no, not that one, this one:
Glen has recently come from the field to join our merry band in the Office of the CTO. He will be a part of the Open Source Cloud team looking at viable upstream OSS technologies across infrastructure, OS, applications, and operations.
Triton, which can run on Joyent’s cloud, on a private cloud or on both, allows customers to take advantage of the technologies and scale Joyent leverages in its Public Cloud.
On the Triton Elastic Container Infrastructure (which I’ll call “Triton” from now on) bare-metal workloads are intelligently sequestered via the use of the “Zones” capabilities of SmartOS. Virtual machines are deployed via the leveraged KVM hypervisor in SmartOS, and Docker containers are deployed via the Docker Remote API Implementation for Triton and the use of the Docker or Docker Compose CLIs.
What’s the Dell/Joyent team doing?
As part of interacting with Triton we are working to deploy a Dell application, our Active System Manager (ASM), as a series of connected containers.
The work with Triton will encompass both Administrative and Operative efforts:
Administrative
Investigate user password-based authentication via LDAP/Active Directory
in conjunction with SSH key-based authentication for CLI work
Track/Monitor Triton logging via Elasticsearch
use Joyent’s pre-packaged build of Elastic’s (http://elastic.co) Elasticsearch
Try the newer Triton node client to see the next generation of “sdc-X” tools
Docker Compose
build a multi-tier Docker application via Docker Compose and deploy it on Triton via its Docker Remote API endpoint (a rough sketch of talking to such an endpoint appears after this list)
Triton Trident…
deploy a 3-tier application composed of:
Zone-controlled bare-metal tier (db – MySQL)
Docker-controlled container tier (app – Tomcat)
VM-based tier (presentation – nginx)
Dell Active System Manager — a work in progress
aligning with Dell’s internal development and product group to establish a container architecture for the application
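As a taste of the Docker Compose and Remote API work above, here is a rough sketch of pointing the Docker SDK for Python at a remote Docker Remote API endpoint such as Triton’s, rather than at a local daemon. The endpoint URL and certificate paths are placeholders, not our lab’s actual configuration.

```python
# Talk to a remote Docker Remote API endpoint over TLS.
# The URL and certificate paths below are placeholders.
import docker

tls_config = docker.tls.TLSConfig(
    client_cert=("cert.pem", "key.pem"),  # client certificate and key
    ca_cert="ca.pem",                     # CA used to verify the endpoint
    verify=True,
)
client = docker.DockerClient(base_url="tcp://triton.example.com:2376",
                             tls=tls_config)

# The same calls you would make against a laptop daemon now land on the
# remote endpoint, which schedules the container onto its infrastructure.
container = client.containers.run("nginx", detach=True)
print(container.id)
```

The appeal of Triton’s approach is that nothing in the client changes: the endpoint simply happens to place each container directly on bare metal.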
Stay tuned
Our test environment has been created and the Triton platform has been deployed. Follow-on blog posts will cover basic architecture of the environment and the work to accomplish the Admin and Ops tasks above. Stay tuned!
Extra-credit reading
Instructions: Installing Triton Elastic Container Infrastructure (updated to reflect learnings from setting up Triton in the Dell CTO lab)
Last week I flew out to sunny California to participate in SCaLE 14x and the UbuCon summit. As the name implies this was the 14th annual SCaLE (Southern California Linux Expo) and, as always, it didn’t disappoint. Within SCaLE was the UbuCon summit which focused on what’s going on within the Ubuntu community and how to better the community.
While there I got to deliver a talk on Project Sputnik, “The Sputnik story: innovation at a large company.” I also got to hang out with some of the key folks within the Ubuntu and Linux communities. One such person is Mark Shuttleworth, founder of Ubuntu and Canonical. I grabbed some time with Mark between sessions and got to learn about the upcoming 16.04 LTS release (aka Xenial Xerus) due out on April 21st.
Take a gander:
Some of the ground Mark covers
The big stories for 16.04 LTS
LXD — ultralight VMs that operate like containers and give you the ability to run 100s of VMs on a laptop. Mark’s belief is that this will fundamentally change the way people use their laptops to do distributed development for the cloud.
Snappy — a very tight packaging format for Ubuntu desktop and server distros. Snaps provide a much better way of sharing packages than PPAs, and a cleaner, faster way of creating packages.
Juju and charms
Where do Juju charms and snappy intersect? (hint: They’re orthogonal but work well together, charms can use snaps)
OS and services
The idea is to have the operating system fade into the background so that users can focus instead on services in the cloud, e.g. “give me this service in the cloud” (which Juju will allow) or “deliver this set of bits to a whole set of machines,” à la Snappy
Here is our third and final post walking through the setting up of the Joyent Triton platform in the Dell CTO lab. In the first post, Don Walker of the CTO office gave an overview of what we were doing and why. The second laid out the actual components and configuration of the platform.
Today’s video is a walk-through of the installation process where Don shares his experience in setting up the Triton Platform.
When we pick this series up again it will focus on containerizing Dell’s Active System Manager and then loading it on Triton. Not sure how long this work will take so stay tuned!
Some of the ground Don covers:
Before installing Triton, you need networking set up and working. Don double clicks on the network configuration and what we did to make sure it was working.
Step one in installing Triton is to create a bootable USB key and install the head node. There is a scripted setup which is dead simple; it lays down SmartOS and the Triton services
The compute node install is also scripted and reuses a lot of the info you entered during the head node configuration. After this you run acceptance tests
Great support from Joyent on a couple of small issues we had:
An unacceptable character in a password. This info was fed back to the devs and is now fixed.
We forgot to disable the SATA port and kept getting error messages. Once we disabled it, everything worked.
Continuing from the previous post, here is a more detailed explanation of the Joyent Triton platform we set up in the CTO lab. Triton is Joyent’s elastic container infrastructure that runs on their cloud, a private cloud or both.
The idea behind setting up this instance is, working with Joyent, to learn about the platform. The next step is to work with the Dell Active System Manager (ASM) team to decompose ASM into microservices and then run it on the Triton platform.
Take a listen as Don walks through the actual layout of the instance.
Some of the ground Don covers
Our minimalist set-up featuring two Dell R730 servers (the schematic only shows one for simplicity; an R730 contains two 520s). Don explains how they are configured and how ZFS affects the set-up.
The two Dell Force 10 S6000 switches.
A double-click on the networking set up
The roles of the compute and head nodes (the head node acts as the admin entry point into the system).
A while back I tweeted how we had begun setting up a mini-instance of Joyent’s Triton in our Dell CTO lab. Triton is Joyent’s elastic container infrastructure that runs on their cloud, a private cloud or both. This cloud platform includes OS and machine virtualization (e.g. Docker with regards to the former and typical VMs under KVM for the latter).
About a week ago we got the platform set up, and I grabbed some time with Don Walker of Dell’s enterprise CTO office to tell us about it.
In this first of three videos, Don gives an overview of the work Dell is doing with Joyent. He describes what we’ve set up in the lab and talks about where we hope to take it.
Some of the ground Don covers
Don’s focus on open source cloud, e.g. OpenStack, containers, and cloud networking and storage solutions
What the enterprise CTO office does
What we’re doing with Joyent: evaluating Triton and the process of taking existing products and putting them into microservices and containers.
Looking at Dell’s ASM (Active System Manager) and what it means to refactor for microservices and containers
Overview of what was set up in the lab: a minimalist 2 node instance consisting of head and compute nodes.
Last, but not least in my KubeCon video-palooza series is an interview with RackN founder, Rob Hirschfeld. Rob talks about their offering Digital Rebar and how it addresses composable operations.
Some of the ground Rob covers
How he’s taking what he did at Dell (Crowbar) beyond physical provisioning and automation.
V3 is now called Digital Rebar – composable operations; the company name is RackN
Allows you to build up a “ready state” on the infrastructure and then bring in Ceph, Kubernetes, OpenStack, Mesosphere…
What companies they’re working with and where they see themselves going – composable operations and addressing the “fidelity gap”: taking the same work from start to scale and addressing the deployment hassles
Today’s post is the penultimate video in my series of interviews from KubeCon back in November. Below, Aaron Bell, the product manager working on developer-facing tools, talks about the Mesos project and Mesosphere — what they do and who’s using it.
Some of the ground Aaron covers
How Mesosphere is connected to the Apache Mesos project (which powers Siri and is used at Twitter) – Mesosphere accounts for 50% of the committers
Continuing with my videos from KubeCon, here is a chat with ClusterHQ’s founder and CTO, Luke Marsden. Luke explains Flocker, how it’s being used, and talks about the Dell/Flocker driver.
Some of the ground Luke covers
What is Flocker (hint: a way to connect the container universe to the storage universe)
Why you don’t want containers that are “pets rather than cattle”
What types of customers are using Flocker and how Swisscom uses it along with Cloud Foundry
With today’s post we are five interviews into the videos I took at KubeCon, with three remaining.
Today’s interviewee is Rob Szumski, one of the early employees of CoreOS. Rob explains CoreOS, Tectonic and where CoreOS is going from here.
Some of the ground Rob covers
How CoreOS began as an operating system for large-scale clusters, and how Docker came around at just the right time and worked with CoreOS
CoreOS as the original micro OS
The components of Tectonic – how you should deploy your containers, on top of Kubernetes, flannel, CoreOS, etc.; it also comes with support and architectural help
What’s on tap for CoreOS and Tectonic – tools and more
While I was in San Francisco back in November, I stopped by Joyent’s headquarters. The main purpose was to talk about the Docker/Triton platform we are setting up in the CTO lab.
While I was there I chatted with Joyent’s Casey Bisson, director of product management. Casey took me through a couple of white board sessions around containers and VMs. This first session talks about how containers and VMs work together, how they’re different and where Joyent’s elastic container infrastructure, Triton, fits.
Some of the ground Casey covers
Linux allows you to build containers on your laptop and push them, as is, to the cloud. For other OS’s you need to use VMs
Containers in the cloud within VMs and the effect on efficiency
Running containers on bare metal, security concerns and how Joyent addresses these concerns
How Triton virtualizes the network into the container
Extra-credit reading
KubeCon: Learning about Joyent and Triton, the elastic container infrastructure – Barton’s blog
Here’s another interview from KubeCon back in November. This one’s a twofer. Joyent’s CEO and CTO, Scott Hammond and Bryan Cantrill respectively, talk about taking their learnings from Solaris zones and applying them to the world of modern apps and containers.
Some of the ground Scott and Bryan cover
Joyent, a software company focused on delivering a container native software infrastructure platform
They had been doing containers for six years, and when Docker came along they focused on that
How Solaris zones came about, how Joyent picked it up and ran with it, and how it acted as a foundation for today’s containerized world – How they were in the right place at the wrong time
What’s in store for Joyent going forward – supporting the movement to modern app dev and the intersection with containers – taking this new tech, productizing and simplifying it to allow enterprises to roll it out
I’m just getting around to publishing my interviews from KubeCon back in November.
Today’s interview features Red Hat’s Grant Shipley, director of developer advocacy for “container application platform” OpenShift. Grant talks about the launching of OpenShift v3.1 and what’s ahead.
Some of the ground Grant covers:
Announcing 3.1, the latest upstream version of Red Hat’s open source project OpenShift Origin
The enterprise version comes with support for Docker/Kubernetes in production
Moving away from “PaaS” to “container application platform”
All functionality is exposed via APIs; there are CLI and web console tools for ops; ops has full control but devs can self-service
How it works with Ansible (or Puppet and Chef)
What’s next going forward: continuing to focus on the dev experience, whether they’re using Node.js or Java
Although it may be a bit self-indulgent, I’d like to kick off 2016 with a look back on my blog stats for 2015.
The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.
Here’s an excerpt:
The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 250,000 times in 2015. If it were an exhibit at the Louvre Museum, it would take about 11 days for that many people to see it.
A couple of weeks ago I attended KubeCon in San Francisco. There were a series of talks as well as a bunch of vendors who were there in mini-booths chatting with folks and showing off what they do. As always, the part I get the most out of at conferences like this is the “hallway track” where I get to chat one-on-one with various folks.
One such folk was Sarah Novotny, who recently joined Google as the first Kubernetes community lead. Check out the video below where Sarah talks about her goals for the community, how it will fit with the Cloud Native Computing Foundation and how she hopes to extend this beyond a Google-only effort.
About a year ago, senior Linux engineer Jose De la Rosa had heard so much about Docker and container-mania that he thought he’d find out what the fuss was all about. Jose started looking around for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA). In case you’re wondering, OMSA is an in-house application used to manage and monitor Dell’s PowerEdge servers. Rather than being a microservice-based application, OMSA is an old-school legacy app.
To hear how Jose tackled the task, why, and what he learned, check out the following video (also take a look at the deck below that he presented at the Austin Docker meet up).
Here’s the deck Jose presented at the Austin Docker Meetup back in September.
For more info about what Jose and the Dell Linux engineering team are doing in this space, check out linux.dell.com/docker
Last week I headed out to San Francisco to attend KubeCon and soak in the Kubernetes and devops-ecosystem goodness. As the event landing page explained:
KubeCon 2015 is the first inaugural community Kubernetes conference, fully dedicated to education and community engagement focused on early Kubernetes, production users and contributors.
As I normally do at events like this I prowled the halls to look for folks doing cool stuff to interview with my trusty Flipcam.
One of the people I chatted with was Kenneth Jung, developer lead on the Photon Controller team at VMware. The Photon Controller, in short, is a cloud-scale IO solution for managing ESX. (One of the things Kenneth alludes to is the open sourcing of the Controller, which ended up happening yesterday.)
Some of the ground Kenneth covers
What the Photon Controller is – a cloud-scale IO solution for managing ESX
How the cluster manager makes deployment and management of large container frameworks like Kubernetes easy
How VMware looks at VMs vs containers
The Photon microvisor + Photon OS used in Photon Controller
They will have a Cloud Foundry release early next year
Extra-credit reading
VMware extends container campaign with open source Photon Controller – InfoWorld
VMware’s Photon Platform and How it Treats Containers – The New Stack
Today ClusterHQ and Dell announced the availability on GitHub of code that allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series. What this does is allow developer and operations teams to use existing storage to create portable container-level storage for Docker.
Before we dive into the back story of how the plugin came to be, take a listen to ClusterHQ’s founder and CTO, Luke Marsden. Luke explains Flocker, how it’s being used, and talks about the Dell/Flocker driver.
How the plugin came about
Rather than coming from an internal planning process or committee, the idea for a Flocker plugin came from Dell storage coder Sean McGinnis. Sean was looking for ways to make Dell Storage an infrastructure component in an open source environment. Some time back he noticed that Flocker seemed to be a good integration point to allow the use of Dell Storage for users looking to move to containerized solutions.
Sean saw a lot of overlap with what his team was already doing with their OpenStack Cinder driver (both are written in Python and share some common storage-management concepts). He realized that they could reuse the majority of this code for a Flocker driver by providing the Flocker driver interface to translate Flocker calls into our storage API. Along with Ian Anderson (another Dell Storage engineer), the pair engaged ClusterHQ to explore the requirements for bringing Storage Center support to Flocker.
Sean and Ian then worked internally to implement our Flocker driver, open-source it and put the code up on GitHub.
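To sketch the translation idea Sean describes: a driver of this kind is essentially a thin class that maps each volume operation the container layer asks for onto a storage-array API call. Everything below is hypothetical and for illustration only; the class, method and client names are made up, so see the actual driver on GitHub for the real interface.

```python
# A loose, hypothetical skeleton of a Flocker-style storage driver.
# None of these names come from the real Dell SC Series driver.
class ScSeriesBlockDeviceSketch(object):
    """Maps volume operations onto (imaginary) SC Series API calls."""

    def __init__(self, sc_api_client):
        self._sc = sc_api_client  # hypothetical SC Series REST client

    def create_volume(self, dataset_id, size):
        # The orchestration layer asks for a volume; the array creates one
        return self._sc.create_volume(name=str(dataset_id), size_bytes=size)

    def attach_volume(self, volume_id, host):
        # Map the volume to the server that will run the container
        self._sc.map_volume(volume_id, server=host)

    def destroy_volume(self, volume_id):
        self._sc.delete_volume(volume_id)
```

The reuse Sean describes comes from the fact that the Cinder driver already spoke this storage API; only the thin translation layer on top had to change.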
The code, storage and beyond
-> You can check out the code and play with it for yourself here on GitHub.
Going forward the team is looking to expand Dell Storage’s open source offerings hosted on GitHub. They see a lot of potential for containers and will continue working in this space to make sure enterprise customers can leverage their storage arrays to support these developing environments.
Beyond storage, Dell is looking to start open sourcing more code and putting it up on GitHub. Don’t expect a deluge right off the bat but hopefully over time you will start seeing more and more.
Extra-credit reading
The Container Wars Are Under Way As Dell, And Others, Add Flocker Support – Tom’s IT Pro
Dell works with Cluster HQ to allow Docker containers to leverage Dell Storage – Dell4Enterprise
As Dell as a company continues to evolve, we have started implementing DevOps practices in our software development. Dell IT is employing DevOps, as are some of our product development teams.
In the following video, systems engineer Chris Gully explains how Dell’s Active System Manager has incorporated DevOps into its development. (The audio could be a bit better, so you’ll have to crank it up a bit for Chris. 🙂)