Earlier this summer I was out in Seattle for DockerCon. Among the people I interviewed was Taylor Brown of Microsoft. While Microsoft may not be the first company you think of when talking containers, they actually have a bunch going on. Taylor, in fact, leads the team focused on the server container technology coming out of Windows, e.g. Hyper-V containers and Windows Server containers.
Taylor and I sat down and he took me through what his team has been up to and their goals for the future.
Take a listen
Some of the ground Taylor covers
Taylor and his team support customers running Windows on Azure, Amazon, Google and others.
The team has been working closely with Docker and the community contributing code to allow Docker to work with Windows
Windows Server 2016 will come with full container support (see the sketch after this list)
Following on Azure’s container services with Linux, they’re adding Windows support
Goals for the future: performance and scaling are a big focus; security around authentication and authorization; also thinking about Linux containers on Windows
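To make that container support concrete, here is a minimal sketch of what running Windows containers looks like with the Docker CLI on a Windows Server 2016 host. The base image name reflects Microsoft's published images of that era; treat the exact commands as illustrative rather than gospel.

    # pull a Windows base image and start an interactive container
    docker pull microsoft/windowsservercore
    docker run -it microsoft/windowsservercore cmd

    # Hyper-V containers use the same tooling but add VM-grade isolation
    docker run -it --isolation=hyperv microsoft/windowsservercore cmd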
Extra-credit reading
Docker’s Close Integration with Windows Server – Redmond magazine
Microsoft PowerShell Goes Open Source, Arrives On Linux, Mac – InformationWeek
VIDEO: Ubuntu comes to the Windows desktop — OpenStack summit – Barton’s Blog
Just when the tech world was starting to get its head around containers, along come unikernels. Like containers, unikernels have been around in some form or another for quite a while. Their resurgence is due in large part to their container-like functionality. In a nutshell, a unikernel combines an uber-stripped-down version of an OS with an individual app or service, providing a unit even smaller and more agile than a container.
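For a taste of how one gets built, here is roughly what compiling a hello-world unikernel looks like with MirageOS, the library OS the Unikernel Systems folks work on. Commands vary by release, so take this as a sketch rather than a recipe:

    # fetch the MirageOS examples and build one into a unikernel
    git clone https://github.com/mirage/mirage-skeleton.git
    cd mirage-skeleton/tutorial/hello
    mirage configure -t unix   # -t xen would target the Xen hypervisor instead
    make depend                # pull in the OCaml library dependencies
    make                       # output: a single image containing app + library OS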
Back in January Docker, seeing the strategic importance (threat?) of unikernels, acquired Unikernel Systems. Unikernel Systems, based in Cambridge in the UK, is made up of former developers of the Xen hypervisor project.
At OSCON I caught up with Richard Mortier, formerly of Unikernel Systems and now a Docker employee, to learn about the wild and wacky world of unikernels.
Some of the ground Richard covers
What is a unikernel?
How is Docker positioning unikernels within its portfolio?
How unikernels augment, rather than replace containers
Unikernels: love ’em? hate ’em?
Unikernels are not without their vehement detractors. Roman Shaposhnik, in his post “In defense of unikernels” does a pretty good job of laying out the good and the bad. Roman’s conclusion:
….unikernels are not a panacea. Nothing is. But they are a very useful building block that doesn’t need any additional FUD. If you really want to fight something that is way overhyped you know where to find linux containers.
Extra-credit reading
Introducing Unik: Build and Run Unikernels with Ease – Linux.com
Docker bags unikernel gurus – now you can be just like Linus Torvalds – The Register
‘Unikernels will send us back to the DOS era’ – DTrace guru Bryan Cantrill speaks out – The Register
Docker kicks off the unikernel revolution – InfoWorld
Last week I attended DockerCon 2016 in Seattle. Besides spending time working the Dell booth, I grabbed a bunch of folks and did some short, guerrilla-style interviews. One of my victims was Kit Colbert, who heads up VMware’s cloud native applications group.
With the onslaught of container-mania, VMware, the 800-pound VM gorilla, has had to take a hard look at the changing landscape and decide if and how it wanted to join the fray.
VMware’s response
VMware’s decision was to sally forth with not one but two entrants into the land of containers: Photon Platform and vSphere Integrated Containers. In the video below Kit gives an overview of Photon Platform and explains how it relates to vSphere Integrated Containers.
In the second video the product manager for VMware’s vSphere Integrated Containers, Karthik Narayan, provides a double-click on this vSphere-based offering.
Some of the ground Kit covers
Photon is targeted at those customers who are taking a greenfield approach and are looking for a platform optimized for cloud native applications. It GA’d this month and came with a version of Pivotal Cloud Foundry
Photon’s components: 1) the Photon Controller, which acts as a manager of all the hosts, 2) Photon OS, a container-optimized Linux distro, and 3) Photon Machine, which is ESX and, going forward, will be optimized for cloud native applications.
Native Hybrid Cloud: a tightly integrated stack from EMC composed of: Photon platform + EMC’s VxRack + Pivotal Cloud Foundry
Some of the ground Karthik covers
vSphere Integrated Containers is an extension of vSphere that natively integrates with Docker. It is targeted at enterprises that want to run containers alongside existing apps and workloads.
It is composed of vSphere + ESX hypervisor + vCenter + vSAN + NSX, etc.
It allows enterprises to take their existing environments, add vSphere Integrated Containers and, in about 20 minutes, have an environment that lets their developers work with Docker while Ops manages these new workloads from an environment they’re already familiar with.
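To give a feel for that developer experience: vSphere Integrated Containers exposes a Docker-compatible endpoint, so developers talk to it with a stock Docker client. The host name and port below are placeholders I made up; the general shape follows the standard Docker remote API rather than anything VMware-specific.

    # point an ordinary Docker client at the endpoint Ops provisioned
    export DOCKER_HOST=tcp://vch.example.com:2376
    docker info          # sanity-check the connection
    docker run -d nginx  # under the covers the container lands on vSphere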
Extra-credit reading
VMware Hires Longtime Intel Linux Exec As Its First-Ever Chief Open Source Officer – CRN
Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers) – CormacHogan.com
Yesterday at the OpenStack summit here in Austin I caught a few of the sessions in the track that Canonical was hosting. One of the sessions dealt with Canonical’s LXD and where it fits into the whole virtualization/container space.
The talk was given by Dustin Kirkland, and after he had finished I grabbed him to explain the basics of LXD and the landscape it fits within.
Have a listen
Some of the ground Dustin covers:
What is LXD and how is it different from virtual machines and containers
How LXD acts like a hypervisor but is fundamentally a container
Application containers vs Machine containers
Application containers like Docker host a single process on a filesystem
Machine containers from LXD boot a full OS on their filesystems
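The contrast is easiest to see side by side. A minimal sketch, using 2016-era image names:

    # machine container: LXD boots a full Ubuntu userspace you treat like a VM
    lxc launch ubuntu:16.04 demo
    lxc exec demo -- bash     # a full OS: init, ssh, multiple services

    # application container: Docker runs a single process on its filesystem
    docker run -d nginx       # just the nginx process, no init system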
As we’ve talked about before, a few of us in Dell’s CTO group have recently been working with our friends at Joyent. This effort is a part of the consideration of platforms capable of intelligently deploying workloads to all major infrastructure flavors – bare-metal, virtual machine, and container.
Today’s post on this topic comes to us compliments of Glen Campbell — no, not that one, this one:
Glen has recently come from the field to join our merry band in the Office of the CTO. He will be a part of the Open Source Cloud team looking at viable upstream OSS technologies across infrastructure, OS, applications, and operations.
Joyent’s Triton Elastic Container Infrastructure, which can run as a private cloud, allows customers to take advantage of the technologies and scale Joyent leverages in their Public Cloud.
On the Triton Elastic Container Infrastructure (which I’ll call “Triton” from now on), bare-metal workloads are intelligently sequestered via the “Zones” capabilities of SmartOS, virtual machines are deployed via the KVM hypervisor that SmartOS leverages, and Docker containers are deployed via Triton’s implementation of the Docker Remote API and the Docker or Docker Compose CLIs.
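In practice, pointing a Docker client at Triton means setting the standard Docker environment variables to Triton’s Docker Remote API endpoint. Joyent publishes a helper script for this; the endpoint and account below are placeholders:

    # fetch Joyent's setup helper and generate certs/env vars for your account
    curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh
    bash sdc-docker-setup.sh <triton-endpoint> <account> ~/.ssh/id_rsa

    # the script prints DOCKER_HOST/DOCKER_CERT_PATH/DOCKER_TLS_VERIFY values;
    # once exported, a stock Docker CLI talks to Triton instead of a local daemon
    docker info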
What’s the Dell/Joyent team doing?
As part of interacting with Triton we are working to deploy a Dell application, our Active System Manager (ASM), as a series of connected containers.
The work with Triton will encompass both Administrative and Operative efforts:
Administrative
Investigate user password-based authentication via LDAP/Active Directory
in conjunction with SSH key-based authentication for CLI work
Track/Monitor Triton logging via Elasticsearch
use Joyent’s pre-packaged build of Elastic’s (http://elastic.co) Elasticsearch
Evaluate the newer Triton node client to see the next generation of the “sdc-X” tools
Docker Compose
build a multi-tier Docker application via Docker Compose and deploy it on Triton via its Docker Remote API endpoint (see the sketch after this list)
Triton Trident…
deploy a 3-tier application composed of:
Zone-controlled bare-metal tier (db – MySQL)
Docker-controlled container tier (app – Tomcat)
VM-based tier (presentation – nginx)
Dell Active System Manager — a work in progress
aligning with Dell’s internal development and product group to establish a container architecture for the application
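As a preview of the Docker Compose task above, here is a rough sketch of how the same three tiers could be described in the Compose v1 syntax of the day. It is purely illustrative; the actual ASM decomposition is still being worked out.

    # write an illustrative docker-compose.yml (v1 syntax)
    cat > docker-compose.yml <<'EOF'
    db:
      image: mysql:5.6
      environment:
        MYSQL_ROOT_PASSWORD: changeme
    app:
      image: tomcat:8
      links:
        - db
    web:
      image: nginx
      links:
        - app
      ports:
        - "80:80"
    EOF

    # with DOCKER_HOST pointed at Triton's Docker Remote API endpoint,
    # the same file brings up all three tiers as Triton containers
    docker-compose up -d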
Stay tuned
Our test environment has been created and the Triton platform has been deployed. Follow-on blog posts will cover basic architecture of the environment and the work to accomplish the Admin and Ops tasks above. Stay tuned!
Extra-credit reading
Instructions: Installing Triton Elastic Container Infrastructure (updated to reflect learnings from setting up Triton in the Dell CTO lab)
A while back I tweeted how we had begun setting up a mini-instance of Joyent’s Triton in our Dell CTO lab. Triton is Joyent’s elastic container infrastructure that runs on their cloud, a private cloud or both. This cloud platform includes both OS and machine virtualization (Docker containers for the former, typical VMs under KVM for the latter).
About a week ago we got the platform set up, and I grabbed some time with Don Walker of Dell’s enterprise CTO office to tell us about it.
In this first of three videos, Don gives an overview of the work Dell is doing with Joyent. He describes what we’ve set up in the lab and talks about where we hope to take it.
Some of the ground Don covers
Don’s focus on Open Source Cloud, e.g. OpenStack, containers, and cloud networking and storage solutions
What the enterprise CTO office does
What we’re doing with Joyent: evaluating Triton and the process of taking existing products and putting them into microservices and containers.
Looking at Dell’s ASM (Active System Manager) and what it means to refactor for microservices and containers
Overview of what was set up in the lab: a minimalist two-node instance consisting of a head node and a compute node.
With today’s post we are five interviews into the videos I took at Kubecon with three remaining.
Today’s interviewee is Rob Szumski, one of the early employees of CoreOS. Rob explains CoreOS, Tectonic and where CoreOS is going from here.
Some of the ground Rob covers
How CoreOS began as an operating system for large-scale clusters, and how Docker came around at just the right time and worked with CoreOS
CoreOS as the original micro OS
The components of Tectonic – how you should deploy your containers, on top of Kubernetes, flannel, CoreOS, etc.; it also comes with support and architectural help
What’s on tap for CoreOS and Tectonic – tools and more
While I was in San Francisco back in November, I stopped by Joyent’s headquarters. The main purpose was to talk about the Docker/Triton platform we are setting up in the CTO lab.
While I was there I chatted with Joyent’s Casey Bisson, director of product management. Casey took me through a couple of white board sessions around containers and VMs. This first session talks about how containers and VMs work together, how they’re different and where Joyent’s elastic container infrastructure, Triton, fits.
Some of the ground Casey covers
Linux allows you to build containers on your laptop and push them, as is, to the cloud; for other OSes you need to use VMs (see the sketch after this list)
Containers in the cloud within VMs and the effect on efficiency
Running containers on bare metal, security concerns and how Joyent addresses these concerns
How Triton virtualizes the network into the container
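A minimal sketch of that laptop-to-cloud flow, with a placeholder image name; the only thing that changes when targeting Triton is where DOCKER_HOST points:

    # on the laptop: build and publish the image
    docker build -t myuser/myapp .
    docker push myuser/myapp

    # against the cloud endpoint: the same image runs unchanged
    export DOCKER_HOST=tcp://docker.example.com:2376   # e.g. a Triton endpoint
    docker run -d myuser/myapp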
Extra-credit reading
KubeCon: Learning about Joyent and Triton, the elastic container infrastructure – Barton’s blog
Here’s another interview from KubeCon back in November. This one’s a twofer. Joyent’s CEO and CTO, Scott Hammond and Bryan Cantrill respectively, talk about taking their learnings from Solaris zones and applying them to the world of modern apps and containers.
Some of the ground Scott and Bryan cover
Joyent, a software company focused on delivering a container native software infrastructure platform
They had been doing containers for 6 years and when Docker came along they focused on that
How Solaris zones came about, how Joyent picked it up and ran with it, and how it acted as a foundation for today’s containerized world – How they were in the right place at the wrong time
What’s in store for Joyent going forward – supporting the movement to modern app dev and the intersection of containers – taking this new tech and productizing and simplifying it to allow enterprises to roll it out
I’m just getting around to publishing my interviews from KubeCon back in November.
Today’s interview features Red Hat’s Grant Shipley, director of developer advocacy for “container application platform” OpenShift. Grant talks about the launching of OpenShift v3.1 and what’s ahead.
Some of the ground Grant covers:
Announcing 3.1, the latest upstream version of Red Hat’s open source project OpenShift Origin
The Enterprise edition comes with support for Docker/Kubernetes in production
Moving away from “PaaS” to “container application platform”
All functionality is exposed via APIs, with CLI and web console tools for ops; ops has full control but devs can self-service (see the sketch after this list)
How it works with Ansible (or Puppet and Chef)
What’s next going forward: continuing to focus on the dev experience, whether they’re using Node.js or Java
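For a taste of that self-service developer flow, here is roughly what it looks like with OpenShift v3’s oc CLI. The sample repo is one of Red Hat’s public examples; treat the exact invocations as illustrative.

    # log in and create a personal project (developer self-service)
    oc login https://openshift.example.com:8443
    oc new-project demo

    # build and deploy straight from source; OpenShift detects a Node.js app
    oc new-app https://github.com/openshift/nodejs-ex
    oc expose service nodejs-ex   # create a route so the app is reachable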
About a year ago senior Linux engineer Jose De la Rosa had heard so much about Docker and container-mania that he thought he’d find out what the fuss was all about. Jose started looking around for an app within Dell that he could containerize and came across Dell’s OpenManage Server Administrator (OMSA). In case you’re wondering, OMSA is an in-house application used to manage and monitor Dell’s PowerEdge servers. Rather than being a microservice-based application, OMSA is an old-school legacy app.
To hear how Jose tackled the task, why, and what he learned, check out the following video (also take a look at the deck below, which he presented at the Austin Docker meetup).
Here’s the deck Jose presented at the Austin Docker Meetup back in September.
For more info about what Jose and the Dell Linux engineering team are doing in this space, check out linux.dell.com/docker
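For flavor, here is a hypothetical sketch of what containerizing an app like OMSA can look like. The repo bootstrap URL and package name come from Dell’s public Linux repository, but the Dockerfile itself is my guess at the shape, not Jose’s actual recipe (see linux.dell.com/docker for the real thing):

    # hypothetical Dockerfile for an OMSA-style legacy app
    cat > Dockerfile <<'EOF'
    FROM centos:7
    # add Dell's OpenManage yum repo, then install the OMSA packages
    RUN curl -s http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash \
        && yum install -y srvadmin-all && yum clean all
    # start the OMSA services, then keep the container alive
    CMD ["sh", "-c", "/opt/dell/srvadmin/sbin/srvadmin-services.sh start && tail -f /dev/null"]
    EOF

    # OMSA manages host hardware, so it needs broad access to the host
    docker build -t omsa .
    docker run -d --privileged --net=host omsa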
Last week I headed out to San Francisco to attend Kubecon and soak in the Kubernetes and devops-ecosystem goodness. As the event landing page explained:
KubeCon 2015 is the first inaugural community Kubernetes conference, fully dedicated to education and community engagement focused on early Kubernetes, production users and contributors.
As I normally do at events like this I prowled the halls to look for folks doing cool stuff to interview with my trusty Flipcam.
One of the people I chatted with was Kenneth Jung, developer lead on the Photon Controller team at VMware. The Photon Controller, in short, is a cloud-scale IO solution for managing ESX. (One of the things Kenneth alludes to is the open sourcing of the Controller, which ended up happening yesterday.)
Some of the ground Kenneth covers
What is the photon controller – cloud scale io solution for managing ESX
How the cluster manager makes deployment and management of large container frameworks like Kubernetes easy
How VMware looks at VMs vs containers
The Photon microvisor + Photon OS used in Photon Controller
They will have a Cloud Foundry release early next year
Extra-credit reading
VMware extends container campaign with open source Photon Controller – InfoWorld
VMware’s Photon Platform and How it Treats Containers – The NewStack
Today ClusterHQ and Dell announced the availability on GitHub of code that allows ClusterHQ’s Flocker to integrate with the Dell Storage SC Series. What this does is allow developer and operations teams to use existing storage to create portable container-level storage for Docker.
Before we dive into the back story on how the plugin came to be, take a listen to ClusterHQ’s founder and CTO Luke Marsden. Luke explains Flocker and how it’s being used, and talks about the Dell/Flocker driver.
How the plugin came about
Rather than coming from an internal planning process or committee, the idea for a Flocker plugin came from Dell storage coder Sean McGinnis. Sean was looking for ways to make Dell Storage an infrastructure component in an open source environment. Some time back he noticed that Flocker seemed to be a good integration point to allow the use of Dell Storage for users looking to move to containerized solutions.
Sean saw a lot of overlap with what his team was already doing with their OpenStack Cinder driver (both are written in Python and share some common storage management concepts). He realized that they could reuse the majority of this code for a Flocker driver by implementing the Flocker driver interface to translate Flocker calls into our storage API. Along with Ian Anderson (another Dell Storage engineer), the pair engaged ClusterHQ to explore requirements for bringing Storage Center support to Flocker.
Sean and Ian then worked internally to implement our Flocker driver, open source it and put the code up on GitHub.
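To show where this lands for users: once the driver is configured as Flocker’s storage backend, Docker’s Flocker volume plugin exposes SC-backed volumes by name. A minimal sketch, with placeholder volume and image names:

    # the data volume is managed by Flocker; the Dell SC driver
    # provisions the underlying block storage on the array
    docker run -d --volume-driver=flocker -v demo-data:/var/lib/mysql mysql:5.6

    # if the container is rescheduled to another host, Flocker reattaches
    # the same volume there, so the data follows the container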
The code, storage and beyond
-> You can check out the code and play with it for yourself here on GitHub.
Going forward the team is looking to expand Dell Storage’s open source offerings hosted on GitHub. They see a lot of potential for containers and will continue working in this space to make sure enterprise customers can leverage their storage arrays to support these developing environments.
Beyond storage, Dell is looking to start open sourcing more code and putting it up on GitHub. Don’t expect a deluge right off the bat but hopefully over time you will start seeing more and more.
Extra-credit reading
The Container Wars Are Under Way As Dell, And Others, Add Flocker Support – Tom’s IT Pro
Dell works with Cluster HQ to allow Docker containers to leverage Dell Storage – Dell4Enterprise
Yesterday at Dell World, Dell’s annual customer event, I did a session entitled: DevOps, Containers and Microservices: Buzzwords or fundamental to survival?
The idea was to explain these concepts, show how they serve as a foundation for digital transformation and talk about where Dell plays in the space. (see abstract below)
Topics and times
2:20 – 5:54 What is DevOps?
6:58 – 9:30 What are containers?
10:24 – 12:30 What are microservices?
12:30 – 15:00 Where does Dell play? (professional services, testing, creating MVPs)
Check it out.
Abstract:
Gartner believes that by 2016, DevOps will evolve from a niche strategy employed by large cloud providers to a mainstream strategy employed by 25% of the largest 2000 global organizations [1]. One of the key developments within this space is container technology. In turn, both DevOps and container technologies are proof of a larger shift in IT to a microservices architecture.
These technologies together serve as the foundation for agility and responsiveness in the modern enterprise. They give organizations an increased ability to serve their customers and, more importantly, are ultimately key to organizational survival in the modern world. This session will explain these technologies in terms of what they mean to your business and how they fit within larger trends in the industry.
[1] Tech Go-to-Market: How to win with DevOps buyers, May 15, 2015; Gartner
Back at the end of April I gave an internal presentation laying out a high level overview of the container and container management space. I pulled this together using public info.
Needless to say, big things have happened since I created it, most notably the announcement of the Open Container Project. That being said, I feel it still offers a good general feel for the players and how they fit together.
Extra-credit reading
Announced today under the Linux Foundation banner, the Open Container Project has the backing of the major forces in cloud and containers, including Docker and appc. – ZDnet
New Open Container Project Helps Define the Future Data Center – Jim Zemlin’s blog, the Linux Foundation
Last but not least, here is the final video from DevOps Days Austin featuring the one and only John Willis, aka Botchagalupe. John gave the closing keynote using an intriguing comparison to Jared Diamond’s Guns, Germs, and Steel.
Take a listen:
Some of the ground John covers:
John’s DevOps background
From Socketplane founder to Docker employee
The convergence of data gravity, containers and microservices – The new guns, germs and steel
How John got involved with the Docker folks back in the dotCloud days.
This morning a group of us here at Dell met with Ben Golub, Jerome Petazzoni and Nick Stinemates of dotCloud, the company behind the wildly popular open source project, Docker, “the Linux container engine.” They came to sample the great barbecue and to chat about how Docker might potentially work with Project Sputnik, the Crowbar Project and a few other efforts.
Docker, which went live in March, already has 150 contributors, 60,000+ downloads and thousands of applications containerized and uploaded to their registry. Given that the company only has 18 employees, quite a bit of this work has been done by the passionate community that has formed in the first six months.
Overview and Tech talk
I did two interviews with the gents from Docker: a higher-level overview with Ben, their CEO, and a more technical talk with SRE manager Jerome and Nick, their sales and deployment engineer. Enjoy!
Some of the ground Ben covers:
What is Docker?
How it developed out of dotCloud’s PaaS efforts
How Ben got involved with the project and his background
What are dotCloud’s plans for Docker and who is integrating with it?
Some of the ground Jerome and Nick cover:
How long they’ve been involved and what they focus on
How Docker works with LXC and how it might work without LXC
Ubuntu is recommended but all you need is AUFS support
In next release they plan to offer official support beyond Ubuntu
Holy DevOps, Batman: Docker has something to offer devs, QA engineers, continuous integration and sys ops.
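For readers who have not tried it yet, here is what the basic workflow looked like in Docker’s early days (image and repo names are placeholders; the CLI has evolved since):

    # pull a base image and start an interactive container
    docker pull ubuntu
    docker run -i -t ubuntu /bin/bash

    # after making changes inside, snapshot and share the result
    docker ps -a                        # find the container's ID
    docker commit <container-id> myuser/myimage
    docker push myuser/myimage          # upload to the public registry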
Last week at VMworld, Dell held a Super session where we debuted a video walking through our Modular Data Center (MDC). The group that I belong to, Data Center Solutions (DCS), created the MDC as a custom solution addressing the specific needs of a few of our big strategic customers.
(As background, the DCS group has been acting as a custom tailor to the “internet superstars” for over three years and we address customers’ needs by focusing on innovation from the individual node all the way through the data center itself.)
Don’t box me in
In the video you’ll notice that gone is the forced shipping container form factor and in its place, as the name implies, is a more efficient modular design that lets you mix and match components like Legos.
Take a look below as Ty Schmitt, the lead architect for modular infrastructure, literally walks you through the concept and gives you his insight behind the design:
[Spoiler Alert!] Some of the points Ty touches on:
A Module takes up half the space of a traditional data center
Clip on modules let you add capacity as you grow
Each module holds 6-12 racks, or 2,500 servers, which you can access through a central hallway
The modules come pre-integrated, pre-configured and pre-tested
With a modular data center you get a single point for power, a single point for IT and a single point for cooling, as opposed to the thousands of points you’d normally get