April 26, 2016
Yesterday at the OpenStack Summit here in Austin I caught a few of the sessions in the track that Canonical was hosting. One of the sessions dealt with Canonical’s LXD and where it fits into the whole virtualization/container space.
The talk was given by Dustin Kirkland and after he had finished, I grabbed him to explain the basics of LXD and the landscape it fits within.
Have a listen
Some of the ground Dustin covers:
- What is LXD and how is it different from virtual machines and containers
- How LXD acts like a hypervisor but is fundamentally a container
- Application containers vs Machine containers
- Application containers like Docker host a single process on a filesystem
- Machine containers from LXD boot a full OS on their filesystems
- Where do microservices fit in this model
- How Docker and LXD are complementary
- Ubuntu 16.04 LTS ships with LXD
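The application-vs-machine distinction Dustin draws maps directly onto the two CLIs. Here’s a minimal sketch (the container name `demo` is my own, and it assumes the LXD and Docker daemons are installed and running):

```shell
# Machine container: LXD boots a full Ubuntu userspace with its own init.
lxc launch ubuntu:16.04 demo          # a whole OS comes up inside the container
lxc exec demo -- ps -p 1 -o comm=     # PID 1 is an init system, not your app

# Application container: Docker runs a single process on a filesystem.
docker run --rm busybox ps            # the process list is just that one process
```

One common illustration of the complementarity Dustin mentions is running the Docker daemon itself inside an LXD machine container.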
Pau for now…
January 8, 2012
Here is the last in a series of three short videos around cloud computing put together by Dell and Intel. As I mentioned in the last two entries, these videos are part of larger series around key topics like IT reinvention, the consumerization of IT, social media etc.
This last video features me along with Dell’s former CIO Robin Johnson; Praveen Asthana, VP of Dell’s Enterprise Solutions and Strategy; and Donna Troy, VP and GM of Solutions Marketing and Sales at Dell.
Some of the ground we cover:
- How we define cloud computing
- How quickly can you evolve to cloud?
- How do you balance your current environment with cloud
- Starting your cloud building from a basis of virtualization
Extra credit reading
Pau for now…
February 4, 2011
Dell’s Data Center Solutions (DCS) group has some pretty colorful folks. One of the more interesting members is Jimmy Pike, the man IDG News’ James Niccolai referred to as the “Willy Wonka of servers.” Jimmy, the self-proclaimed “chief geek” of the DCS team, is the consummate tinkerer, whether that involves constructing a data center in a briefcase or thinking of new ways to drive down data center power consumption by leveraging alternative forms of energy.
Last spring I visited Jimmy’s home to check out what he was working on in his “free time.” Here’s what I saw (he keeps telling me he’s got much cooler stuff since I shot this, so I may have to do a “geekquel”):
Some of the things Jimmy shows us:
- The low-power chips he’s playing with
- His experimentation with user interfaces and superman glasses
- His mini rack of servers
- The various forms of desktop virtualization and OS’s he uses
- Laying out and designing boards by mail
- His micro recording studio
Pau for now…
January 17, 2011
Earlier this month an interview I did with Robert Duffner, Director of Product Management for Windows Azure, went live on the Windows Azure team blog. Robert asked me a variety of questions about cloud security, how I see the cloud evolving, the pitfalls of the cloud, where Dell plays, etc.
I was pleasantly surprised to see that my ramblings actually turned out coherent 🙂 Here is a section from the interview (you can check out the whole piece here):
Cloud computing is a very exciting place to be right now, whether you’re a customer, an IT organization, or a vendor. As I mentioned before, we are in the very early days of this technology, and we’re going to see a lot happening going forward.
In much the same way that we really focused on distinctions between Internet, intranet, and extranet in the early days of those technologies, there is perhaps an artificial level of distinction between virtualization, private cloud, and public cloud. As we move forward, these differences are going to melt away, to a large extent.
That doesn’t mean that we’re not going to still have private cloud or public cloud, but we will think of them as less distinct from one another. It’s similar to the way that today, we keep certain things inside our firewalls on the Internet, but we don’t make a huge deal of it or regard those resources inside or outside as being all that distinct from each other.
I think that in general, as the principles of cloud grab hold, the whole concept of cloud computing as a separate and distinct entity is going to go away, and it will just become computing as we know it.
Pau for now…
September 10, 2010
Lightweight servers have been gathering steam recently. Targeted at focused markets like hosting and Web 2.0, they feature the old-school architecture of one CPU per server, running one OS/application on that server. The new twist is that up to 12 of these servers can be packed into a single 3U enclosure.
Below, Dell Data Center Solutions chief architect Jimmy Pike takes us through a short whiteboard discussion on how Moore’s law has driven us to multi-core architectures and virtualization and how, in the case of very focused applications, that same law is bringing us back to the future.
Some of the points Jimmy makes:
- Given Moore’s law, it’s impractical to keep driving clock rates higher and higher. This has given rise to multi-core architectures.
- The native demand of applications on servers hasn’t kept pace with Moore’s law. This has resulted in virtualization, which in effect lets you run multiple servers on a single system.
- That same law is also driving us in the opposite direction, toward lightweight servers, which feature a simple one-server/one-OS architecture in a very energy-efficient, cost-effective package targeted at focused applications.
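For a sense of the density that 12-servers-per-3U figure implies, here’s a quick back-of-the-envelope sketch (the 42U rack height is my own assumption, not a number from the talk):

```shell
# Back-of-the-envelope density for the lightweight-server enclosure above.
# Assumption: a standard 42U rack fully populated with 3U enclosures.
servers_per_enclosure=12
enclosure_height_u=3
rack_height_u=42

enclosures_per_rack=$(( rack_height_u / enclosure_height_u ))
servers_per_rack=$(( enclosures_per_rack * servers_per_enclosure ))

echo "${enclosures_per_rack} enclosures -> ${servers_per_rack} servers per rack"
# prints: 14 enclosures -> 168 servers per rack
```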
Extra-credit reading (more Jimmy Pike):
Pau for now…
March 22, 2010
Whether you believe in the Cloud or not, it’s coming. That being said, it’s not a phenomenon that will fill the skies of IT departments tomorrow; rather, it is starting out as another tool in IT’s bag of tricks. As time passes, cloud computing will become an ever greater part of the portfolio of compute models that IT departments manage, sitting alongside traditional computing and virtualization.
Cloud Computing Today
If you were to graph the distribution of compute models being used today by IT departments in large enterprises, it would look something like the chart below. Today, traditional computing and virtualization are where most of the distribution lies with a little bit of flirting with the Public Cloud in the case of SaaS applications for areas like HR, CRM, email etc. Private cloud is presently negligible.
Over the next three to five years
Over the next three to five years the above distribution will flatten out and shift to the right and will resemble the graph below. Private cloud will represent the largest compute model utilized but it will be equally flanked by virtualization and public cloud. You’ll notice there will still be a decent amount of resources that remain in the traditional compute bucket representing applications that are not worth the effort of rewriting or converting to a cloud platform.
Evolutionary Vs. Revolutionary
One of the things to note with this new distribution is that the lines between Virtualization and Private Cloud will start to blur (there will also be a blurring between Private and Public clouds as hybrid clouds become more of a reality in the future, but that’s another story for another time). There are two ways to go about setting up private clouds, evolutionary and revolutionary.
Tune in tomorrow and learn more about these two approaches and how they differ. 🙂
Pau for now…
January 7, 2010
Here is the second in my three-part series on virtualization and the cloud. Today’s entry focuses on the 800-pound gorilla of the virtualization space, VMware.
At last month’s Gartner Data Center conference, right after his standing-room-only presentation, I grabbed some time with VMware’s Mr. Cloud, Dan Chu. Hear what he had to say:
Some of the topics Dan tackles:
- What VMware is seeing customers actually doing to take advantage of the cloud today both with regards to public and private clouds.
- Some polling data he collected during his talk based on the ~300 folks who attended: 90-95% were virtualizing, 15% had an active private cloud project, 5-10% had a public cloud project. (This is pretty representative of what Dan’s generally seeing.)
- The three phases of cloud:
- Phase I: Standardizing and virtualizing an environment.
- Phase II: Adopting private cloud from a management standpoint: getting to self-service and automation in provisioning a new service, collapsing the time it takes to get a new image out to an end user or developer from weeks to minutes, and implementing chargeback, dynamic capacity planning, and management.
- Phase III: Thinking about or planning how to leverage the public cloud in a fully compatible way.
- A short history of VMware: how they’ve moved from desktop and server virtualization to VM management and optimization to enabling their platform for private clouds and public cloud providers.
- Their “recent” acquisition of SpringSource and how it fits in.
Stay tuned next time for a summary of Gartner’s virtualization presentation from their data center conference.
Pau for now…
January 4, 2010
Happy New Year to all! For the first week of this new year I’m going to focus on virtualization and the cloud.
Kicking off this mini-series is an interview I did last month at the Gartner DataCenter conference with David Greschler, director of virtualization strategy at Microsoft. I caught up with David right after his talk at the conference.
Some of the topics David tackles:
- The ability to treat IT as a service. Before virtualization, specific workloads were tied to specific devices. Thanks to virtualization you can create pooled resources which is the beginning of IT as a service.
- Microsoft’s Dynamic Data Center Toolkit: This tool overlays Hyper-V and System Center (their management tool) and lets you view and manage your own datacenter as a pool of compute power. It is a step toward the private cloud and can also be used by hosting providers. It will also allow workloads to move between public and private clouds.
- Microsoft is focusing on giving you knowledge at the app level. System Center tells you what’s going on inside the application, not just at the hypervisor level.
- Windows Azure: a large-scale cloud that you can build apps for and host them in.
- The ability to move workloads into Azure over time.
- Image-based management: taking the technology of the desktop-targeted App-V and applying it to the server. This will let you encapsulate apps and move them from one OS to another without having to reinstall them. Instead of thousands and thousands of virtualized images to manage and monitor, you will have a few golden images of these VMs and will be able to simply put workloads in and take them out.
Extra credit viewing:
Stay tuned next time for Dan Chu of VMware to hear what they are up to.
Pau for now…
December 11, 2009
Last month at Interop/Web 2.0 I was able to drag Citrix’s Roger Klorese away from booth duty for an interview. Roger is a Sr. Director at Citrix who works on XenServer and the Essentials product family. Here is what he had to say:
Some of the topics Roger tackles:
- What Roger has been focusing on this year: free XenServer. Launching the offering (there have been 200K downloads this year) and then bringing more features into it; what comes with it for free and what add-ons you get through the Essentials family.
- In the networking space, Citrix announced a version of their NetScaler app delivery product as a virtual appliance.
- Managing “OPVs” (other people’s VM’s)
- What Roger is most excited about:
- Growing the datacenter into the cloud: Xen.org recently released the Xen Cloud Platform, a full cloud distro with a management stack based on open-sourcing the XenServer stack.
- Early next year they are releasing XenClient, a bare-metal (type 1) client hypervisor.
Pau for now…
November 16, 2009
A couple of weeks ago on the show floor of Cloud Computing Expo in Santa Clara I ran into Adam Hawley, Director of product management for Oracle VM. When Adam finished his stint in the Oracle booth he sat down with me to talk about what was going on at Oracle in the world of virtualization and the cloud.
Some of the topics Adam tackles:
- Oracle VM, Oracle’s server virtualization and management platform, is based on Xen, but everything layered on top of it is all Oracle.
- The Virtual Iron acquisition which is in the process of being incorporated within the Oracle portfolio and is slated for release in 2010.
- The Cloud as a higher level of automation on top of virtualization, compared to what traditional virtualization has provided.
- Where Oracle will play in the cloud space (hint: think private).
- The Oracle assembly builder that Adam was showing off at the show.
- Given Larry’s views on cloud computing, is “cloud” a dirty word at Oracle?
Pau for now…
September 14, 2009
Last but not least in my series of videos from last month’s Cloud World/Open Source World, I present to you Ken Oestreich, VP of Product Marketing at Egenera. I grabbed some time with Ken to learn about Egenera, the cloud, and how they’re working with Dell.
Some of the topics that Ken tackles:
- While a hypervisor abstracts software, Egenera’s PAN manager abstracts the “plumbing,” e.g. NICs, switches, host bus adapter cards, etc.
- PAN manager allows you to consolidate networks, fail-over entire machines and, in the case of disaster recovery, recover and reproduce entire compute environments.
- Egenera is working with Dell in the form of the Dell PAN system to provide agility in your infrastructure.
- This Infrastructure as a Service system can be used inside or outside your firewall.
- What developments Ken is most excited about in the upcoming year.
Pau for now…