Dell’s Data Center Solutions group (DCS) is no longer a toddler. Over the weekend we turned four!
Four years ago, on March 27, 2007, Dell announced the formation of the Data Center Solutions group, a special crack team designed to serve the needs of hyperscale customers. That day eWeek covered the news in its article Dell Takes On Data Centers with New Services Unit, and within the first week Forrest Norrod, founding DCS GM and currently the GM of Dell's server platform division, spelled out to the world our goals and mission (re-watching the video, it's amazing to see how true to that mission we have been):
If you're not familiar with the DCS story, here is how it all began. Four years ago Dell's Data Center Solutions team was formed to directly address a new segment that had begun developing in the marketplace: the "hyperscale" segment. This segment was characterized by customers who were deploying 1,000s if not 10,000s of servers at a time.
These customers saw their data center as their factory and technology as a competitive weapon. Along with the huge scale they were deploying at, they had a unique architecture and approach: specifically, resiliency and availability were built into the software rather than the hardware. As a result they were looking for system designs that focused less on redundancy and availability and more on TCO, density and energy efficiency. DCS was formed to address these needs.
Working directly with a small group of customers
From the very beginning DCS took the Dell direct customer model and drove it even closer to the customer. DCS architects and engineers sit down with the customer and, before talking about system specs, learn about the customer's environment, the problem they are looking to solve and the type of application(s) they will be running. From there the DCS team designs and creates a system to match the customer's needs.
In addition to major internet players, DCS's customers include financial services organizations, national government agencies, universities, laboratory environments and energy producers. Given the extremely high-touch nature of this segment, the DCS group handles only 20-30 customers worldwide, but these customers, such as Facebook, Lawrence Livermore National Laboratory and Microsoft Azure, are buying at such volumes that the system numbers are ginormous.
Expanding to the “next 1000”
Ironically, because it was so high-touch, Dell's scale-out business didn't scale beyond our group of 20-30 custom-design customers. This meant considerable pent-up demand from organizations one tier below. After thinking about it for a while we came up with a different model to address their needs. Leveraging the knowledge and experience we had gained working with the largest hyperscale players, a year ago we launched a portfolio of specialized products and solutions to address "the next 1000."
The foundation for this portfolio is a line of specialized PowerEdge C systems derived from the custom systems we have been designing for the "biggest of the big." Along with these systems we have launched a set of complete solutions that we have put together with the help of key partners:
Dell Cloud Solution for Web Applications: A turnkey platform-as-a-service offering targeted at IT service providers, hosting companies and telcos. This private cloud offering combines Dell’s specialized cloud servers with fully integrated software from Joyent.
Dell Cloud Solution for Data Analytics: A combination of Dell’s PowerEdge C servers with Aster Data’s nCluster, a massively parallel processing database with an integrated analytics engine.
Dell | Canonical Enterprise Cloud, Standard Edition: A "cloud-in-a-box" that allows the setting up of affordable Infrastructure-as-a-Service (IaaS)-style private clouds in computer labs or data centers.
OpenStack: We are working with Rackspace to deliver an OpenStack solution later this year. OpenStack is the open source cloud platform built on top of code donated by Rackspace and NASA and is now being further developed by the community.
These first four years have been a wild ride. Here’s hoping the next four will be just as crazy!
This week, outside of Frankfurt, WorldHostingDays is taking place. A whole delegation of folks from the Data Center Solutions group is there to support the announcement of our new microserver line. A lot of our key partners are there as well. One such partner is AMD.
Earlier today, AMD director of product marketing John Fruehe held a session entitled "Core Scalability in a Cloud Environment." Above is a three-minute section where John talks about the three AMD-based systems that are part of the PowerEdge C line:
The PowerEdge C5125 microserver, which we announced yesterday.
The PowerEdge C6105, optimized for performance per watt per dollar.
The 96-core PowerEdge C6145, geared to customers tackling huge and complex data sets.
Monday I wrote about the announcement of our mega-beefy, 96-core PowerEdge C6145 server, specifically geared to customers solving big problems involving huge and complex data sets in mapping, visualization, simulations and rendering.
At the other end of the spectrum, however, are customers, such as those offering low-end dedicated hosting, who are looking for systems with just enough processing and storage to handle straightforward, focused applications such as serving web pages or streaming video. These "right-sized" systems are referred to as "micro" or "lightweight" servers.
Take a listen to Data Center Solutions marketing director Drew Schulke below as he explains the origin of the microserver and walks you through our second generation offering in this space.
Some of the ground Drew covers:
How Dell got into the microserver market 2-3 years ago.
How the progression of Moore's Law caused processing power to outstrip the needs of many applications.
A walk-through of our second-generation microserver, which packs 12 single-socket servers into one 3U enclosure.
We will continue making noise in this space. Be sure to tune in next time, when our topic will be a mini "case study" on Dell's first-generation microserver deployed at a large hoster in France.
Last November, Dell announced the Dell Cloud Solution for Web Applications. This turnkey offering is composed of Dell systems and Joyent software, along with a reference architecture, all supported by Dell services. This solution enables a private Platform as a Service (PaaS) environment to support the development and testing of applications written in languages such as PHP, Perl, Python, Ruby and Java.
This solution is designed for hosters and telcos who are looking to provide public PaaS offerings. An example is Uniserve, a Canadian Internet services provider. Uniserve has adopted the Dell Cloud Solution for Web Applications to offer on-demand access to a high-performance Internet application and consumer delivery platform, supporting everything from customers developing iPhone apps, to commercial storefronts, to hosting and delivering Software-as-a-Service.
Check out the short video above, where Dell Data Center Solutions architect Brian Harris provides a high-level overview of the Dell Cloud Solution for Web Applications architecture.
A while ago, as a follow-up to our white paper "Laying the Groundwork for Private and Public Clouds," Dell and Intel worked with CIO magazine to put together a Tech Dossier that picks up where our previous paper left off.
Here are a few paragraphs from the dossier to whet your appetite:
For many enterprises, building a private cloud is simply the next step on an evolutionary path that began with data center consolidation. When a company has established a strong virtualization underpinning and is working with traditional enterprise applications, an evolutionary approach to the private cloud makes perfect sense…
In some instances, however, taking what Dell refers to as a “revolutionary” approach to private clouds will be more efficient and much more appropriate. The revolutionary approach makes use of “new world” applications that are written for and deployed in the cloud. These cloud-native applications are designed from the ground up for greater scalability and use across a multitude of servers…
The revolutionary approach requires a new way of thinking about the cloud, but one that Van Mondfrans says enterprise IT executives should undertake sooner rather than later. “This is where the application paradigm is going,” he says…
You can access the document here (no registration required :).
Earlier this month an interview I did with Robert Duffner, Director of Product Management for Windows Azure, went live on the Windows Azure team blog. Robert asked me a variety of questions about cloud security, how I see the cloud evolving, the pitfalls of the cloud, where Dell plays, etc.
I was pleasantly surprised to see that my ramblings actually turned out coherent 🙂 Here is a section from the interview (you can check out the whole piece here):
Cloud computing is a very exciting place to be right now, whether you're a customer, an IT organization, or a vendor. As I mentioned before, we are in the very early days of this technology, and we're going to see a lot happening going forward.
In much the same way that we really focused on distinctions between Internet, intranet, and extranet in the early days of those technologies, there is perhaps an artificial level of distinction between virtualization, private cloud, and public cloud. As we move forward, these differences are going to melt away, to a large extent.
That doesn't mean that we're not going to still have private cloud or public cloud, but we will think of them as less distinct from one another. It's similar to the way that today, we keep certain things inside our firewalls and other things out on the Internet, but we don't make a huge deal of it or regard those resources, inside or outside, as being all that distinct from each other.
I think that in general, as the principles of cloud grab hold, the whole concept of cloud computing as a separate and distinct entity is going to go away, and it will just become computing as we know it.
Dell's Data Center Solutions (DCS) group focuses on customers operating huge scaled-out environments. Given the number of systems deployed in these environments, we are always looking for ways to take energy out of our systems. Half a watt here, half a watt there means big energy savings when multiplied across a hyperscale environment, and translates into lower costs for the environment and for our customers' operating budgets.
Recently we have adopted Samsung's low-voltage DIMMs ("Green DDR3") in our efforts to drive efficiencies. Take a listen to DCS's Executive Director of Engineering and Architecture, Reuben Martinez, in the video below as he walks you through how a seemingly small decrease in DIMM voltage can translate into millions of dollars of savings in hyperscale environments.
Some of the ground Reuben covers:
How much energy US data centers consume and how this has grown.
What is happening to the cost of energy (hint: it's going up).
How our PowerEdge C6105 is designed for power efficiency, including utilizing Samsung's low-voltage memory. (BTW, Samsung's Green DDR3 DIMMs are also available in our C1100, C2100 and C6100.)
The amount of power consumed by memory compared to the CPU (you may be surprised).
[2:35] The TCO calculation that shows the savings that low voltage DIMMs can provide in a typical data center environment.
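To make the "half a watt here, half a watt there" arithmetic concrete, here is a minimal back-of-the-envelope sketch of the kind of calculation Reuben walks through. Every input (watts saved per DIMM, DIMM counts, fleet size, PUE, energy price) is a hypothetical placeholder of my own, not a figure from the video; plug in your own numbers.

```python
# Back-of-the-envelope estimate of data-center savings from low-voltage DIMMs.
# All inputs below are hypothetical placeholders; substitute your own numbers.

WATTS_SAVED_PER_DIMM = 0.5      # assumed savings of a 1.35V DIMM vs a 1.5V DIMM
DIMMS_PER_SERVER = 12           # assumed memory configuration
SERVERS = 20_000                # assumed hyperscale fleet size
PUE = 1.5                       # assumed power usage effectiveness (cooling overhead)
PRICE_PER_KWH = 0.10            # assumed energy price in dollars
HOURS_PER_YEAR = 24 * 365

def annual_savings(watts_per_dimm, dimms, servers, pue, price, hours):
    """Dollars saved per year from lower DIMM power, including facility overhead."""
    it_watts = watts_per_dimm * dimms * servers      # power removed at the server
    facility_watts = it_watts * pue                  # power removed at the meter
    kwh = facility_watts / 1000 * hours
    return kwh * price

if __name__ == "__main__":
    savings = annual_savings(WATTS_SAVED_PER_DIMM, DIMMS_PER_SERVER, SERVERS,
                             PUE, PRICE_PER_KWH, HOURS_PER_YEAR)
    print(f"Estimated annual savings: ${savings:,.0f}")
```

The point isn't the specific total; it's that a per-DIMM saving you would never notice on one box compounds across tens of thousands of servers and years of operation.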
One of the featured speakers during the kick off of the OpenStack design summit yesterday was NASA CTO of IT, Chris Kemp. OpenStack is an open source cloud platform and the compute side of the project is based on code from NASA’s Nebula cloud.
I got some time with Chris and learned about NASA’s involvement in the project:
Some of the ground Chris covers:
Nebula and the cloud computing platform code base
NASA’s huge data needs and what they do with the data
Serendipity: NASA’s cloud engine + Rackspace’s file system engine
How NASA is working with the project: a two-way street
Yesterday morning I made the drive down to San Antonio for OpenStack's second design summit (and the first open to the public). If you're not familiar with OpenStack, it's an open source cloud platform founded on contributed code from Rackspace and NASA's Nebula cloud. The project was kicked off back in July at an inaugural design summit held in Austin.
The project has picked up quite a bit of momentum in its first four months. Attending this week’s 4-day conference are close to 300 people, representing 90 companies, from 12 countries. The event is broken into a business track and design track (where actual design decisions are being made and code is being written).
Powering the Install Fest
For the project Dell has sent down a bunch of PowerEdge C servers, which have been set up upstairs on the 5th floor. OpenStack Compute has been installed on the two racks of servers, which are up and running. Tomorrow, coders will get access to these systems during the install fest, where attendees will each be given a virtual machine on the cloud to test and learn about installing and deploying OpenStack.
I got Bret Piatt, who handles Technical Alliances for OpenStack, to take me on a quick tour of the set-up. Check it out:
Featuring: Bret Piatt, PowerEdge C1100, C2100, C6100 and C6105
Ironically, when Dell, the company that built its success around supply chain management excellence, started the Data Center Solutions group to serve the "biggest of the big," supply chain and procurement were just bit players.
I recently sat down with Chris Thompson, who heads up the DCS supply chain and procurement organization, and learned how this changed and how important his group is in meeting the needs of this unique customer set.
Some of the topics Chris tackles:
How his group helps customers get to revenue faster.
To what extent the DCS supply chain org is independent of Dell “normal” processes and procedures and to what extent it leverages them.
How Chris's group has affected traditional Dell supply chain practices.
In my last entry I talked about how Steve Perlman, CEO and founder of OnLive, joined the recent press round table we held in New York. OnLive is a cloud-based gaming company that launched earlier this year and whose servers were custom built by Dell's Data Center Solutions (DCS) group.
To give you a bit more insight into how the two companies worked together, here is a short video with Bruce Grove, OnLive’s director of strategic relations talking about the relationship between Dell and OnLive.
Some of the ground Bruce covers:
The value, as a start-up, of working with someone who knows supply chain and logistics and how to build tons of servers.
Working together as a team to design the servers (engineering teams on both sides as well as manufacturing teams).
Timothy Prickett Morgan of everyone’s favorite vulture-branded media site The Register attended a round table discussion we held a few weeks ago in New York. His piece from that event, which was focused around the cloud, was posted yesterday.
You should check out the whole article but here are some snippets to whet your appetite:
What DCS is all about
For the past several years – and some of them not particularly good ones – Dell’s Data Center Services (DCS) bespoke iron-making forge down in Round Rock, Texas, has been a particularly bright spot in the company’s enterprise business.
The unit has several hundred employees, who craft and build custom server kit for these picky Webby shops, where power and cooling issues actually matter more than raw performance. The high availability features necessary to keep applications running are in the software, so you can rip enterprise-class server features out of the boxes – they are like legs on a snake.
How we’re working with web-based gaming company OnLive
“These guys took a bet on Facebook early, and they benefited from that,” says Perlman [OnLive Founder and CEO]. “And now they are making a bet on us.”
OnLive allows gamers to play popular video games on their PCs remotely through a Web browser and soon on their TVs with a special (and cheap) HDMI and network adapter. The games are actually running back in OnLive’s data centers, and the secret sauce that Perlman has been working on to make console games work over the Internet and inside of a Web browser is what he called “error concealment”.
DCS had to create a custom server to integrate their video compression board into the machine, as well as pack in some high-end graphics cards to drive the games. Power and cooling are big issues. And no, you can’t see the servers. It’s a secret.
Last week a couple of us went down to San Antonio to help represent the OpenStack project at Rackspace’s partner summit. While there I met up with the VAR Guy. Mr. Guy got me chatting about Dell’s Data Center Solutions group, where we’ve been and where we’re going. Below is the resulting video he put together featuring myself and San Antonio’s greenery. (See the original article this came from).
Some of the topics I tackle:
How Dell’s Data Center Solutions Group is designing servers for high-end cloud computing
How Dell is integrating hardware with software in cloud servers
Coming soon: Dell Cloud Solution for Web Applications/Leveraging Joyent‘s software
One of the key ingredients for the success of any open source project is a strong community manager. Coming on board to fill that role for the not-quite three-month-old OpenStack project is Stephen Spector. (If you’re not familiar with OpenStack, it’s an open source cloud platform).
Stephen made his first public appearance in his new role today at the Rackspace partner summit in San Antonio. I was able to catch Stephen first thing this morning before the summit kicked off.
Some of the ground Stephen covers:
His background: 14 years at Citrix, where he initially ran developer alliance programs and spent the last 3 years running the Xen.org community.
Why Stephen joined OpenStack (he jumped at the chance to build a community from scratch).
He sees his role as that of a communication conduit
One of his first tasks is to find out who makes up the community, e.g. developers, users, students, researchers, partners.
He's focused on making events like next month's design summit successful, as well as on the importance of globalization.
A couple of days ago Bret Piatt, who handles Technical Alliances for OpenStack, came up to Austin for further discussions with our team's software engineers about OpenStack. If you're not familiar with OpenStack, it is an open source cloud platform founded on contributed code from Rackspace and NASA's Nebula cloud.
The project was kicked off a couple of months ago at an inaugural design summit held here in Austin. The summit drew over 25 companies from around the world, including Dell, to give input on the project and collectively map out the design for the project’s two main efforts, Cloud Compute and Object Storage.
Since the summit, and the project’s subsequent announcement the following week at the OSCON Cloud Summit, the community has been digging in. The first object storage code release will be available this month and the initial compute release, dubbed the “Austin” release, is slated for October 21. Additionally, the second OpenStack Design Summit has been set for November 9-12 in San Antonio, Texas, and is open to the public.
OpenStack visits Dell
During Bret’s visit to Dell he met with a bunch of folks including two of our software architects, Greg Althaus and Rob Hirschfeld. The three talked about how things were going with the project since the summit as well as specific ways in which Dell can contribute to the OpenStack project.
Below you can see where I crashed their whiteboard session and made them tell me what they were doing. I then followed them, camera in hand, down to the lab, where Greg and Rob showed Bret the system that we have targeted for running OpenStack.
Some of the topics (L -> R) Bret, Greg and Rob touch on:
Bret: Getting ready for the object storage release in September and compute in October. Looking to get the right hardware spec'd out so that people can start using the solution once it's released.
Rob: Learning about how the project is coming together since the design summit. Interested in how the three code lines (object storage, NASA compute and Rackspace compute), along with the input gathered at the design summit and from the community, are coming together.
Greg and Rob take Bret to the lab to show him the C6100, which could be a good candidate for OpenStack.
Next step: getting OpenStack into the lab and starting to play with it.
iland is a provider of cloud computing infrastructure with high-availability data centers specifically designed for cloud computing in Boston, Washington D.C., Houston, Los Angeles, Dallas, and London. To stay competitive in the cloud infrastructure business, iland needs to gain as much value as possible from every watt of power and every square foot of data center floor space.
One day over lunch they were introduced to the PowerEdge C6105 and were impressed with the product's density and power efficiency. One thing led to another, and here is iland's CTO Justin Giardina talking about why they are so interested in the PowerEdge C6105.
Some of the points Justin makes:
The primary reasons iland wanted to talk to Dell about the 6105 were density and power draw.
iland can stack 20 servers in one cabinet, and since each C6105 has 4 servers in it, by filling the cabinet with C6105s they can in effect get 80 servers (4X the compute power per cabinet); see the quick arithmetic sketch after this list.
The system only pulls 3 amps from both power supplies.
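Here is that per-cabinet math in a tiny sketch, purely to show how the numbers compound. The chassis-per-cabinet, nodes-per-chassis and amps-per-chassis figures come from Justin's points above; the per-node breakdown is my own illustrative arithmetic, not a quoted spec.

```python
# Per-cabinet density and power math for a rack of PowerEdge C6105 chassis,
# using the figures quoted above (20 chassis per cabinet, 4 nodes per chassis,
# ~3 amps per chassis). The per-node line is illustrative arithmetic only.

CHASSIS_PER_CABINET = 20
NODES_PER_CHASSIS = 4
AMPS_PER_CHASSIS = 3

nodes_per_cabinet = CHASSIS_PER_CABINET * NODES_PER_CHASSIS   # 80 servers
amps_per_cabinet = CHASSIS_PER_CABINET * AMPS_PER_CHASSIS     # ~60 amps
amps_per_node = AMPS_PER_CHASSIS / NODES_PER_CHASSIS          # ~0.75 amps

print(f"Servers per cabinet: {nodes_per_cabinet}")
print(f"Cabinet power draw:  ~{amps_per_cabinet} amps")
print(f"Per-server draw:     ~{amps_per_node} amps")
```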
A couple of weeks ago a group from salesforce.com paid a visit to Dell. Among other things, they came to discuss their new product "Chatter," which Dell has recently launched internally and whose virtues Michael Dell has tweeted. Among the salesforce crew was Sean Whiteley, VP of product marketing. I was able to get some time between meetings with Sean and learn more about Chatter.
Some of the topics Sean tackles:
How Chatter has done since its launch on June 22. What type of traction they’ve seen with customers.
How Chatter differs from other internal social media platforms (hint: not only can you follow people; records, objects and information within your business applications have feeds as well, e.g. you're notified when a presentation changes or a sales deal you're following moves to a different stage).
How the idea of Chatter came up. What role chairman Marc Benioff and his use of Facebook played.
Currently Chatter is tied closely to CRM but it will be tied to other apps going forward.
They believe that many more folks will use Chatter than use salesforce.com.
J.P. Rangaswami is British Telecom's chief scientist and a very interesting fellow. At the Cloud Summit at OSCON last month he delivered a talk on the future of the cloud. I was quite intrigued, so I grabbed him during the break to learn a bit more about a few of the concepts he presented.
Some of the topics that J.P. tackles:
Many of the best utilities we've built (the Internet, the Web, wireless networks, etc.) are built on fundamentally frail, best-effort infrastructures.
In order to gain predictability you sacrifice a lot of the original value, e.g. QWERTY.
He's not against SLAs; he's against throwing away value under the guise of false beliefs about SLAs.
What is the key area in the cloud that needs to be shored up? Interoperability. Security is overplayed; just look at the development of the Web.
We need to concentrate on federation as a mindset: the ability to create services by daisy-chaining select pieces from a variety of sources, versus integrated vertical stacks.
He's worried about SLAs because the things people are doing to stop SLAs from being lightweight are actually things that prevent interoperability.
The week before last, our team decided to divide and conquer to cover two simultaneous events. Half of us headed to Portland, Oregon for OSCON and the other half stayed here in Austin to participate in HostingCon.
The HostingCon keynote
Among those participating in HostingCon was my boss Andy Rhodes, who gave the keynote on Tuesday. Here are the slides Andy delivered:
(If the presentation doesn’t appear above, click here to view it.)
The idea of the keynote was to share with hosters the five major lessons we have learned over the last several years working with a unique set of customers operating at hyperscale. Those five lessons are:
TCO models are not one size fits all. Build a unique model that represents your specific environment and make sure you get every dollar of cost in there. Additionally, make sure that your model is flexible enough to accommodate new info and market changes (see the sketch after this list for one way to structure such a model).
Don’t let the status quo hold you back. Not adapting soon enough and delays in rolling out solutions can cost you dearly.
The most expensive server/storage node is the one that isn’t used (sits idle for 6-12 weeks) or the one you don’t have when you need it most.
Don’t let Bad Code dictate your hardware architecture.
Don’t waste time on “Cloud Washing.” Talk to your customers about real pain points and how to solve them.
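To illustrate that first lesson, here is a minimal sketch of what a "build your own" TCO model might look like. The cost categories and every number are hypothetical assumptions of mine for illustration, not Dell figures; the point is simply that each line item is explicit and the model is easy to re-run when inputs change.

```python
# A minimal, assumption-laden TCO model skeleton: every cost line is explicit,
# and re-running with new inputs is trivial. All values are hypothetical.

from dataclasses import dataclass

@dataclass
class TcoInputs:
    servers: int = 1000               # fleet size
    server_price: float = 3500.0      # capex per server (hypothetical)
    lifespan_years: float = 3.0       # depreciation period
    watts_per_server: float = 250.0   # average draw, IT load only
    pue: float = 1.6                  # facility overhead multiplier
    price_per_kwh: float = 0.10       # energy cost
    admin_cost_per_server: float = 200.0  # yearly ops/staffing allocation
    space_cost_per_server: float = 100.0  # yearly floor-space allocation

def annual_tco(i: TcoInputs) -> dict:
    """Break total yearly cost into explicit line items."""
    capex = i.servers * i.server_price / i.lifespan_years
    kwh = i.servers * i.watts_per_server * i.pue / 1000 * 24 * 365
    energy = kwh * i.price_per_kwh
    admin = i.servers * i.admin_cost_per_server
    space = i.servers * i.space_cost_per_server
    items = {"capex": capex, "energy": energy, "admin": admin, "space": space}
    items["total"] = sum(items.values())
    return items

if __name__ == "__main__":
    for name, dollars in annual_tco(TcoInputs()).items():
        print(f"{name:>6}: ${dollars:,.0f}")
```

The design choice that matters is not the particular categories but that nothing is buried in a single blended "cost per server" number, so new information (a power price change, a denser chassis) slots straight in.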
So, it seems that cloud best practices will help companies reduce their physical infrastructure, which seems a bit counter-intuitive given that Rhodes represents a hardware provider. But it makes sense: IT staff face a never-ending list of projects, and as they drive down costs, their business will grow and they'll be able to increase their IT spend on innovative efforts. "What we're hoping to do is let you do more with less."
At OSCON last week I ran into a compadre from a previous life, Fred Kohout. Fred is now the CMO at UC4, a pure-play software automation company, and he, like me, was in Portland to attend OSCON and the Cloud Summit.
At the summit Fred did to me what I've done to so many others: he got me on the receiving end of a video camera to talk about where Dell plays in the cloud and how we see the cloud evolving.
You can check out Fred's blog from the summit, where he posted my video as well as the interview he did with another former compadre, Peder Ulander, CMO at cloud.com.
Don’t touch that dial
If you’re interested in OSCON be sure to stay tuned. I’ve got four more interviews from the event that I will be posting soon.