Cloud gaming company OnLive on working with Dell DCS

October 24, 2010

In my last entry I talked about how Steve Perlman, CEO and founder of OnLive, joined the recent press round table we had in New York.  OnLive is a cloud-based gaming company that launched earlier this year and whose servers were custom built by Dell’s Data Center Solutions (DCS) group.

To give you a bit more insight into how the two companies worked together, here is a short video with Bruce Grove, OnLive’s director of strategic relations, talking about the relationship between Dell and OnLive.

Some of the ground Bruce covers:

  • The value, as a startup, of working with someone who knows how to handle supply chain and logistics and how to build tons of servers.
  • Working together as a team to design the servers (engineering teams on both sides as well as manufacturing teams).

Extra-credit reading:

Pau for now…


El Reg love: “Dell’s DCS is a big shiny server star”

October 19, 2010

Timothy Prickett Morgan of everyone’s favorite vulture-branded media site The Register attended a round table discussion we held a few weeks ago in New York.  His piece from that event, which focused on the cloud, was posted yesterday.

You should check out the whole article but here are some snippets to whet your appetite:

What DCS is all about

For the past several years – and some of them not particularly good ones – Dell’s Data Center Services (DCS) bespoke iron-making forge down in Round Rock, Texas, has been a particularly bright spot in the company’s enterprise business.

The unit has several hundred employees, who craft and build custom server kit for these picky Webby shops, where power and cooling issues actually matter more than raw performance. The high availability features necessary to keep applications running are in the software, so you can rip enterprise-class server features out of the boxes – they are like legs on a snake.

How we’re working with web-based gaming company OnLive

“These guys took a bet on Facebook early, and they benefited from that,” says Perlman [OnLive Founder and CEO]. “And now they are making a bet on us.”

OnLive allows gamers to play popular video games on their PCs remotely through a Web browser and soon on their TVs with a special (and cheap) HDMI and network adapter. The games are actually running back in OnLive’s data centers, and the secret sauce that Perlman has been working on to make console games work over the Internet and inside of a Web browser is what he called “error concealment”.

DCS had to create a custom server to integrate their video compression board into the machine, as well as pack in some high-end graphics cards to drive the games. Power and cooling are big issues. And no, you can’t see the servers. It’s a secret.

Extra-credit reading:

Pau for now…


Welcome the uber-dense AMD-based Cloud/HPC machine

September 13, 2010

The last couple of Dell Data Center Solutions offerings I’ve talked about, Viking and MDC, have been from the custom side of the house.  Both of these solutions are targeted specifically at a few select large customers.

The subject of today’s post, however, the PowerEdge C6105 server, is available to anyone running a scaled-out environment.  It and the recently available C410x expansion chassis represent the latest additions to the PowerEdge C line that we launched back in March.

Efficiency is its middle name

Designed to maximize performance per watt per dollar, the C6105 is ideal for energy- and budget-constrained scale-out environments.  Targets include scale-out Web 2.0, hosting, and HPC applications where core count and power efficiency are the priority.

Want a closer look? Click below and product manager Steve Croce will give you a quick overview.

Some of the points Steve touches on:

  • The 6105 is very dense: essentially four servers in a 2U chassis
  • The system leverages “shared infrastructure,” e.g. two power supplies for all four servers, four 2U fans to cool them, etc., which saves weight and power and allows for an extremely dense system.
  • The 6105 features AMD’s Opteron 4000 series processors, which are focused on power efficiency
  • It holds 12 3.5-inch disks, three per server.
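For a sense of what that density buys you, here’s a quick back-of-the-envelope sketch (the 42U rack height is my assumption, not something from the post):

```python
# Rough arithmetic on the C6105 density claims above.
servers_per_chassis = 4   # four servers in one 2U chassis
chassis_height_u = 2
rack_u = 42               # assumed standard rack height

# Servers you could fit in a rack of nothing but C6105 chassis
servers_per_rack = (rack_u // chassis_height_u) * servers_per_chassis
print(servers_per_rack)   # 84

# Disks per server, from the 12-disk chassis shared four ways
disks_per_chassis = 12
print(disks_per_chassis // servers_per_chassis)  # 3
```

Compare that with one server per 1U: the shared-infrastructure design doubles the server count in the same rack space.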

Extra-credit reading:

Pau for now…


Enter the Viking: Lightweight server for Hosting and Web 2.0

September 12, 2010

Over the last few years, we have been working with some of the world’s biggest hyperscale data center operators, folks who are deploying thousands to tens of thousands of servers at a time. Within this select group, the theme that keeps coming up over and over is uber-efficiency.

The customers that we’ve been working with in areas like Web 2.0 and hosting require solutions that are not only extremely dense, but that also dramatically drive down costs.  When operating at the scale that these organizations do, ultra-efficiency is not a nice-to-have; it’s one of the most important tools an organization has to drive profitability.

It is with these customers and their need for ultra-efficiency in mind that we designed the newest addition to our custom lightweight server line-up: Viking, designed to “pillage” inefficiency 🙂

Some of the points Ed touches on:

  • Viking can hold eight or 12 server nodes in a 3U chassis
  • Each node is a single-socket server with up to four hard drives and 16GB of RAM, along with two Gigabit Ethernet ports
  • It supports Intel’s Lynnfield or Clarkdale processors, which means 2-4 cores per processor
  • The chassis also features an integrated switch and includes shared power and cooling infrastructure
  • The system is cold-aisle serviceable, which means everything you need to get to is right in the front.

Related Reading:

Pau for now…


Dell’s Modular Data Center — Hello World

September 9, 2010

Last week at VMworld, Dell held a Super session where we debuted a video walking through our Modular Data Center (MDC).  The group that I belong to, Data Center Solutions (DCS), created the MDC as a custom solution addressing the specific needs of a few of our big strategic customers.

(As background, the DCS group has been acting as a custom tailor to the “internet superstars” for over three years and we address customers’ needs by focusing on innovation from the individual node all the way through the data center itself.)

Don’t box me in

In the video you’ll notice that gone is the forced shipping container form factor and in its place, as the name implies, is a more efficient modular design that lets you mix and match components like Legos.

Take a look below as Ty Schmitt, the lead architect for modular infrastructure, literally walks you through the concept and gives you his insight behind the design:

[Spoiler Alert!] Some of the points Ty touches on:

  • A Module takes up half the space of a traditional data center
  • Clip on modules let you add capacity as you grow
  • Each module holds 6-12 racks, or roughly 2,500 servers, which you can access through a central hallway
  • The modules come pre-integrated, pre-configured and pre-tested
  • With a modular data center you get a single point for power, a single point for IT, and a single point for cooling as opposed to the 1000s of points you’d normally get

Extra-credit reading:

Pau for now…


Deep dive tour(s) of the PowerEdge C410x

August 5, 2010

In my last entry I talked about the wild and wacky world of GPGPUs and provided an overview of the PowerEdge C410x expansion chassis that we announced today. For those of you who want to go deeper and see how to set up and install this 3U wonder, you’ll want to take a look at the three videos below.

  1. Card installation: How to install/replace an NVIDIA Tesla M1060 GPU card in the PowerEdge C410x “taco.”
  2. Setting up the system: How to set up the PowerEdge C410x PCIe expansion chassis in a rack, power it up and pull out cards.  Also addresses port numbering.
  3. BMC card mapping: How to map the PCIe cards in the PowerEdge C410x via the BMC web interface.  Also covered are how to monitor power usage, fans and more.

Happy viewing!  (BTW, the C410x’s code name was “titanium,” so when you hear Chris refer to it that way, don’t be thrown.)

Extra-credit reading:

Pau for now…


Say hello to my little friend — packing up to 16 GPGPUs

August 5, 2010

While the name GPGPU, which stands for general-purpose computing on graphics processing units, doesn’t flow lyrically off the tongue, it’s an extremely powerful concept.

What’s the big idea?

The idea behind this sexy five-letter acronym is to take a graphics processing unit (GPU) and expand its use beyond graphics.  Through the “simple” addition of programmable stages and higher-precision arithmetic to the rendering pipelines, the GPU is able to tackle general computing and offload it from the CPU.

So what does this mean, and why should you care?  Well, connecting GPGPUs to servers brings about ginormous increases in performance, helping to make HPC and scaled-out deployments wicked fast.  This works particularly well when you’re talking about modeling, simulation, imaging, signal processing, gaming, etc.  Not only can the addition of GPGPUs boost these workloads by one or two orders of magnitude, but it does so much more cost-effectively than simply adding servers.
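To make the offload idea concrete, here’s a tiny sketch of the “kernel” style of programming that GPGPUs use, in plain Python standing in for a GPU language like CUDA (the function and data are hypothetical, purely for illustration):

```python
# GPGPU programming expresses work as a kernel: one small function
# applied independently to every element of a large data set.
# Hypothetical example: scaling seismic trace samples, the kind of
# embarrassingly parallel math that GPGPUs excel at.

def scale_kernel(sample, gain):
    """The per-element work; a GPU would run one thread per sample."""
    return sample * gain

def run_on_cpu(samples, gain):
    # On a CPU this loop runs serially (or across a handful of cores);
    # a GPU runs the same kernel across hundreds of cores at once,
    # which is where the order-of-magnitude speedups come from.
    return [scale_kernel(s, gain) for s in samples]

print(run_on_cpu([1.0, 2.0, 3.0], gain=2.0))  # [2.0, 4.0, 6.0]
```

The catch, of course, is that the workload has to decompose into independent per-element chunks like this, which is exactly why modeling, simulation, and signal processing map so well to GPGPUs.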

What is Dell’s DCS group offering up?

The Data Center Solutions (DCS) team has an oil and gas customer that is always looking to push the envelope when it comes to getting the most out of GPGPUs in order to deliver seismic mapping results faster.  One of the best ways to do this is by increasing the GPU-to-server ratio.  In the market today, there are a variety of servers that have 1-2 internal GPUs, and there is a PCIe expansion chassis that has 4 GPUs.

What we announced today is the PowerEdge C410x PCIe expansion chassis, the first PCIe expansion chassis to connect 1-8 servers to 1-16 GPUs.  This chassis enables massive parallel calculations separate from the server, adding up to 16.48 teraflops of computational power to a data center.
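A quick back-of-the-envelope check on those numbers (the per-card figure below is derived from the announced aggregate, not taken from a spec sheet):

```python
# Sanity-checking the C410x figures quoted above.
total_tflops = 16.48   # announced aggregate for a fully loaded chassis
max_gpus = 16

# Implied peak per GPU card
print(round(total_tflops / max_gpus, 2))  # 1.03

# With 1-8 servers sharing 1-16 GPUs, the GPU-to-server
# ratio can range from 2:1 up to 16:1 per chassis.
print(max_gpus / 8)   # 2.0
print(max_gpus / 1)   # 16.0
```

That flexibility in the GPU-to-server ratio is the point: a customer like the seismic-mapping shop above can dial up GPUs without buying more servers.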

But enough of my typing; see for yourself in the overview/walk-thru below, starring DCS’s very own Joe Sekel, the architect behind the C410x.

Extra-credit reading:

Pau for now…


Cool Article on the Dell/Azure announcement

July 14, 2010

Monday, as part of Microsoft’s big Azure announcement, we announced that we would both build an Azure appliance, enabling customers to create their own public or private clouds, and develop an Azure public cloud at Dell that our customers can use to develop and deploy next-generation services.

There has been a ton of press surrounding this move by Microsoft to broaden the market for Azure, an effort which also includes similar agreements with HP and Fujitsu. Not surprisingly, my favorite article is one by Charles King that came out yesterday in eCommerce Times — Microsoft’s Windows Azure and Dell: Blue Skies Ahead.

Check out these excerpts and you’ll see why 🙂

Dell is out of the blocks and running with Azure while its rivals are still sorting out their gym bags.

Dell’s cloud efforts tend to be one of the company’s best kept secrets. Some vendors’ continual cloud pronouncements tend to blend into a vuvuzela-like drone, but Dell has simply gotten down to the hard work of building workable commercial cloud and hyper-scale data center solutions during the past three years.

In fact, Dell was the first major vendor to launch a business unit specifically focused on the commercial cloud. By doing so, the company’s Data Center Solutions (DCS) organization has gained invaluable hands-on expertise about the specialized needs of organizations leveraging cloud technologies for applications including hosting, HPC, Web 2.0, gaming, energy, social networking and SaaS. That point likely influenced Microsoft’s 2008 decision to choose Dell as a primary infrastructure partner in developing the Azure platform.

Cool stuff!

Pau for now…


Chocolate covered servers?

June 22, 2010

Is that a heat sink under the Laffy Taffy?

There was a great article about Dell’s Data Center Solutions group that came out a couple of weeks ago.  The article, entitled “Willy Wonka and the Dell Factory,” starts out

If Dell’s cloud server lab is a candy shop for geeks, littered with components and exotic system designs, then Jimmy Pike is the Willy Wonka of servers.

The author then takes the reader on a tour of the top secret Dell Cloud lab explaining,

Like Willy Wonka in the book by Roald Dahl, Pike’s job is to combine ingredients in new and sometimes radical ways. Instead of chocolate and blueberries, his ingredients are chips, fans and motherboards. “Sometimes we bend metal and put boards together with duct tape,” he said…

Servers became “boring” for a while, Pike said, but the requirements of cloud computing have made his job interesting again. “I’ve been doing this for 30 years and I’m having more fun than I’ve ever had,” he said.

And if Jimmy’s having fun, that’s a good thing for everyone. 🙂

Read the whole article here

Want more Jimmy? Check out his data center in a suitcase.

Pau for now…


InfoWeek: Dell DCS unit racking up cloud sales

January 18, 2010

There was a good article in InformationWeek last week featuring our GM, Forrest Norrod.  Forrest talked to Charlie Babcock about the success that Dell’s Data Center Solutions unit has had in the cloud space.

You should check out the whole article but here are a few bits I’ve pulled out for your reading pleasure:

  • Dell’s Data Center Solutions unit has only 20 customers, but would be the third-largest supplier of x86 servers in the U.S. if it were split out from Dell, said Forrest Norrod, the unit’s VP and general manager, in an interview. The only companies ahead of it in shipping Intel or AMD servers would be HP and Dell itself.
  • This foray into cloud computing is somewhat contrary to Dell’s previous pattern of applying sophisticated supply chain logistics to well-worn grooves in the business and consumer computing markets. For one thing, Dell, until recently, hasn’t talked about it. For another, it’s built a business unit that refuses to address the mass market at all.
  • Norrod acknowledged what other Dell officials said as well: the lessons learned in producing servers for the big Internet service providers will be used when enterprise customers knock on Dell’s door to discuss how to build out their private clouds. “Dell will bring the capabilities from DCS to the mass market,” he said.
  • “Interest [in private cloud computing] is spiking through the roof,” [Norrod] said, and he predicted most new enterprise applications will be designed to run in the cloud, whether public or private. Such applications are built with scalability in mind and can take advantage of the ability of the cloud to generate more virtual machines on demand.

Stay tuned for more 🙂

Pau for now…