Welcome the uber-dense AMD-based Cloud/HPC machine

September 13, 2010

The last couple of Dell Data Center Solutions offerings I’ve talked about, Viking and MDC, have been from the custom side of the house.  Both of these solutions are targeted specifically at a few select large customers.

The subject of today’s post, however, the PowerEdge C6105 server, is available to anyone running a scaled-out environment.  It and the recently available C410X expansion chassis represent the latest additions to the PowerEdge C line that we launched back in March.

Efficiency is its middle name

Designed to maximize performance per watt per dollar, the C6105 is ideal for energy- and budget-constrained scale-out environments.  Targets include scale-out Web 2.0, hosting, and HPC applications where core count and power efficiency are the priority.

Want a closer look? Click below and product manager Steve Croce will give you a quick overview.

Some of the points Steve touches on:

  • The 6105 is very dense: essentially four servers in a 2U chassis
  • The system leverages “shared infrastructure,” e.g. two power supplies for all four servers and four fans to cool them, which results in weight and power savings and allows for an extremely dense system.
  • The 6105 features AMD’s Opteron 4000 series processors, which are focused on power efficiency
  • It holds 12 3.5-inch disks, three per server.

Extra-credit reading:

Pau for now…


Enter the Viking: Light Weight server for Hosting and Web 2.0

September 12, 2010

Over the last few years, we have been working with some of the world’s biggest hyperscale data center operators, folks who are deploying thousands to tens of thousands of servers at a time. Within this select group, the theme that keeps coming up over and over is uber-efficiency.

The customers that we’ve been working with in areas like Web 2.0 and hosting require solutions that are not only extremely dense, but also dramatically drive down costs.  When operating at the scale that these organizations do, ultra-efficiency is not a nice-to-have; it’s one of the most important tools the organization has to drive profitability.

It is with these customers and their need for ultra-efficiency in mind that we designed the newest addition to our custom light-weight server line-up: Viking, designed to “pillage” inefficiency 🙂

Some of the points Ed touches on:

  • Viking can hold eight or 12 server nodes in a 3U chassis
  • Each node is a single-socket server with up to four hard drives and 16GB of RAM, along with two Gigabit Ethernet ports
  • It supports Intel’s Lynnfield or Clarkdale processors, which means two to four cores per processor
  • The chassis also features an integrated switch and includes shared power and cooling infrastructure
  • The system is cold-aisle serviceable, which means everything you need to get to is right in the front.

Related Reading:

Pau for now…


Back to the Future with Light Weight Servers

September 10, 2010

Light weight servers have been gathering steam recently.  Targeted at focused markets like hosting and Web 2.0, they feature the old-school architecture of placing one CPU per server and running one OS/application on that server.  The new twist here is that they can pack up to 12 servers into a single 3U enclosure.

Below, Dell Data Center Solutions chief architect Jimmy Pike takes us through a short whiteboard discussion on how Moore’s law has driven us to multi-core architectures and virtualization and how, in the case of very focused applications, that same law is bringing us back to the future.

Some of the points Jimmy makes:

  • Given Moore’s law, it’s implausible to continue to drive higher and higher clock rates.  This has given rise to multi-core architectures.
  • The native demand of applications on servers hasn’t kept pace with Moore’s law.  This has resulted in virtualization, allowing you in effect to run multiple servers on a single system.
  • This same law is also driving us in the opposite direction, to light weight servers, which feature a simple one server/one OS architecture in a very energy-efficient, cost-effective manner targeted at focused applications.

Extra-credit reading (more Jimmy Pike):

Pau for now…


Dell’s Modular Data Center — Hello World

September 9, 2010

Last week at VMworld, Dell held a Super session where we debuted a video walking through our Modular Data Center (MDC).  The group that I belong to, Data Center Solutions (DCS), created the MDC as a custom solution addressing the specific needs of a few of our big strategic customers.

(As background, the DCS group has been acting as a custom tailor to the “internet superstars” for over three years and we address customers’ needs by focusing on innovation from the individual node all the way through the data center itself.)

Don’t box me in

In the video you’ll notice that gone is the forced shipping container form factor and in its place, as the name implies, is a more efficient modular design that lets you mix and match components like Legos.

Take a look below as Ty Schmitt, the lead architect for modular infrastructure, literally walks you through the concept and gives you his insight behind the design:

[Spoiler Alert!] Some of the points Ty touches on:

  • A module takes up half the space of a traditional data center
  • Clip-on modules let you add capacity as you grow
  • There are 6-12 racks per module, or 2,500 servers, which you can access through a central hallway
  • The modules come pre-integrated, pre-configured and pre-tested
  • With a modular data center you get a single point for power, a single point for IT, and a single point for cooling, as opposed to the thousands of points you’d normally get

Extra-credit reading

Pau for now…


Schlepping a 410x across Austin – A documentary

September 9, 2010

Last month we introduced the PowerEdge C410X expansion chassis which, when populated with GPGPUs and attached to a server, brings about ginormous increases in performance in a very cost-effective manner.

A couple of weeks after the system debuted, NextIO, which creates and sells virtualized I/O capabilities, was looking to qualify the machine in their lab located here in Austin.  Wanting to add that personal touch, Franklin Flint and Corbin Moore from our OEM solutions group decided to pack the system in the back of Franklin’s truck and hand-deliver it to Bob Shaw at NextIO.

What you have below is a no-expenses-spared documentary of their journey.  Enjoy! 🙂

Extra credit reading:

Pau for now…


Aster’s Big Data Architecture

September 3, 2010

As I mentioned in my last entry, the week before last I headed out to the TDWI World Conference in San Diego.  Besides talking about Dell’s new BI practice, I was there to represent our data analytics partners, Aster Data and Greenplum.  Both vendors also had booths of their own and I was able to grab some time with Jeff Zeisler, director of pre-sales engineers at Aster Data, to get an overview of their architecture.  Here’s what Jeff had to say:

Some of the ground Jeff covers:

  • Aster is an MPP (massively parallel processing) data warehouse solution.  It runs on a cluster of commodity servers that execute SQL queries in parallel.
  • The three tiers of the architecture:
    • Queen tier – the central location where users submit queries.  It figures out how to split up each query and send the pieces to the next tier.
    • Worker tier – where most of the servers are located, where data is stored (locally on the servers) and where all the heavy lifting for processing occurs.  The MapReduce framework is built into this tier and sits right next to the SQL execution engine.
    • Loader and exporter tier – a separate tier of machines used for bulk-loading new data into the system.
  • How it works: a query gets broken up across all the machines, each executes some portion of it, and the results are brought back together at the Queen and returned to the user (see the toy sketch after this list).
  • New cool things coming up in the next 6 months.
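
To make the Queen/Worker split concrete, here’s a toy Python sketch of the scatter-gather pattern Jeff describes.  It’s purely illustrative (not Aster code): the “queen” partitions a query across workers, each worker scans only its local slice of the data, and the partial results are merged into the final answer.

    from multiprocessing import Pool

    # Each "worker" owns a local slice of the data, just as Aster's
    # worker tier stores data locally on each server.
    data_slices = [list(range(0, 25)), list(range(25, 50)),
                   list(range(50, 75)), list(range(75, 100))]

    def count_evens(local_data):
        # The per-worker portion of the query: scan local data only.
        return sum(1 for x in local_data if x % 2 == 0)

    if __name__ == "__main__":
        with Pool(4) as pool:              # one process per "worker" node
            partials = pool.map(count_evens, data_slices)
        # The "queen" merges the partial results and returns the answer.
        print(sum(partials))               # -> 50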

Extra:

Pau for now…


Dell has a BI practice?!

August 31, 2010

The week before last I headed out to The Data Warehousing Institute’s (TDWI) World Conference in San Diego.  I went out to help support our BI team, who were using the event as the forum to unveil Dell’s new Business Intelligence practice.

We got a bunch of puzzled looks as people approached the Dell booth and didn’t see any hardware.  Once they learned what we were there to announce and why, however, they seemed to buy it (or maybe they just said they got it because they didn’t want to lose out on a chance to win the Dell Mini we were giving away 🙂)

BI veteran Mike Lampa, who has been driving the go-to-market effort behind the practice, acted as our chief spokesperson.  Here’s the message we were delivering, straight from Mike:

Some of the ground Mike covers:

  • Internally, Dell has one of the top five data warehouse implementations in the world, and we use most of the mainstream ETL, BI and database tools that are out there in the market.
  • The Perot acquisition has given us access to a global services delivery engine, and we are marrying this channel with the BI expertise we’ve developed internally.
  • We’ll provide consulting services through our verticals and deliver end-to-end solutions targeted at vertical markets like Education, Health Care and Financial Services.
  • Our goal is to do in services what we did in hardware: be a disruptive force and bring in higher levels of innovation.

Extra Credit Reading

Pau for now…


Chattin ’bout Chatter, The new new thing from salesforce.com

August 30, 2010

A couple of weeks ago a group from salesforce.com paid a visit to Dell.  Among other things, they came to discuss their new product “Chatter,” which Dell has recently launched internally and whose virtues Michael Dell has tweeted.  Among the salesforce crew was Sean Whiteley, VP of product marketing.  I was able to get some time with Sean between meetings and learn more about Chatter.

Some of the topics Sean tackles:

  • How Chatter has done since its launch on June 22.  What type of traction they’ve seen with customers.
  • How Chatter differs from other internal social media platforms (hint: not only can you follow people; records, objects and information within your business applications have feeds as well, e.g. you’re notified when a presentation changes or a sales deal you’re following moves to a different stage; see the toy sketch after this list.)
  • How the idea of Chatter came up. What role chairman Marc Benioff and his use of Facebook played.
  • Currently Chatter is tied closely to CRM but it will be tied to other apps going forward.
  • They believe that many more folks will use Chatter than use salesforce.com.
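
To picture that “records have feeds too” idea, here’s a toy sketch of a record fanning updates out to its followers.  This is my own illustration of the concept, not Salesforce code:

    # Toy illustration of following a record, not Salesforce's implementation.
    class Record:
        def __init__(self, name):
            self.name = name
            self.followers = []

        def follow(self, user):
            self.followers.append(user)

        def update(self, change):
            # Every change to the record lands in each follower's feed.
            for user in self.followers:
                user.feed.append(self.name + ": " + change)

    class User:
        def __init__(self, name):
            self.name = name
            self.feed = []

    alice = User("alice")
    deal = Record("Acme deal")
    deal.follow(alice)
    deal.update("moved to stage: negotiation")
    print(alice.feed)   # ['Acme deal: moved to stage: negotiation']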

Extra Credit reading:

Pau for now…


Sun’s Chief Open Source Officer’s new Gig

August 27, 2010

Last but not least in my series of interviews from last month’s Cloud Summit at OSCON, I present to you my conversation with Simon Phipps.  Simon, who until earlier this year was the Chief Open Source Officer at Sun Microsystems, recently joined the start-up ForgeRock as their chief strategy officer.  Here is what Simon says:

Some of the topics Simon tackles:

  • ForgeRock offers access management and authentication software based on open source code that was developed at Sun.
  • Since the software is open source you can download it for free at ForgeRock.
  • ForgeRock makes its money by selling subscriptions that provide various grades of SLAs.
  • Even though they are only four months old, they already have 20 customers, including the world’s largest gambling exchange.

Extra credit reading:

Pau for now…


Chief Scientist at BT: “In nature there are no SLAs”

August 16, 2010

J.P. Rangaswami is British Telecom’s chief scientist and a very interesting fellow.  At the Cloud Summit at OSCON last month he delivered a talk on the future of the cloud.  I was quite intrigued, so I grabbed him during the break to learn a bit more about a few of the concepts he presented.

Some of the topics that J.P. tackles:

  • Many of the best utilities we’ve built (the Internet, the Web, the wireless environment, etc.) are built on fundamentally frail, best-effort infrastructures.
  • In order to gain predictability you sacrifice a lot of the original value, e.g. QWERTY.
  • He’s not against SLAs; he’s against the throwing away of value under the guise of false beliefs in SLAs.
  • What is the key area in the cloud that needs to be shored up? Interoperability.  Security is overplayed; just look at the development of the Web.
  • We need to concentrate on federation as a mindset: the ability to create services by daisy-chaining select pieces from a variety of sources, versus integrated vertical stacks.
  • He’s worried about SLAs because the things people are doing to stop SLAs from being lightweight are actually things that prevent interoperability.

Pau for now…


OpenStack insights and code

August 11, 2010

I’m now at the mid-point of the videos I shot at OSCON Cloud Summit a few weeks ago.  Today’s feature is Brett Piatt from the OpenStack ecosystem development team who has been working on the project since it kicked off nine months ago.  Brett’s particular area of focus is the partners who have joined and are participating in the effort.  I got some of Brett’s time after the cloud summit ended and this is what he had to say:

Some of the topics Brett tackles:

  • Over 20 companies are participating, from hardware makers to software vendors who help you manage or operate OpenStack (e.g. Cloudkick and RightScale), as well as other service providers (who are actually Rackspace competitors).
  • The Rackspace API and coupling it with feature releases.
  • The project’s near-term goal: to get it into production beyond Rackspace and NASA.
  • What code is available now: OpenStack Object Storage (aka Swift), which powers Rackspace’s Cloud Files (see the sketch after this list).
  • The Nova code = Rackspace cloud software + NASA’s Nebula cloud: a cloud and VM orchestration and systems management package.  It’s mostly written in Python, with some C and C++ as well as a dash of Erlang.  It also has iPad, iPhone and Android apps plus a web control panel — something for the whole family!
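
If you want to poke at Swift yourself, its REST API is plain HTTP.  Here’s a minimal Python sketch; the storage URL and token below are hypothetical placeholders (Swift’s auth service hands you real ones when you authenticate):

    import requests

    # Hypothetical values -- the auth service returns your real storage
    # URL and X-Auth-Token when you log in.
    storage_url = "https://swift.example.com/v1/AUTH_demo"
    headers = {"X-Auth-Token": "AUTH_tk_example"}

    # Swift's two core write operations are both plain HTTP PUTs:
    # one to create a container, one to upload an object into it.
    requests.put(storage_url + "/photos", headers=headers)
    with open("cat.jpg", "rb") as f:
        requests.put(storage_url + "/photos/cat.jpg", headers=headers, data=f)

    # Reading the object back is a GET on the same path.
    obj = requests.get(storage_url + "/photos/cat.jpg", headers=headers)
    print(obj.status_code, len(obj.content))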

Still to come in my OSCON video series:

  • J.P. Rangaswami, Chief Scientist at BT — Nature doesn’t require SLAs
  • Simon Phipps about his new company ForgeRock

Pau for now…


PowerEdge C410x — Whiteboard topology

August 5, 2010

In the last of my GPGPU/PowerEdge C410x trilogy I offer up a whiteboard session with the system’s architect, Joe Sekel.

Some of the topics Joe walks through:

  • How does having remote GPGPUs connected via cable back to a server compare in performance to having the GPGPUs embedded in the server?
  • The topology of the PCI Express x16 (16 lanes per link) plumbing: from the chipset in the host server through to the GPGPU.
  • The data transfer bandwidth that x16 Gen 2 gives you (quick math below).
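
For the curious, here’s the back-of-the-envelope math (my numbers, not from the video): PCIe Gen 2 signals at 5 GT/s per lane and uses 8b/10b encoding, which nets out to roughly 500 MB/s per lane in each direction, so an x16 link gives you about 8 GB/s each way between the host server and the GPGPUs.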

Extra-credit reading:

Pau for now…


Deep dive tour(s) of the PowerEdge C410x

August 5, 2010

In my last entry I talked about the wild and wacky world of GPGPUs and provided an overview of the PowerEdge C410x expansion chassis that we announced today. For those of you who want to go deeper and see how to set up and install this 3U wonder, you’ll want to take a look at the three videos below.

  1. Card installation: How to install/replace an NVIDIA Tesla M1060 GPU card in the PowerEdge C410x taco.
  2. Setting up the system: How to set up the PowerEdge C410x PCIe expansion chassis in a rack, power it up and pull out cards.  Also addresses port numbering.
  3. BMC card mapping: How to map the PCIe cards in the PowerEdge C410x via the BMC web interface.  Also covered are how to monitor power usage, fans and more.

Happy viewing!  (BTW, the C410x’s code name was “titanium,” so when you hear Chris refer to it as that, don’t be thrown.)

Extra-credit reading:

Pau for now…


Say hello to my little friend — packing up to 16 GPGPUs

August 5, 2010

While the name GPGPU, which stands for General-purpose computing on graphics processing units, doesn’t flow lyrically off the tongue, it’s an extremely powerful concept.

What’s the big idea?

The idea behind this sexy five-letter acronym is to take a graphics processing unit (GPU) and expand its use beyond graphics.  Through the “simple” addition of programmable stages and higher-precision arithmetic to the rendering pipelines, the GPU is able to tackle general computing and offload it from the CPU.

So what does this mean and why should you care?  Well, the connection of GPGPUs to servers brings about ginormous increases in performance, helping to make HPC and scaled-out deployments wicked fast.  This works particularly well when you’re talking about modeling, simulation, imaging, signal processing, gaming, etc.  Not only can the addition of GPGPUs boost these processes by one or two orders of magnitude, but it does so much more cost-effectively than simply adding servers.
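
If you’re curious what “general computing on a GPU” actually looks like, here’s a minimal, generic sketch using Python with PyCUDA (my own illustration, not Dell or NVIDIA sample code; it assumes an NVIDIA GPU and the CUDA toolkit are installed).  Each GPU thread handles one array element, which is exactly the kind of data-parallel work GPUs chew through:

    import numpy as np
    import pycuda.autoinit                   # sets up a CUDA context on the GPU
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # A trivial CUDA kernel: each GPU thread scales one array element.
    mod = SourceModule("""
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }
    """)
    scale = mod.get_function("scale")

    n = 1 << 20                              # a million floats
    data = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))

    threads = 256                            # threads per block
    blocks = (n + threads - 1) // threads    # enough blocks to cover n
    scale(data.gpudata, np.float32(2.0), np.int32(n),
          block=(threads, 1, 1), grid=(blocks, 1))

    result = data.get()                      # copy the scaled data back to the CPU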

What is Dell’s DCS group offering up?

The Data Center Solutions (DCS) team has an Oil & Gas customer that is always looking to push the envelope when it comes to getting the most out of GPGPUs in order to deliver seismic mapping results faster.  One of the best ways to do this is by increasing the GPU-to-server ratio.  In the market today, there are a variety of servers that have 1-2 internal GPUs, and there is a PCIe expansion chassis that has 4 GPUs.

What we announced today is the PowerEdge C410x PCIe expansion chassis, the first PCIe expansion chassis to connect 1-8 servers to 1-16 GPUs.  This chassis enables massive parallel calculations separate from the server, adding up to 16.48 teraflops of computational power to a datacenter.
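
(If you do the math on that headline number, 16.48 teraflops spread across 16 fully populated slots comes out to just over one teraflop, presumably single precision, per GPU.)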

But enough of my typing, see for yourself in the overview/walk-thru below starring DCS’s very own Joe Sekel, the architect behind the C410x.

Extra-credit reading

Pau for now…


5 lessons from the Cloud about Efficient Environments

August 2, 2010

The week before last our team decided to divide and conquer to cover two simultaneous events.  Half of us headed to Portland, Oregon for OSCON, and the other half stayed here in Austin to participate in HostingCon.

The HostingCon keynote

Among those participating in HostingCon was my boss Andy Rhodes, who gave the keynote on Tuesday.  Here are the slides Andy delivered:

(If the presentation doesn’t appear above, click here to view it.)

The idea of the keynote was to share with hosters the five major lessons we have learned over the last several years working with a unique set of customers operating at hyperscale.  Those five lessons are:

  1. TCO models are not one-size-fits-all.  Build a unique model that represents your specific environment and make sure that you get every dollar of cost in there.  Additionally, make sure that your model is flexible enough to accommodate new info and market changes.  (For a toy illustration, see the sketch after this list.)
  2. Don’t let the status quo hold you back.  Not adapting soon enough and delays in rolling out solutions can cost you dearly.
  3. The most expensive server/storage node is the one that isn’t used (sits idle for 6-12 weeks) or the one you don’t have when you need it most.
  4. Don’t let Bad Code dictate your hardware architecture.
  5. Don’t waste time on “Cloud Washing.”  Talk to your customers about real pain points and how to solve them.
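
To make lesson one concrete, here’s a toy sketch in Python of the kind of model Andy means.  Every number below is a made-up placeholder, and a real model would carry many more cost lines (space, network, maintenance, software licenses and so on):

    # Toy TCO model -- all figures are placeholder assumptions, not Dell data.
    servers = 1000
    capex_per_server = 2500.0        # purchase price, USD
    watts_per_server = 300.0         # average draw per server
    pue = 1.5                        # facility overhead multiplier
    power_cost_kwh = 0.10            # USD per kWh
    admins = 5
    admin_cost = 90000.0             # fully loaded cost per admin, USD/year
    years = 3

    kwh_per_year = servers * watts_per_server * pue * 24 * 365 / 1000.0
    opex_per_year = kwh_per_year * power_cost_kwh + admins * admin_cost
    tco = servers * capex_per_server + years * opex_per_year

    print("3-year TCO: ${:,.0f} (${:,.0f} per server)".format(tco, tco / servers))

Plug in your own numbers; the point of the exercise is that the shape of your model, not some industry average, should drive your decisions.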

The WHIR’s take

The WHIR did a good write-up of the keynote; here is the concluding paragraph:

So, it seems that cloud best practices will help companies reduce their physical infrastructure, which seems to be a bit counter-intuitive, given that Rhodes is representing a hardware provider. But it makes sense. Given the never-ending list of projects for IT staff, and as they drive down costs, their business will grow, and they’ll be able to increase their IT spend for innovative efforts. “What we’re hoping to do is let you do more with less.”

Extra-credit reading:

Pau for now…


Customer reviews our 4-servers-in-one, the C6100

August 1, 2010

Outbrain is a company that provides content recommendation solutions for blogs and publishers.  Among their customers are such venerable names as USA Today, the Chicago Tribune, Slate and VentureBeat.

Data Center number three

The company recently decided to set up a third data center and went out looking for what type of kit they wanted to outfit it with.  Much to our joy, they decided on the Dell PowerEdge C6100.  Although they are currently waiting on delivery of the systems, Outbrain operations engineer Nathan Milford has been playing with a demo unit for several weeks.

Earlier this week Nathan posted his initial thoughts, along with pictures and diagrams, on the C6100.  The post is appropriately entitled: Some Notes on Dell’s C6100 Multi-Node Server Chassis

In his post Nathan talks about:

  • Who else they looked at and why they went with Dell
  • The basic layout of the C6100
  • The “unscientific” testing, research and math he did on power draw on an individual node.
  • How he intends to deal with some of the quirks and infrastructure changes the C6100s will cause.

My favorite quote from the post is:

SuperMicro, SGI, HP all have similar devices, but the thing they don’t have is DCS, which is more or less independent of Dell and can be agile like a smaller vendor, but with Dell’s backing and resources.

You’ll want to check back on Nathan’s blog as he plans to add to his notes after the servers are installed.

Extra-credit reading:

Pau for now…


Ubuntu, the Cloud and the Future — Neil Levine

July 27, 2010

After the cloud summit last week at OSCON, I sat down with Neil Levine of Canonical to see what was in store for Ubuntu cloud-wise (Canonical is a partner of ours in our cloud ISV program).  Neil is the VP of Canonical’s corporate services division, which handles their cloud and server products.

Here’s what Neil had to say:

Some of the topics Neil tackles:

  • The next Ubuntu release “Maverick Meerkat” and its geek-a-licious launch date: 10.10.10.
  • Look for Maverick to make Eucalyptus even easier to deploy and use.
  • Data processing and data analytics are among the key use cases in the cloud, and Canonical is looking to move up the stack and provide deep integration for apps like Hadoop and NoSQL.
  • What are some of the areas of focus for next year’s two releases, i.e. 11.04 and 11.10.
  • Project Ensemble: what it is and what its goals are.

Extra-credit reading

Pau for now…


My quick spiel on the cloud

July 25, 2010

At OSCON last week I ran into a compadre from a previous life, Fred Kohout.  Fred is now the CMO at UC4, a pure-play software automation company, and he, like I, was in Portland to attend OSCON and the Cloud Summit.

At the summit Fred did to me what I’ve done to so many others: he got me on the receiving end of a video camera to talk about where Dell plays in the cloud and how we see the cloud evolving.

You can check out Fred’s blog from the Summit where he posted my video as well as the interview he did with another former compadre, Peder Ulander, CMO at cloud.com.

Don’t touch that dial

If you’re interested in OSCON be sure to stay tuned.  I’ve got four more interviews from the event that I will be posting soon.

Pau for now…


The OpenStack design summit in review

July 22, 2010

Tuesday after the OSCON cloud summit I sat down with Rick Clark over a well-deserved beer.  Rick is the chief architect and project lead for the OpenStack compute project that was announced on Monday.

Last week I interviewed Rick on the first day of the inaugural OpenStack design summit and I wanted to catch up with him and get his thoughts on how it had gone.  This is what he had to say:

Some of the topics Rick tackles:

  • How it went engaging a very large technical group (100+) in an open design discussion patterned after an Ubuntu Developer Summit.
  • Some of the decisions he thought would be no-brainers turned out differently, e.g. OVF (Open Virtualization Format) and keeping the storage and compute groups separated.
  • Since the summit involved representatives from over 20 companies, some of them competitors, how good were people at putting away their business biases/agendas?
  • How far they got (hint: they got requirements from everyone for the first release).
  • They’ve already gotten their first code contributions.
  • How they plan to build a community: they’re actively looking to hire a community manager.  In the meantime it’s growing on its own; in a week the IRC channel has gone from 10 people to 150 as of Tuesday.

Extra-credit reading:

But wait, there’s more…

I got back from OSCON last night with a fistful of videos.  In addition to the above, coming soon to a browser near you are the following interviews:

  • Brett Piatt with more OpenStack goodness
  • J.P. Rangaswami, Chief Scientist at BT — Nature doesn’t require SLAs
  • Simon Phipps about his new company ForgeRock
  • Neil Levine, VP at Canonical, about what’s in store for Ubuntu.

Pau for now…


OpenStack Compute – talking to the chief architect

July 18, 2010

Rick Clark used to be the engineering manager at Canonical for Ubuntu server and security, as well as the lead on virtualization for their cloud efforts.  He’s now at Rackspace and is applying much of what he learned at Canonical to his new gig as project lead and chief architect of the just-announced OpenStack Compute.

Rick talked to me about what he brought with him from Canonical as well as the details behind OpenStack Compute.

Some of the topics Rick tackles:

  • What is the OpenStack Compute project (hint: it’s a fully open-sourced IaaS project)
  • Leveraging what Rick learned from the Ubuntu community, including a regular six-month cadence.
  • Rick’s goals for the design summit: develop a roadmap for the first release, spec out the software, and spend the last two days prototyping and hacking.
  • Why they went with the Apache 2 license and not the AGPL.
  • The Rackspace API (NASA had already started to switch from the Amazon API before the two efforts combined); see the sketch after this list.
  • The project’s core principles: open, open, open
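
Since OpenStack Compute adopts the Rackspace Cloud Servers model, here’s a rough sketch of what talking to that style of API looks like.  The endpoint, token, image ID and flavor ID are all hypothetical placeholders of mine, not an official example:

    import requests

    # Hypothetical endpoint and token; image and flavor IDs are placeholders too.
    api = "https://servers.example.com/v1.0/123456"
    headers = {"X-Auth-Token": "AUTH_tk_example"}

    # In the Rackspace-style compute API, provisioning a VM is one POST:
    # name it, pick an OS image, and pick a hardware flavor (size).
    resp = requests.post(api + "/servers", headers=headers, json={
        "server": {"name": "web1", "imageId": 112, "flavorId": 1}
    })
    print(resp.json())   # the new server's ID, status, addresses, etc.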

Extra-credit reading:

Pau for now…