Savtira streams media and apps from the cloud with beefy PowerEdge C combo

April 18, 2011

Savtira Corporation, which provides outsourced Cloud Commerce solutions, has chosen Dell DCS’s PowerEdge C line of servers and solutions to deliver streamed media and apps from the cloud.  Dell’s gear will help power the Savtira Cloud Commerce platform and Entertainment Distribution Network (EDN).

With a little help from PowerEdge C, businesses will now be able to use EDN to stream all digital media (business apps, games, music, movies, audio/ebooks) from the cloud to any device.  One particularly cool feature: since the state and configuration are cloud-based, consumers can switch between devices and pick up exactly where they pushed pause on the last device.

Talk about supercharging

To power Savtira’s EDN data center, the company picked PowerEdge C410xs packed with NVIDIA Tesla M2070 GPUs and driven by PowerEdge C6145s.  If you think GPUs are just for rendering first-person shooters, think again.  GPUs can also cost-effectively supercharge your compute-intensive solution by offloading a lot of the processing from the main CPUs.  According to NVIDIA, GPUs deliver the same performance as CPUs at 1/10 the cost and with only 1/20 of the power consumption.

To help you get an idea of the muscle behind this solution, the PowerEdge C410x PCIe expansion chassis holds up to 16 of the Tesla M2070 GPUs, each of which has more than 400 cores.  Two fully populated C410xs are in turn powered by one PowerEdge C6145 for a combined total of 33 teraflops in just 7U.
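For the curious, that 33-teraflop figure is straightforward multiplication.  Here’s a quick back-of-envelope sketch; the roughly 1.03 peak single-precision teraflops per M2070 is NVIDIA’s spec-sheet number, not something stated in this post:

```python
# Back-of-envelope check of the combined throughput claim.
# Assumption: ~1.03 peak single-precision teraflops per Tesla M2070
# (NVIDIA spec-sheet figure, not stated in the post above).
GPUS_PER_C410X = 16
TFLOPS_PER_M2070 = 1.03

def combined_tflops(num_chassis: int) -> float:
    """Peak teraflops from fully populated C410x chassis (GPU cards only)."""
    return num_chassis * GPUS_PER_C410X * TFLOPS_PER_M2070

# Two fully populated C410xs, as in the Savtira setup:
print(round(combined_tflops(2)))  # about 33 teraflops
```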

Talk about a lot of power in a little space 🙂

Extra-credit reading

  • PowerEdge C6145 — Dell DCS unveils its 4th HPC offering in 12 months, and it’s a beefy one
  • PowerEdge C410x — Say hello to my little friend — packing up to 16 GPGPUs
  • NVIDIA: from gaming graphics to High Performance Computing

Pau for now…

Live from World Hosting Days – AMD’s John Fruehe talks about the AMD-based PowerEdge C systems

March 23, 2011

This week, outside of Frankfurt, World Hosting Days is taking place.  A whole delegation of folks from the Data Center Solutions group is there to support the announcement of our new microserver line.  A lot of our key partners are there as well.  One such partner is AMD.

Earlier today, AMD director of product marketing John Fruehe held a session entitled “Core Scalability in a cloud environment.”  Above is a three minute section where John talks about the three AMD-based systems that are part of the PowerEdge C line:

  • The PowerEdge C5125 microserver, which we announced yesterday
  • The PowerEdge C6105, optimized for performance per watt per dollar
  • The PowerEdge C6145, our HPC monster machine

Take a listen as John walks you through the products and their use cases.

Pau for now…

Now Available: HPC monster machine

March 1, 2011

A couple of weeks ago, we announced the PowerEdge C6145 system made up of two servers crammed into a 2U enclosure with a total of 96 cores.  Today that system officially became available for purchase.

Rave reviews

This system got a great review in CRN yesterday entitled “Performance Of Dell’s PowerEdge C6145 Rack Server Off The Charts.”  To give you a taste, here is how the article begins:

Dell (NSDQ:Dell) has really outdone itself. On Tuesday, the company begins shipping a machine that the CRN Test Center can only describe as 2010 Server of the Year squared.

Officially called the PowerEdge C6145, Dell’s latest monster server more than doubled the Geekbench score of the reigning champ, the Dell R815.

Here is more from the coverage:

Talk about dense. Dell’s new PowerEdge C6145 server stuffs eight AMD Opteron processors in a single 2U enclosure, making it a standout for high-performance computing (HPC) and, potentially, virtualization…By way of comparison, Dell called out Hewlett-Packard’s eight-way ProLiant DL 980 G7, which has 8U and takes up four times as much space as the Dell box. This is especially important in HPC environments, which, in their scope, tend to put a premium on footprint.

And The Register had this to say:

This will be extremely useful for companies that want to attach lots of storage or networking to server nodes in dense configurations, or those who want to cram in a lot of cores into a box and lash them to lots of external GPU co-processors.

The C6145 is Dell Data Center Solutions group’s fourth HPC system in 12 months.  Looks like we’re picking up some momentum 🙂

Pau for now…

And on the other end of the spectrum — Microservers

February 16, 2011

Monday I wrote about the announcement of our mega-beefy, 96-core PowerEdge C6145 server, specifically geared to customers solving big problems involving huge and complex data sets in mapping, visualization, simulations and rendering.

At the other end of the spectrum, however, are customers, such as those offering low-end dedicated hosting solutions, who are looking for systems with just enough processing and storage to serve up straightforward, focused applications like webpages, streaming video, etc.  These “right-sized” systems are referred to as “micro” or “lightweight” servers.

Take a listen to Data Center Solutions marketing director Drew Schulke below as he explains the origin of the microserver and walks you through our second generation offering in this space.

Some of the areas Drew covers:

  • How Dell got into the microserver market 2-3 years ago
  • How the progression of Moore’s Law caused processing power to outstrip the needs of many applications
  • A walk-through of our second-generation microserver, which packs 12 single-socket servers into one 3U enclosure

We will continue making noise in this space.  Be sure to tune in next time, when our topic will be a mini “case study” of Dell’s first-generation microserver deployed at a large hoster in France.

Pau for now…

Dell DCS unveils its 4th HPC offering in 12 months, and it’s a beefy one

February 14, 2011

Today Dell Data Center Solutions (DCS) is announcing the PowerEdge C6145, number four in our line of offerings targeted specifically at High Performance Computing.  This AMD-based system, which contains two four-socket servers for a total of 96 cores, ranks as the highest-performing x86 2U shared-infrastructure server on the market based on SPECfp_rate2006 results.  In addition, the PowerEdge C6145 can deliver up to 534% better price/performance at 1/5 the cost and 1/4 of the rack space when compared to HP’s ProLiant DL980 G7.¹
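For a rough feel of where comparisons like this come from, here is a quick sketch that turns the SPECfp_rate2006 scores and rack heights cited in the footnote into performance per rack unit.  (The 534% price/performance figure also folds in list price, which isn’t reproduced here.)

```python
# Performance density from the cited SPECfp_rate2006 results:
# benchmark score per rack unit (U) for each machine.
c6145_per_u = 1310 / 2   # PowerEdge C6145: score 1310 in 2U
dl980_per_u = 1080 / 8   # ProLiant DL980 G7: score 1080 in 8U

advantage = c6145_per_u / dl980_per_u
print(round(advantage, 2))  # roughly 4.85x the performance per U
```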

The HPC beat goes on

When we in DCS launched our PowerEdge C line almost a year ago, our first HPC-focused machine was the Intel-based C6100.  We followed it three months later with our C410x expansion chassis to supercharge it and then, three months after that, we came out with the AMD version of the C6100, the PowerEdge C6105.  Now, three months after that system debuted, we are unveiling the C6145.  All three servers come in the same 2U package but with differing chips and architectures targeted at different HPC application types.

Check out the video below and let C6145 architect John Stuewe take you on a quick tour of this new muscle machine.

Hairy problem solver

The PowerEdge C6145, with its 755 GFLOPS and up to 1TB of memory, is specifically geared to solving big problems involving huge and complex data sets in mapping, visualization, simulations and rendering, and solving them faster.  With regards to efficiency, the shared-infrastructure design of the system can reduce the number of individual fans by 1/4 compared to traditional 2U systems, so less power is needed for cooling, resulting in higher performance per watt, per dollar.
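As a sanity check on a headline GFLOPS number like that, peak throughput is usually estimated as cores x FLOPs-per-cycle x clock speed.  The sketch below plugs in an assumed 2.2 GHz clock and 4 double-precision FLOPs per cycle per core; both of those values are assumptions for illustration, not Dell’s published configuration:

```python
# Generic peak-FLOPS estimate: cores x FLOPs-per-cycle x clock (GHz) = GFLOPS.
# The clock speed and FLOPs-per-cycle below are assumed values for illustration,
# not the C6145's actual shipping configuration.
def peak_gflops(cores: int, flops_per_cycle: int, ghz: float) -> float:
    return cores * flops_per_cycle * ghz

# 96 Opteron cores, 4 DP FLOPs/cycle (assumed), 2.2 GHz (assumed):
print(peak_gflops(96, 4, 2.2))  # same order of magnitude as the quoted 755 GFLOPS
```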

Supercharge it

As if 96 cores packed into 2U weren’t powerful enough, you can take your workloads “to 11” with the help of the PowerEdge C410x.  The C410x PCIe expansion chassis allows you to double the server to graphics processing unit (GPU) ratio to 1:8 to create a number-crunching uber powerhouse.

Dell DCS has been listening to its HPC customers and rolling out systems to meet their needs.  Today we announced the latest in our lineup, the PowerEdge C6145.

Pau for now…

¹ Based on testing by Dell Labs.  Dell PowerEdge C6145: SPECfp_rate2006 score of 1310 in 2U, as compared to HP ProLiant DL980 G7: SPECfp_rate2006 score of 1080 in 8U.  SPEC® and the benchmark name SPECfp® are registered trademarks of the Standard Performance Evaluation Corporation.  Competitive benchmarks stated above reflect results published or submitted to SPEC as of Feb 14, 2011.  The comparison presented above is based on the best-performing 8-chip x86 servers.  For the latest SPECfp_rate2006 benchmark results, visit the SPEC website.  Actual performance will vary based on configuration, usage and manufacturing variability.

El Reg gives DCS props for HPC innovation

October 5, 2010

The week before last, a crew from Dell was out at NVIDIA’s GPU Tech Conference, showing our latest and greatest offerings in the HPC space.  It looks like our PowerEdge C410x expansion chassis caught the eye of The Register’s HPC blog writer Dan Olds.

Below are some excerpts from Dan’s article, “Dell gets busy with GPUs,” followed by the video he shot.  I love the video theme music and the fact that it’s a “BPV (Bad Production Values) presentation.”  [BTW, we’ll have to give Dan the full Data Center Solutions (DCS) rundown at some point so that he can see that when it comes to design and innovation, the C410x is not an outlier 🙂 ]

From Dan’s Article:

Okay, let’s put it on the table: when the conversation turns to cutting-edge x86 server design and innovation, the name “Dell” doesn’t come up all that often. Their reputation was made on delivering decent products quickly at a low cost. I see that opinion in all of our x86 customer-based research – it’s even something that Dell employees will cop to.

That said, two of the most innovative and cutting-edge designs on the GPU Tech Conference show floor were sitting in the Dell booth, and that’s the topic of this video blog….

It’s the second product that really captured my interest. Their PowerEdge C410x is a 3U PCIe expansion chassis that can hold up to 16 PCIe devices and connect up to eight servers with Gen2 x16 PCIe cables. Customers can use it to host NVIDIA Fermi GPU cards, SSDs, Infiniband, or any other PCIe device their heart desires. What made my motor run was the possibility of cramming it full of Fermi cards and then using it as an enterprise shared device – NAC: Network Attached Compute.

…Dell deserves kudos for putting out this box. It’s a step ahead of what HP and IBM are currently offering, and it moves the ball forward toward an NAC future.

Pau for now…

NVIDIA: from gaming graphics to High Performance Computing

September 22, 2010

A few weeks ago a group from NVIDIA was out visiting Dell.  Their Tesla series of GPU cards is the primary line used in our newly announced C410x expansion chassis.  Filling up the C410x with NVIDIA cards and attaching it to a server can bring about ginormous increases in compute performance, helping to make HPC and scaled-out deployments wicked fast.

So how did NVIDIA get from rendering graphics for first-person shooters to creating GPUs that accelerate modeling, simulation, imaging, signal processing, etc.?  Listen to the interview below with Geoff Ballew of NVIDIA’s Tesla unit and learn. 🙂

Some of the ground Geoff covers:

  • NVIDIA’s not just for gaming any more
  • How a few years ago they found that their graphics chips were packing a lot of raw math horsepower, so they added a few extra features to the chips and built a suite of software so that the graphics cards could be used for general computation
  • How hard was it to convince HPC customers to take NVIDIA seriously in the compute arena?
  • What kind of performance gains are they seeing?
  • The accompanying software development tools and ecosystem of partners
  • The shift in NVIDIA’s workforce and culture as they’ve gotten into general compute processing – united by their love for GPUs 🙂

Pau for now…

Welcome the uber-dense AMD-based Cloud/HPC machine

September 13, 2010

The last couple of Dell Data Center Solutions offerings I’ve talked about, Viking and MDC, have been from the custom side of the house.  Both of these solutions are targeted specifically at a few select large customers.

The subject of today’s post, however, the PowerEdge C6105 server, is available to anyone running a scaled-out environment.  It, alongside the recently available C410x expansion chassis, represents the latest addition to the PowerEdge C line that we launched back in March.

Efficiency is its middle name

Designed to maximize performance per watt per dollar, the C6105 is ideal for energy- and budget-constrained scale-out environments.  Targets include scale-out Web 2.0, hosting, and HPC applications where core count and power efficiency are the priority.

Want a closer look? Click below and product manager Steve Croce will give you a quick overview.

Some of the points Steve touches on:

  • The 6105 is very dense: essentially four servers in a 2U chassis
  • The system leverages “shared infrastructure,” e.g. two power supplies for all four servers, four 2U fans to cool it, etc., which results in weight and power savings and allows for an extremely dense system.
  • The 6105 features the Opteron 4000 series, which is focused on power efficiency
  • It holds 12 3.5-inch disks, three per server

Pau for now…

Schlepping a 410x across Austin – A documentary

September 9, 2010

Last month we introduced the PowerEdge C410x expansion chassis which, when populated with GPGPUs and attached to a server, brings about ginormous increases in performance in a very cost-effective manner.

A couple of weeks after the system debuted, NextIO, which creates and sells virtualized I/O capabilities, was looking to qualify the machine in its lab located here in Austin.  Wanting to add that personal touch, Franklin Flint and Corbin Moore from our OEM solutions group decided to pack the system in the back of Franklin’s truck and hand-deliver it to Bob Shaw at NextIO.

What you have below is a no-expenses-spared documentary of their journey.  Enjoy! 🙂

Pau for now…

PowerEdge C410x — Whiteboard topology

August 5, 2010

In the last of my GPGPU/PowerEdge C410x trilogy I offer up a whiteboard session with the system’s architect, Joe Sekel.

Some of the topics Joe walks through:

  • How does having remote GPGPUs connected via cable back to a server compare in performance to having the GPGPUs embedded in the server?
  • The topology of the PCI Express x16 (16 lanes per link) plumbing: from the chipset in the host server through to the GPGPU.
  • The data transfer bandwidth that x16 Gen 2 gives you.
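The bullet points above can be made concrete with standard PCIe 2.0 numbers: each Gen 2 lane signals at 5 GT/s with 8b/10b encoding (8 data bits for every 10 bits on the wire).  A minimal sketch of that arithmetic:

```python
# Usable one-direction bandwidth of a PCIe Gen 2 link.
# Standard PCIe 2.0 figures: 5 GT/s per lane, 8b/10b line encoding.
def pcie_gen2_gbytes_per_s(lanes: int) -> float:
    gt_per_s = 5.0        # gigatransfers per second, per lane
    encoding = 8 / 10     # 8b/10b: only 8 of every 10 bits carry data
    return lanes * gt_per_s * encoding / 8  # divide by 8: bits -> bytes

print(pcie_gen2_gbytes_per_s(16))  # GB/s per direction for an x16 link
```

That works out to 8 GB/s in each direction for an x16 link, which is the headroom the cabled connection back to the host relies on.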

Pau for now…

Deep dive tour(s) of the PowerEdge C410x

August 5, 2010

In my last entry I talked about the wild and wacky world of GPGPUs and provided an overview of the PowerEdge C410x expansion chassis that we announced today.  For those of you who want to go deeper and see how to set up and install this 3U wonder, you’ll want to take a look at the three videos below.

  1. Card installation: How to install/replace an NVIDIA Tesla M1060 GPU card in the PowerEdge C410x “taco.”
  2. Setting up the system: How to set up the PowerEdge C410x PCIe expansion chassis in a rack, power it up and pull out cards.  Also addresses port numbering.
  3. BMC card mapping: How to map the PCIe cards in the PowerEdge C410x via the BMC web interface.  Also covered are how to monitor power usage, fans and more.

Happy viewing!  (BTW, the C410x’s code name was “titanium,” so when you hear Chris refer to it that way, don’t be thrown.)

Pau for now…

Say hello to my little friend — packing up to 16 GPGPUs

August 5, 2010

While the name GPGPU, which stands for General-purpose computing on graphics processing units, doesn’t flow lyrically off the tongue, it’s an extremely powerful concept.

What’s the big idea?

The idea behind this sexy five-letter acronym is to take a graphics processing unit (GPU) and expand its use beyond graphics.  Through the “simple” addition of programmable stages and higher-precision arithmetic to the rendering pipelines, the GPU is able to tackle general computing and offload it from the CPU.

So what does this mean, and why should you care?  Well, connecting GPGPUs to servers brings about ginormous increases in performance, helping to make HPC and scaled-out deployments wicked fast.  This works particularly well when you’re talking about modeling, simulation, imaging, signal processing, gaming, etc.  Not only can the addition of GPGPUs boost these processes by one or two orders of magnitude, but it does so much more cost-effectively than simply adding servers.
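To make the offload idea concrete, here is a toy sketch of the data-parallel pattern GPGPUs accelerate: the same arithmetic applied independently to every element (SAXPY, y = a*x + y, is the classic example).  Plain Python stands in for the GPU kernel; this illustrates the pattern, it is not actual GPU code:

```python
# SAXPY (y = a*x + y): each output element depends only on its own inputs,
# so a GPU can compute all of them at once. The list comprehension below
# stands in for the massively parallel GPU kernel.
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```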

What is Dell’s DCS group offering up?

The Data Center Solutions (DCS) team has an Oil & Gas customer that is always looking to push the envelope when it comes to getting the most out of GPGPUs in order to deliver seismic mapping results faster.  One of the best ways to do this is by increasing the GPU-to-server ratio.  In the market today, there are a variety of servers that have 1-2 internal GPUs, and there is a PCIe expansion chassis that has 4 GPUs.

What we announced today is the PowerEdge C410x PCIe expansion chassis, the first PCIe expansion chassis to connect 1-8 servers to 1-16 GPUs.  This chassis enables massive parallel calculations separate from the server, adding up to 16.48 teraflops of computational power to a datacenter.

But enough of my typing, see for yourself in the overview/walk-thru below starring DCS’s very own Joe Sekel, the architect behind the C410x.

Pau for now…

PowerEdge C6100 – HPC & Cloud machine

April 8, 2010

As a follow-on to last week’s PowerEdge C line overview, here is the first individual system overview: the C6100.  Click below and let Dell Solutions Architect Rafael Zamora guide you through the design and features of this densely packed machine targeted at HPC and cloud workloads.

Some of the highlights:

  • The PowerEdge C6100 holds the equivalent of 4 systems, which have been packaged into “sleds,” each containing boards, RAM and microprocessors.
  • Up front you can put a ton o’ disk drives: either 24 x 2.5″ drives or 12 x 3.5″ drives.
  • Great for markets like HPC clustering and search engines where compute density is key.  (This is not intended for running general purpose apps like Exchange, SQL or Oracle).
  • It will serve as the compute node in the Ubuntu Enterprise Cloud solution from our partner Canonical.

Still to come, overviews of the C2100 and C1100.

Pau for now…
