Structure: OpenStack launches Incubation program

June 24, 2011

Today was the second day of the two-day Structure conference here in San Francisco.  Cloud was the topic du jour, with heavy references to big data and to concepts and projects such as OpenFlow, Open Compute and OpenStack.  The format consisted mainly of moderated panels seated in comfy chairs, with breakout sessions scheduled a couple of times during the day.

While some of the panels and speakers were quite enlightening, I find the true benefit of a show like Structure comes from the networking and hallway conversations that occur.  One such conversation was with Jonathan Bryce of Rackspace about the incubation program they have just launched for OpenStack.

Some of the ground Jonathan covers:

  • Dealing with the question of how to expand OpenStack and include new projects
  • The initial three core projects: Compute, Object Storage and Image Service
  • The first two projects that have been approved for incubation: a dashboard and “keystone,” an identity service (see the sketch below)
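For readers unfamiliar with the projects Jonathan mentions, Keystone went on to become OpenStack’s identity service, the piece that lets one set of credentials unlock Compute, Object Storage and Image Service. The snippet below is a minimal sketch, not anything from the interview, of how a client authenticates against a Keystone v2.0 endpoint and then uses the returned token and service catalog to reach Compute; the endpoint URL, credentials and tenant name are placeholders.

```python
# Sketch only: authenticate with Keystone v2.0, then call Compute (Nova)
# using the token and service catalog it returns. All values are placeholders.
import requests

KEYSTONE_URL = "http://cloud.example.com:5000/v2.0"  # hypothetical endpoint

# Request a token from Keystone's v2.0 tokens API
resp = requests.post(
    KEYSTONE_URL + "/tokens",
    json={
        "auth": {
            "passwordCredentials": {"username": "demo", "password": "secret"},
            "tenantName": "demo",
        }
    },
)
resp.raise_for_status()
access = resp.json()["access"]

token = access["token"]["id"]          # scoped token for later requests
catalog = access["serviceCatalog"]     # list of available services/endpoints

# Find the Compute endpoint in the catalog and list servers with the token
compute_url = next(
    entry["endpoints"][0]["publicURL"]
    for entry in catalog
    if entry["type"] == "compute"
)
servers = requests.get(
    compute_url + "/servers",
    headers={"X-Auth-Token": token},
).json()
print(servers)
```

The design point Keystone illustrates is the one Jonathan touches on: once identity is its own project, new services can be added to the catalog without every client needing to know about them in advance.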

Stay tuned for more interviews from Structure 11.

Pau for now…


A walk thru Facebook’s HQ on Open Compute day

April 12, 2011

Last Thursday a group of us from Dell attended and participated in the unveiling of Facebook’s Open Compute project.

Much the same way that open source software shares the code behind a program, the Open Compute project has been created to provide the specifications behind the servers and the data center.  By releasing these specs, Facebook is looking to promote the sharing of data center and server technology best practices across the industry.

Pre-Event

The unassuming entrance to Facebook's Palo Alto headquarters.

The Facebook wall.

Facebook headquarters at 8am (nice monitors! 🙂).

Words of wisdom on the wall.

The Event

Founder and CEO Mark Zuckerberg kicks off the Open Compute event.

The panel moderated by Om Malik that closed the event. Left to right: Om, Graham Weston of Rackspace, Frank Frankovsky of Facebook, Michael Locatis of the DOE, Alan Leinwand of Zynga, Forrest Norrod of Dell (with the mic) and Jason Waxman of Intel.

Post-event show & tell: Drew Schulke of Dell's DCS team being interviewed for the nightly news and showing off a Dell DCS server that incorporates elements of Open Compute.

Extra credit reading

  • GigaOM: Bringing Facebook’s Open Compute Project Down to Earth
  • The Register:  Facebook’s open hardware: Does it compute?

Pau for now…


Rackspace’s head of OpenStack talks about Facebook’s Open Compute

April 7, 2011

This morning at Facebook’s headquarters in Palo Alto, the company announced its Open Compute project.  Partners and kindred spirits were there to tell the story behind Open Compute and explain what they think it means to the industry.  One such group was the team from Rackspace.  I got some time with Jim Curry, who heads up OpenStack at Rackspace, after the event officially ended.

Here is what Jim had to say:

Some of the topics Jim covers:

  • Driving efficiencies in data center design requires looking at the issue holistically.
  • Learning from Facebook’s successes and failures.
  • Looking forward to collaboration in an area that hasn’t historically had a lot of collaboration.
  • Engagement with Facebook engineers on how to run OpenStack on their hardware.

Pau for now…


Frank Frankovsky of Facebook talks about Open Compute — how they got there and where they go from here

April 7, 2011

Former Dell DCS dude Frank Frankovsky has been at Facebook for about 18 months.  Frank is Facebook’s Director of Hardware Design and Supply Chain, and since he arrived he has been heavily involved in the Open Compute project.  Today was the big day when Open Compute made its worldwide debut.

Frank represented Facebook on the panel discussion which was moderated by GigaOM’s Om Malik.  After the panel I was able to grab a few minutes with Frank, between press interviews, and learn first hand about the project.

Some of the topics Frank covers:

  • What he and his team do at Facebook
  • Their brand new data center, which is running Open Compute infrastructure
  • Opening up the details and specs of their data center and the systems they are running
  • The genesis of the Open Compute project
  • What the next steps are for the Open Compute project

Pau for now…


Facebook, Open Compute and Dell

April 7, 2011

Today at Facebook’s headquarters in Palo Alto, the company and a collection of partners such as Dell, Intel and AMD, as well as kindred spirits like the founder of Rackspace (the company behind OpenStack) and the CIO of the Department of Energy, are on hand to reveal the details behind Facebook’s first custom-built data center and to announce the Open Compute project.

Efficiency: saving energy and cost

The big message behind Facebook’s new data center, located in Prineville, Oregon, is one of efficiency and openness.  The facility will use servers and technology that deliver a 38 percent gain in energy efficiency.  To share the knowledge that the company and its partners have gained in constructing this hyper-efficient, hyper-scale data center, Facebook is announcing the Open Compute project.

Much the same way that open source software shares the code behind a program, the Open Compute project has been created to provide the specifications behind the hardware.  As a result, Facebook will be publishing the specs for the technology used in its data center’s servers, power supplies, racks, battery backup systems and building design.  By releasing these specs, Facebook is looking to promote the sharing of data center and server technology best practices across the industry.

How does Dell fit in?

Dell, which has a long relationship with Facebook, has been collaborating on the Open Compute project.  Dell’s Data Center Solutions group has designed and built a data center solution using components from the Open Compute project, and the server portion of that solution will be on display today at Facebook’s event.  Additionally, Forrest Norrod, Dell’s GM of server platforms, will be a member of the panel at the event, talking about the two companies’ common goal of designing the next generation of hyper-efficient data centers.

A bit of history

Dell first started working with Facebook back in 2008, when the company had a “mere” 62 million active users.  At that time, the three primary areas of focus with regard to Facebook’s IT infrastructure were:

  1. Decreasing power usage
  2. Creating purpose-built servers to match Facebook’s tiered infrastructure needs
  3. Having tier 1 dedicated engineering resources to meet custom product and service needs

Over the last three-plus years, as Facebook has grown to over 500 million active users, Dell has specifically helped address these challenges by:

  • Building custom solutions to meet Facebook’s evolving needs, from custom-designed servers for their web cache, to memcache systems, to systems supporting their database tiers.
  • Delivering these unique servers quickly and cost-effectively via Dell’s global supply chain.  Our motto is “arrive and live in five”: within five hours of the racks of servers arriving at the dock doors, they’re live and helping to support Facebook’s 500 million users.
  • Achieving the greatest performance with the highest possible efficiency.  In a single year, as a result of Dell’s turnkey rack integration and deployment services, we were able to save Facebook 84,000 pounds of corrugated cardboard and 39,000 pounds of polystyrene.

Congratulations Facebook! And thank you for focusing on both open sharing and on energy efficiency from the very beginning!

Pau for now…

