Today the OpenStack design summit wrapped up down in San Antonio. The summit featured close to 300 attendees representing 90 different companies. One of the key partners since the project kicked off back in July has been Citrix. On Wednesday I caught up with Gordon Mangione, Vice President of cloud at Citrix, to get his thoughts on the project and this week’s summit. Here’s his enthusiastic response:
Continuing in my series of videos from the OpenStack design summit this week in San Antonio, here is an interview I did yesterday with Eucalyptus Systems co-founder Graziano Obertelli.
Eucalyptus allows enterprises to set up open source infrastructure-as-a-service private clouds. Eucalyptus is also one of the key ingredients in the Ubuntu Enterprise Cloud that is being certified to run on Dell’s PowerEdge C systems as part of our cloud ISV program.
Here is what Graziano had to say:
Some of the ground Graziano covers:
What goals the Eucalyptus team has for the summit
They’ve recently hired a community manager – Mark Atwood
Yesterday, near the end of day two of the OpenStack design summit, I caught up with Rick Clark, chief architect of the OpenStack platform. I wanted to get Rick’s thoughts on how the four-month-old open source cloud computing project and the summit were going.
Here’s what he had to say:
Some of the ground Rick covers:
The goal of the summit as well as the goal of the next two releases.
Another of yesterday’s featured speakers at the OpenStack design summit was Accenture partner Joe Tobolski. Joe is part of Accenture Labs, which looks at emerging technologies, and he is responsible for assets and architecture as part of Accenture’s global cloud program.
I sat down with Joe in the cafe downstairs and got his thoughts on why OpenStack would be attractive to enterprises as well as how the Accenture team was participating in the summit.
One of the featured speakers during the kick off of the OpenStack design summit yesterday was NASA CTO of IT, Chris Kemp. OpenStack is an open source cloud platform and the compute side of the project is based on code from NASA’s Nebula cloud.
I got some time with Chris and learned about NASA’s involvement in the project:
Some of the ground Chris covers:
Nebula and the cloud computing platform code base
NASA’s huge data needs and what they do with the data
Serendipity: NASA’s cloud engine + Rackspace’s file system engine
How NASA is working with the project: a two-way street
Yesterday morning I made the drive down to San Antonio for OpenStack’s second design summit (and first open to the public). If you’re not familiar with OpenStack, it’s an open source cloud platform founded on contributed code from Rackspace and NASA’s Nebula cloud. The project was kicked off back in July at an inaugural design summit held in Austin.
The project has picked up quite a bit of momentum in its first four months. Attending this week’s 4-day conference are close to 300 people, representing 90 companies, from 12 countries. The event is broken into a business track and design track (where actual design decisions are being made and code is being written).
Powering the Install Fest
For the project, Dell has sent down a bunch of PowerEdge C servers, which have been set up upstairs on the 5th floor. OpenStack Compute has been installed on the two racks of servers, which are up and running. Tomorrow, coders will get access to these systems during the install fest, where attendees will each be given a virtual machine on the cloud to test and learn about installing and deploying OpenStack.
I got Bret Piatt, who handles Technical Alliances for OpenStack, to take me on a quick tour of the set-up. Check it out:
Featuring: Bret Piatt, PowerEdge C1100, C2100, C6100 and C6105
Ironically, when Dell, the company that built its success around supply chain management excellence, started the Data Center Solutions group to serve the “biggest of the big,” supply chain and procurement were just bit players.
I recently sat down with Chris Thompson who heads up the DCS supply chain and procurement organization and learned how this changed and the importance of his group in meeting the needs of this very unique customer set.
Some of the topics Chris tackles:
How his group helps customers get to revenue faster.
To what extent the DCS supply chain org is independent of Dell’s “normal” processes and procedures, and to what extent it leverages them.
How Chris’s group has affected traditional Dell supply chain practices.
Last week I attended and presented at Dell’s analyst summit, “Dell Services and Solutions for the Virtual Era.” Besides sharing Dell’s cloud strategy with analysts I also captured their thoughts on the event.
Here is an interview I did on the first day with Redmonk analyst and founder, Stephen O’Grady.
The questions Stephen addresses:
[0:10] Based on the talks given by Dell execs and customers, what are Stephen’s key takeaways?
[0:58] As the follow-on to the press and analyst event in San Francisco in March, to what extent has Dell delivered on the promises we made at that time, and to what extent might we have fallen short?
[1:50] What progress would Stephen hope to see at another event six months from now?
[2:30] Bonus question: Dell talks about making the transformation into a solutions company; to what extent is Stephen seeing this happen?
As I mentioned in my previous entry, last week in Boston we held the “Dell Services and Solutions for the Virtual Era” analyst summit. During the first day of this two-day event we interviewed a few analysts to get their take on the message Dell was delivering.
Earlier this week Dell held an industry analyst summit in Boston. The event, “Dell Services and Solutions for the Virtual Era” was attended by analysts from around the world and was a follow-on to the event Dell held in San Francisco back in March.
Please take your seats, the summit is about to begin.
What went on
The two-day event featured presentations from Dell’s senior leadership, customer and partner panels, breakout sessions and 1:1s between analysts and Dell subject matter experts. The first day also culminated in a solutions expo and dinner held at the very cool Institute of Contemporary Art.
What were the key messages?
The high-level messages that Dell kept reiterating were:
We are executing on our strategy of delivering solutions that are open, capable and affordable which ultimately give our customers the power to do more.
We are undergoing a fundamental change in the way we’re approaching our customers. We are moving away from transactional selling motions toward a more consultative approach.
Right before the guests arrive. The solutions expo and dinner.
How was it received?
It will be interesting to see the reports that are generated from this week’s summit, but we did receive some very positive tweets during the event (check out the whole Twitter feed from the event):
Conclusion from [Dell Analyst Summit]: Dell 2.0 has arrived. We’ve called it 1.x to date. No longer. They’ve cracked the “solutions” code — Jonathan Eunice, Illuminata
Dell’s vision is quite clear and lingo shows detachment from manufacturer approach – openness remains as a mantra — Giorgio Nebuloni, IDC
Dell talking solution accelerators. Didn’t hear this message from them a few yrs ago. Highlights strengths of Perot & Dell 2gether — Tim Sheedy, Forrester
Great session with Dell Health IT team. Great progress and compelling positioning in the space. Good prog telling a sngl story — Crawford Del Prete, IDC
Updated 11/04
Here are interviews I did with two of the analysts who attended the summit:
In my last entry I talked about how Steve Perlman, CEO and founder of OnLive, joined the recent press round table we had in New York. OnLive is a cloud-based gaming company that launched earlier this year and whose servers were custom built by Dell’s Data Center Solutions (DCS) group.
To give you a bit more insight into how the two companies worked together, here is a short video with Bruce Grove, OnLive’s director of strategic relations talking about the relationship between Dell and OnLive.
Some of the ground Bruce covers:
The value, as a start-up, of working with someone who knows supply chain and logistics and how to build tons of servers.
Working together as a team to design the servers (engineering teams on both sides as well as manufacturing teams).
Timothy Prickett Morgan of everyone’s favorite vulture-branded media site The Register attended a round table discussion we held a few weeks ago in New York. His piece from that event, which was focused on the cloud, was posted yesterday.
You should check out the whole article but here are some snippets to whet your appetite:
What DCS is all about
For the past several years – and some of them not particularly good ones – Dell’s Data Center Services (DCS) bespoke iron-making forge down in Round Rock, Texas, has been a particularly bright spot in the company’s enterprise business.
The unit has several hundred employees, who craft and build custom server kit for these picky Webby shops, where power and cooling issues actually matter more than raw performance. The high availability features necessary to keep applications running are in the software, so you can rip enterprise-class server features out of the boxes – they are like legs on a snake.
How we’re working with web-based gaming company OnLive
“These guys took a bet on Facebook early, and they benefited from that,” says Perlman [OnLive Founder and CEO]. “And now they are making a bet on us.”
OnLive allows gamers to play popular video games on their PCs remotely through a Web browser and soon on their TVs with a special (and cheap) HDMI and network adapter. The games are actually running back in OnLive’s data centers, and the secret sauce that Perlman has been working on to make console games work over the Internet and inside of a Web browser is what he called “error concealment”.
DCS had to create a custom server to integrate their video compression board into the machine, as well as pack in some high-end graphics cards to drive the games. Power and cooling are big issues. And no, you can’t see the servers. It’s a secret.
Last week a couple of us went down to San Antonio to help represent the OpenStack project at Rackspace’s partner summit. While there I met up with the VAR Guy. Mr. Guy got me chatting about Dell’s Data Center Solutions group, where we’ve been and where we’re going. Below is the resulting video he put together featuring myself and San Antonio’s greenery. (See the original article this came from).
Some of the topics I tackle:
How Dell’s Data Center Solutions Group is designing servers for high-end cloud computing
How Dell is integrating hardware with software in cloud servers
Coming soon: Dell Cloud Solution for Web Applications/Leveraging Joyent‘s software
The week before last I headed up to Chicago to attend our partner Aster Data’s Big Data Insights summit. One of the featured speakers was James Kobielus of Forrester Research, a leading expert on data warehousing, predictive analytics, data mining, and complex event processing.
With that background I thought Jim would be the perfect guy to ask about in-database analytics (where the actual analytics is colocated in the data warehouse rather than having to schlep data from the warehouse to a separate analytics application). So I did.
Some of the ground Jim covers:
Although there’s some new stuff there, in-database analytics is an old approach.
What’s new is we finally have an open interface/standard that allows a wider range of applications to be pushed down into the database and executed there.
Moving from proprietary interfaces/languages toward an open standard built on MapReduce/Hadoop.
The benefits of this open approach
Aster Data as an evangelist for in-database analytics
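The pattern Jim describes, pushing the analysis down so only results travel instead of schlepping every row out of the warehouse, can be sketched in a few lines of Python. The snippet below uses the standard library’s sqlite3 as a stand-in for a real warehouse; the table and data are invented purely for illustration:

```python
import sqlite3

# In-memory stand-in for a data warehouse (table and values are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 250.0), ("west", 175.0)])

# "Schlep" approach: pull every row out of the database, then analyze
# in a separate application.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# In-database approach: push the computation down; only results travel.
pushed = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

assert totals == pushed  # same answer, far less data movement at scale
```

Both approaches return the same totals; the difference is that at warehouse scale the first ships every row to the application, while the second ships one row per region.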
One of the key ingredients for the success of any open source project is a strong community manager. Coming on board to fill that role for the not-quite three-month-old OpenStack project is Stephen Spector. (If you’re not familiar with OpenStack, it’s an open source cloud platform).
Stephen made his first public appearance in his new role today at the Rackspace partner summit in San Antonio. I was able to catch Stephen first thing this morning before the summit kicked off.
Some of the ground Stephen covers:
His background: 14 years at Citrix, where he initially ran developer alliance programs and spent the last 3 years running the Xen.org community.
Why Stephen joined OpenStack (he jumped at the chance to build a community from scratch).
He sees his role as that of a communication conduit
One of his first tasks is to find out who makes up the community, e.g. developers, users, students, researchers, partners.
He’s very interested in making events like next month’s design summit successful, as well as in globalization.
The week before last a crew from Dell was out at NVIDIA’s GPU tech conference, showing our latest and greatest offerings in the HPC space. It looks like our PowerEdge C410x expansion chassis system caught the eye of the Register HPC blog writer Dan Olds.
Below are some excerpts from Dan’s article, “Dell gets busy with GPUs,” followed by the video he shot. I love the video theme music and the fact that it’s a “BPV (Bad Production Values) presentation.” [BTW we’ll have to give Dan the full Data Center Solutions (DCS) rundown at some point so that he can see that when it comes to design and innovation, the C410x is not an outlier 🙂 ]
From Dan’s Article:
Okay, let’s put it on the table: when the conversation turns to cutting-edge x86 server design and innovation, the name “Dell” doesn’t come up all that often. Their reputation was made on delivering decent products quickly at a low cost. I see that opinion in all of our x86 customer-based research – it’s even something that Dell employees will cop to.
That said, two of the most innovative and cutting-edge designs on the GPU Tech Conference show floor were sitting in the Dell booth, and that’s the topic of this video blog….
It’s the second product that really captured my interest. Their PowerEdge C410x is a 3U PCIe expansion chassis that can hold up to 16 PCIe devices and connect up to eight servers with Gen2 x16 PCIe cables. Customers can use it to host NVIDIA Fermi GPU cards, SSDs, Infiniband, or any other PCIe device their heart desires. What made my motor run was the possibility of cramming it full of Fermi cards and then using it as an enterprise shared device – NAC: Network Attached Compute.
…Dell deserves kudos for putting out this box. It’s a step ahead of what HP and IBM are currently offering, and it moves the ball forward toward an NAC future.
I flew to Chicago today to support our partner Aster Data’s Big Data Insight summit. This Chicago event is part of a series of roadshows that Aster is doing for customers in cities across the US and Europe. Today’s event was held in the trendy Hotel Sax and featured talks from analysts as well as partners (SAS, MicroStrategy and Dell).
Attending his first Aster roadshow was their brand new CEO, Quentin Gallivan. As the post-event happy hour was winding down I grabbed a few minutes with Quentin. Here is what he had to say:
Some of the topics Quentin covers:
What he did before Aster: CEO of BI SaaS provider PivotLink; CEO of Postini, a SaaS email security company; and key exec at VeriSign.
Why Quentin decided to join Aster.
How he heard about the opportunity.
What he sees as Aster’s opportunity.
How the Dell partnership allows Aster to deliver a total solution.
A few weeks ago a group from NVIDIA was out visiting Dell. Their Tesla series of GPU cards are the primary cards used in our newly announced C410x expansion chassis. Filling up the C410x with NVIDIA cards and attaching it to a server can bring about ginormous increases in compute performance, helping to make HPC and scaled-out deployments wicked fast.
So how did NVIDIA get from rendering graphics for first-person shooters to creating GPUs that accelerate modeling, simulation, imaging, signal processing, etc.? Listen to the interview below with Geoff Ballew of NVIDIA’s Tesla unit and learn. 🙂
Some of the ground Geoff covers:
NVIDIA’s not just for gaming any more
A few years ago NVIDIA found that its graphics chips were getting a lot of raw math horsepower, so it added a few extra features to the chips and built a suite of software so that the graphics cards could be used for general computation.
How hard was it to convince HPC customers to take NVIDIA seriously in the compute arena?
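For a concrete picture of what “general computation” on a graphics chip means, a common introductory example in GPU computing is SAXPY (a times x plus y, over whole arrays): an element-wise operation in which every output depends only on its own inputs, so the work parallelizes trivially. The sketch below expresses the pattern in plain NumPy as an illustration only; on NVIDIA hardware the same arithmetic would be written as a CUDA kernel and spread across thousands of GPU cores:

```python
import numpy as np

# SAXPY ("single-precision a times x plus y"), a classic GPU compute demo.
# Every element of the result depends only on the matching elements of x
# and y, which is exactly the data-parallel shape that maps well to GPUs.
a = 2.0
x = np.arange(4, dtype=np.float32)   # [0., 1., 2., 3.]
y = np.ones(4, dtype=np.float32)     # [1., 1., 1., 1.]

result = a * x + y                   # computed element-wise in one shot
# result -> [1., 3., 5., 7.]
```

The same independent, per-element arithmetic shows up in the modeling, simulation and imaging workloads Geoff mentions, which is why those extra chip features and the accompanying software suite opened the door to HPC.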
A couple of days ago Bret Piatt, who handles Technical Alliances for OpenStack, came up to Austin for further discussions about OpenStack with our team’s software engineers. If you’re not familiar with OpenStack, it is an open source cloud platform founded on contributed code from Rackspace and NASA’s Nebula cloud.
The project was kicked off a couple of months ago at an inaugural design summit held here in Austin. The summit drew over 25 companies from around the world, including Dell, to give input on the project and collectively map out the design for the project’s two main efforts, Cloud Compute and Object Storage.
Since the summit, and the project’s subsequent announcement the following week at the OSCON Cloud Summit, the community has been digging in. The first object storage code release will be available this month and the initial compute release, dubbed the “Austin” release, is slated for October 21. Additionally, the second OpenStack Design Summit has been set for November 9-12 in San Antonio, Texas, and is open to the public.
OpenStack visits Dell
During Bret’s visit to Dell he met with a bunch of folks including two of our software architects, Greg Althaus and Rob Hirschfeld. The three talked about how things were going with the project since the summit as well as specific ways in which Dell can contribute to the OpenStack project.
Below you can see where I crashed their whiteboard session and made them tell me what they were doing. I then followed them, camera in hand, down to the lab, where Greg and Rob showed Bret the system we have targeted for running OpenStack.
Some of the topics (L -> R) Bret, Greg and Rob touch on:
Bret: Getting ready for the object storage release in September and compute in October. Looking to get the right hardware spec’d out so that people can start using the solution once it’s released.
Rob: Learning how the project has been coming together since the design summit. Interested in how the three code lines (storage, NASA compute and Rackspace compute) are converging, along with the input gathered at the design summit and from the community.
Greg and Rob take Bret to the lab to show him the C6100, which could be a good candidate for OpenStack.
Next step: get OpenStack into the lab and start playing with it.
iland is a provider of cloud computing infrastructure with high-availability data centers specifically designed for cloud computing in Boston, Washington D.C., Houston, Los Angeles, Dallas, and London. To stay competitive in the cloud infrastructure business, iland needs to gain as much value as possible from every watt of power and every square foot of data center floor space.
One day over lunch they were introduced to the PowerEdge C6105 and were impressed with the product’s density and power efficiency. One thing led to another, and here is iland’s CTO Justin Giardina talking about why they are so interested in the PowerEdge C6105.
Some of the points Justin makes:
The primary reasons iland wanted to talk to Dell about the 6105 were density and power draw.
iland can stack 20 systems in one cabinet, and since each 6105 contains 4 servers, filling the cabinet with 6105s in effect gives them 80 servers (4x the compute power per cabinet).
The system only pulls 3 amps from both power supplies.