In stark contrast to one year ago, this year's show was well-attended and upbeat.
Last week I attended the HPC on Wall Street meeting at the Roosevelt Hotel in New York City (follow the link for the speaker slides). What a difference a year can make. If you recall, last year's September meeting fell right in the middle of the Wall Street meltdown. Lehman and Merrill were recent casualties, and everyone seemed to be waiting for the other shoe to drop. Attendance was down, and the most-asked question seemed to be “How are you guys doing?” (“guys” meaning a company). There was also some justified panic in the air, as the “Wall Street, what could go wrong?” boat was taking on more water than most people suspected.
I don’t want to dwell on the past, other than to say that at last year's event some attendees were of the opinion that IT, and in particular HPC on Wall Street, would take several years to rebound. The abrupt failures of Lehman and Merrill would mean lots of extra hardware floating around and reduced sales. Furthermore, HPC clusters are used to calculate the risk associated with those derivative things we were hearing so much about; once you know the risk, you can set a price. The only problem is that those fancy Monte Carlo calculations never had a variable for the mortgages that were being sold to anyone who could hold a pen. Thus, some introspection and retrenching seemed to be in order.
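To make that point a little more concrete, here is a toy Monte Carlo sketch in Python. It is purely my own illustration, not any real pricing model; the loan count, default probability, and recovery rate are made-up parameters. The point is simply that when the default-probability input is effectively missing (set near zero), the simulation will happily report that the pool is nearly riskless.

```python
# A toy Monte Carlo valuation of a pool of mortgages -- my own illustration,
# not any real Wall Street model. The loan count, default probability, and
# recovery rate below are made-up parameters for the sake of the example.
import random

def simulate_pool(n_loans=1000, loan_value=1.0, default_prob=0.02,
                  recovery=0.4, trials=2000):
    """Return the expected pool value and a crude 5% worst-case value."""
    outcomes = []
    for _ in range(trials):
        value = 0.0
        for _ in range(n_loans):
            if random.random() < default_prob:
                value += loan_value * recovery   # defaulted loan recovers a fraction
            else:
                value += loan_value              # loan pays in full
        outcomes.append(value)
    outcomes.sort()
    expected = sum(outcomes) / trials
    worst_5pct = outcomes[int(0.05 * trials)]    # 5th-percentile outcome
    return expected, worst_5pct

if __name__ == "__main__":
    # If default_prob is set near zero -- roughly what happens when the model
    # has no realistic variable for shaky mortgages -- the pool looks riskless.
    for p in (0.001, 0.02, 0.10):
        expected, worst = simulate_pool(default_prob=p)
        print(f"default_prob={p:.3f}  expected={expected:7.1f}  5% worst case={worst:7.1f}")
```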
Jump ahead to this year, and it seemed like nothing ever happened! The break is over; everybody back in the pool. Of course, this is based on my casual observations, but there was a huge crowd, and I don’t think they were looking for jobs. Indeed, people were talking about HPC and all the goodies that go with it. There were also a large number of vendors, which I took as a positive sign. As for myself, I was pulling double duty: working as a booth geek at the Appro International table and as the Linux Magazine HPC dude. As a journalist, I did have a chance to sit down with Erik Troan (of Red Hat RPM fame) and talk about his new company, rPath. I’ll have more on the rPath “deep version control” technology in the future.
There is a little more to the Appro gig, but first I want to jump back to last week, where I talked about how 10 GigE (as happened with both Fast Ethernet and GigE before it) would come down in price and see wide use in HPC. My reason was the simplicity of Ethernet. Now, before all my InfiniBand friends jump all over me, I also said that IB is the high-performance leader and is already faster than 10 GigE. I am rehashing this story because my buddies over at HPCwire posted an introduction to the story and then a rebuttal from reader Patrick Chevaux. Then, in a comment on Patrick’s post, Open MPI developer/leader Jeff Squyres listed the number of lines of code in Open MPI for each network transport:
- Myricom MX (Myrinet 10G and Ethernet): 1,210 and 2,331 (Open MPI has two different “flavors” of MX support)
- Shared Memory: 2,671
- TCP (used by Ethernet): 4,159
- OpenFabrics (InfiniBand): 18,627
That is correct. The IB OpenFabrics interface requires about 4.5 times more code than the TCP layer and at least 7 times more than the shared memory or Myricom MX interfaces (the MX interface is available for GigE through the Open-MX project). Again, IB is good stuff, and the list above says nothing about performance, but it does give an indication of the relative complexity of the various transport layers.
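For those who like to check the arithmetic, the ratios fall straight out of the numbers Jeff posted. The few lines of Python below are just my own quick calculation (the line counts are his; the script is not part of Open MPI):

```python
# Code-size ratios computed from the line counts Jeff Squyres posted
# (the counts are his; this comparison script is just my quick check).
loc = {
    "Myricom MX (flavor 1)":    1210,
    "Myricom MX (flavor 2)":    2331,
    "Shared memory":            2671,
    "TCP (Ethernet)":           4159,
    "OpenFabrics (InfiniBand)": 18627,
}

ib = loc["OpenFabrics (InfiniBand)"]
for name, lines in loc.items():
    print(f"{name:26s} {lines:6d} lines   IB is {ib / lines:4.1f}x larger")
# TCP comes out around 4.5x, shared memory around 7x, and MX 8x to 15x.
```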
As I said, there are those who prefer simplicity to performance. Perhaps a car analogy will help. There are all types of cars, and they all can get you from point A to point B. If you need speed, a Formula 1 car is your best bet. Such cars are not simple to drive, hold only one person, and require specialized parts, but boy can they go. On the other hand, if you are not as interested in speed, you might choose a pickup truck. It may not have the speed, but it just works: it is dependable, can haul lots of stuff, parts are cheap, and many of your neighbors have the same model. It is all about your needs and budget. InfiniBand is the F1 car and Ethernet is the pickup truck. (And please, don’t quote this out of context.)
Coming back to HPC on Wall Street: the financial models that run on clusters are not as sensitive to interconnect speed as many other codes, so they are a good fit for GigE and 10 GigE. Prior to the event, Appro hired me to write a white paper about 10 GigE and clustering. In the paper, I discuss various cluster designs using Appro hardware and Arista 10GBASE-T switches (10GBASE-T is 10 GigE with standard RJ-45 connectors and Cat 6 cabling). Part of my job at the show was to answer questions about my freshly minted white paper, which is currently available without registration from Appro. The upshot: if Ethernet works for you, the comfort level you now enjoy can continue with your next cluster. The white paper helps you understand how to implement a cluster using 10GBASE-T (or SFP+) Ethernet.
I expect a few more chapters in the 10 GigE story this year. I find it rather comforting that discussions about HPC interconnect options are taking place. Last year at this time, the only options people seemed to be talking about were the now-worthless ones.
Twitter update: Did you read my announcement about HPC for Dummies? If you were one of the faithful followers, this would be old news.