Today's HPC Clusters Resource Center

Before You Commit: Four Key Questions To Ask About InfiniBand

Forget the glossy data sheets and single-number benchmarks. Get the right information to make the right decisions.

Can I Virtualize I/O?

In a traditional HPC cluster, two networks are used to avoid congestion: one network carries storage traffic and the other carries MPI (Message Passing Interface) traffic. This approach, while functional, fails to take advantage of the true I/O capability of a high performance network. In addition, it adds a layer of cost and complexity (more places for things to fail) to the cluster. One solution to this problem is to place both storage and compute traffic on the same interconnect.
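As a rough illustration of the compute side of that traffic, the sketch below is a minimal MPI ping-pong benchmark in C. It is not taken from the article; the 1 MiB message size and 100-iteration count are arbitrary illustrative values. On an InfiniBand cluster, this exchange would run over the same fabric that a converged design also uses for storage.

/* Minimal MPI ping-pong sketch: rank 0 and rank 1 exchange a buffer
 * repeatedly and report the approximate bandwidth achieved.
 * The message size and iteration count are arbitrary choices. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 1 << 20;        /* 1 MiB per message */
    const int iters = 100;
    char *buf = calloc(msg_bytes, 1);
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0) {
        /* Each iteration moves msg_bytes in each direction. */
        double gbytes = 2.0 * iters * msg_bytes / 1e9;
        printf("Approx. bandwidth: %.2f GB/s\n", gbytes / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Built with mpicc and run with two ranks on adjacent nodes, a test like this shows what the MPI side of the fabric delivers; the question the article raises is whether that same fabric can also carry storage traffic without a second network.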

An example of storage/compute convergence is the QLogic VIC (Virtualized I/O Controller) technology. By implementing a multi-protocol VIC controller in its InfiniBand switches, QLogic provides transparent access to either Fibre Channel or Ethernet networks. Figures Two and Three illustrate the VIC technology for a database cluster; HPC clusters can enjoy similar advantages.

Figure Two: Database Clustering and I/O Deployment using Traditional FC and Dual GigE Network

Figure Three: QLogic DB Cluster and Consolidated I/O With VIC, multi-protocol controller

Is There Robust Software and Tool Support?

Support for high performance interconnects…
