The Ignorance is Bliss Approach

Dynamic Parallel Execution: Losing control at the high end.

In last month’s column, I talked about programming large numbers of cluster nodes. By large, I mean somewhere around 10,000. To recap quickly, I pointed out that depending on large numbers of things increases the chance that one of them will fail. I then proposed that it would be cheaper to develop software that can live with failure than to engineer hardware redundancy. Finally, I concluded that adapting to failure requires dynamic software: unlike statically scheduled programs, dynamic software adapts at run-time. The ultimate goal is to make cluster programming easier, so you can focus more on the problem and less on the minutiae of message passing.
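
To make the distinction concrete, the sketch below (plain Python rather than MPI, with placeholder names, and not a preview of any tool mentioned later in this series) shows one way a dynamic scheduler behaves: workers pull tasks from a shared queue as they become free, and a task whose worker fails is simply put back in the queue rather than forcing the whole run to be re-planned.

    # A minimal, illustrative sketch of dynamic scheduling: workers pull tasks
    # from a shared queue at run-time, and a task whose worker hits a failure
    # is handed back to the pool instead of stalling the whole job.
    # All names here are placeholders, not part of any real cluster tool.
    import multiprocessing as mp
    import random


    def compute(task_id):
        """Stand-in for real work; randomly 'fails' the way a flaky node might."""
        if random.random() < 0.2:
            raise RuntimeError(f"lost task {task_id}")
        return task_id * task_id


    def worker(tasks, results):
        random.seed()                      # give each worker its own failure pattern
        while True:
            task_id = tasks.get()
            if task_id is None:            # sentinel: no more work
                return
            try:
                results.put((task_id, compute(task_id)))
            except RuntimeError:
                tasks.put(task_id)         # dynamic recovery: re-queue, don't re-plan


    if __name__ == "__main__":
        tasks, results = mp.Queue(), mp.Queue()
        n_tasks, n_workers = 20, 4

        for t in range(n_tasks):           # the "schedule" is just a bag of tasks
            tasks.put(t)

        procs = [mp.Process(target=worker, args=(tasks, results))
                 for _ in range(n_workers)]
        for p in procs:
            p.start()

        # Collect until every task has finished, no matter which worker ran it
        # or how many retries it took.
        done = sorted(results.get() for _ in range(n_tasks))

        for _ in procs:                    # shut the workers down
            tasks.put(None)
        for p in procs:
            p.join()

        print(done)

A statically scheduled program, by contrast, decides before it starts which node runs which piece of work, so a single lost node takes its share of the work down with it.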
At the end of last month’s article, I promised to mention some real alternatives to the Message Passing Interface (MPI) and even suggest some wild ideas. I plan to keep my promise, but I want to take a slight detour this…
