
Dual-Core Calisthenics

Got performance? A simple test provides a peek into the AMD and Intel dual-core processor designs.

Sharing Numbers

When I use the script, I prefer the NAS Parallel Benchmark Suite (Version 2.3) compiled for a single processor, or in this case, one core. (Download the suite from http://www.nas.nasa.gov/Resources/Software/npb.html.) The NAS suite is a group of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks, which are derived from computational fluid dynamics (CFD) applications, consist of five kernels and three pseudo-applications.
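The wrapper script itself isn't reproduced here, but the following is a minimal sketch of how one might build the eight single-processor benchmarks before running them. The NPB2.3-serial directory name, the make targets, and the CLASS=A argument are assumptions about the suite's layout rather than details taken from this article; adjust them to match your unpacked tree and your config/make.def settings.

```python
#!/usr/bin/env python
"""Build the eight serial NAS Parallel Benchmarks at class A (a sketch).

Assumes the NPB 2.3 serial tree is unpacked in NPB_DIR and that
config/make.def already points at the local compilers (gfortran with
GNU 4.x, as noted in the article).
"""
import subprocess

NPB_DIR = "NPB2.3-serial"                 # assumed unpack location
KERNELS = ["cg", "ep", "ft", "is", "mg"]  # the five kernels
PSEUDO_APPS = ["bt", "lu", "sp"]          # the three pseudo-applications

for bench in KERNELS + PSEUDO_APPS:
    # e.g. "make cg CLASS=A" should place a class-A binary under bin/
    subprocess.run(["make", bench, "CLASS=A"], cwd=NPB_DIR, check=True)
```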

Since what the programs actually do isn’t as important as measuring their relative performance, I won’t go into what each program is designed to accomplish. I can say, however, that for the most part they reflect real computations that one might do on a cluster. If you’re curious as to their purpose, consult the previously mentioned Web site.

One important aspect of the benchmarks is that they're self-checking, a step that many benchmarking efforts skip. All benchmark results should be checked for correctness: fast is one thing, and the right answer is another. In HPC, you need both.
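To make that point concrete, here is a small sketch of how a run script might capture both the timing and the self-check result from a benchmark's output. The bin/cg.A binary name and the "Time in seconds" and "Verification" report lines are assumptions about the NPB output format, not something quoted from the article; adapt the patterns to what your build actually prints.

```python
#!/usr/bin/env python
"""Run one serial NPB binary and report both speed and correctness (a sketch)."""
import re
import subprocess

def run_benchmark(path):
    """Return (seconds, verified) parsed from an NPB report (assumed format)."""
    out = subprocess.run([path], capture_output=True, text=True, check=True).stdout
    time_match = re.search(r"Time in seconds\s*=\s*([\d.]+)", out)
    # Treat anything other than a SUCCESSFUL verification line as a failure.
    verified = re.search(r"Verification\s*=?\s*SUCCESSFUL", out, re.IGNORECASE) is not None
    if time_match is None:
        raise RuntimeError("no timing line found in the benchmark output")
    return float(time_match.group(1)), verified

if __name__ == "__main__":
    seconds, ok = run_benchmark("NPB2.3-serial/bin/cg.A")  # assumed binary name
    print(f"cg.A: {seconds:.2f} s, {'correct' if ok else 'WRONG ANSWER'}")
```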

As mentioned, I used a Pentium D 940 running at 3.2 GHz on an Intel SE7230NH1 motherboard (based on the Intel E7230 chipset). The software environment was Fedora Core 4 with the GNU 4.0.2 compilers. Note that GNU 4.x uses gfortran instead of g77.

The NAS benchmarks are available in various sizes. For these tests, I used the “A” class, which ran for about 30 minutes using the script. The results are shown in…
