
Database Storage Performance Testing in a Hurry

Sometimes you need answers to important questions quickly. When benchmarking new disk and disk-like subsystems, how can you get relevant and useful info without a lot of time?

I recently had the opportunity to do some performance testing of new hardware with the goal of helping to identify and ultimately recommend a new platform that will replace some aging infrastructure. Like most geeks, I usually salivate at the opportunity to put my hands (at least figuratively speaking) on some of the latest and greatest hardware to see what I can make it do. There are so many things you can play with to try and find its limits.

Unfortunately, this had to be a rush job. Due to a few logistical problems, fires that needed fighting, and other priorities, I ended up with little time, so I needed to focus on getting the most useful and relevant answers. More work and less play. That meant not worrying about getting exact numbers for everything; I was mainly interested in numbers that were close enough but that I could still have some confidence in. Being able to say that moving from X to Y will give us a 4X boost in performance is sufficient for my purposes.

This week, I’ll discuss the tests I was able to run, how I ran them, and why.

Hardware

First off, you’re probably wondering what the hardware in question was. Simply put, I was trying to find the expected performance and capacity increases that would result from moving from an existing system built around an Adaptec-powered RAID-10 array of 15,000 RPM SAS drives to any of the following:

  • a SAN volume backed by a similar number of disks
  • a single SLC SSD
  • a Fusion-io card
  • an array of SSDs

Sadly, the last configuration was not available when I performed this testing. It is planned for the near future.

As luck would have it, we had one “classic” server available with the local RAID-10 array and a second, newer server that had the SSD, the Fusion-io card, and the Fibre Channel interface to the SAN. That allowed me to make “old server” vs. “new server” comparisons without lots of component swapping or too many other variables involved.

What Matters

Having little time and lots to do, I decided that I was mainly interested in the “painful” cases we see. For example, it’s interesting to note that the Fusion-io card has the advantage of being plugged into a PCI express slot and is not subject to the limitations of more traditional disk I/O channels. That means you can move a lot of data VERY quickly. But that’s not a concern in most of our cases.

The servers generally have a fair amount of RAM (32GB is common) and most of that is used for InnoDB’s buffer pool. What does hurt are queries that cause InnoDB to hit the disks to fetch pages that haven’t been cached. These are generally small random reads that are scattered all over the disk. As I’ve discussed in previous articles, this is a latency (seek time) issue, not a throughput issue.
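If you want to confirm that uncached reads really are the pain point, InnoDB’s status counters are a quick sanity check. This is just the kind of check I mean (the status variables are standard MySQL ones; what counts as a “bad” ratio depends on your workload):

$ mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'"

Innodb_buffer_pool_read_requests counts logical page reads, while Innodb_buffer_pool_reads counts the subset that had to come from disk. When the second number climbs steadily relative to the first, you are paying those random-read latencies.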

But the Fusion-io card is SLC memory based, so the “seek” times should be VERY small. The same is true for the regular SSD.

So I wanted to find out how much smaller those seek times were–or at least what the real impact was. But I also wanted to construct a few simple but real-world “worst case” tests that I could use to judge how well each storage system was able to service the worst of the behavior we see on a regular basis.

The Custom Tests

To get some basic answers, I constructed two scenarios.

First, I wanted to see how long it would take a database slave to catch up once it had fallen behind in replication. Second, I wanted to see how long it would take to perform a full scan of a reasonably large table that had not yet been cached (much). In both cases, I wanted to rule out any operating system caching effects, so that meant rebooting both servers before starting the tests.

For the first scenario, that meant doing the following:

  • make sure both servers are slaves of the same master, re-cloning if needed
  • let them catch up, watching SHOW SLAVE STATUS output in each
  • reboot each server with mysql set not to start
  • wait some amount of time (30-60 minutes)
  • start mysql on both hosts, note start time
  • wait for catch-up on both systems, again watching SHOW SLAVE STATUS
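In practice I scripted the “watch SHOW SLAVE STATUS” part so each host logged its own catch-up time. A minimal sketch, assuming the local mysql client can connect without prompting for a password, looks something like this:

#!/bin/sh
# Poll the slave until it reports zero lag, printing a timestamp each pass.
while :; do
    lag=$(mysql -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')
    echo "$(date +%T) seconds behind master: $lag"
    [ "$lag" = "0" ] && break
    sleep 10
done
echo "caught up at $(date +%T)"

Start it right after starting mysql on each host; the gap between the first and last timestamps is that I/O subsystem’s catch-up time.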

Since I was always comparing against the existing server as a baseline, that meant running Server A RAID vs. Server B on the SAN, then Server A RAID vs. Server B on the Fusion-io card, then Server A RAID vs. Server B on the SSD, and so on. I could not just reuse a single Server A RAID run, since the data is different every time.

Similarly for the full scan test, I took advantage of the fact that a simple SELECT COUNT(*) FROM mytable on an InnoDB table involves reading in a lot of pages from disk. Again, performed on a freshly rebooted box (this time with MySQL running but not slaving), we get a good sense of how much better the performance is (or is not) using various I/O subsystems.
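Timing that scan is as simple as wrapping the query in time; the database name below is a placeholder, and the table is the same sort of large, mostly uncached InnoDB table described above:

$ time mysql mydb -e 'SELECT COUNT(*) FROM mytable'

On a cold server, nearly all of the elapsed time is the storage serving up InnoDB pages, which is exactly the behavior I wanted to compare.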

The Usual Suspects

In addition to the application specific tests, I wanted to get some standardized numbers as well. For those I turned to two classic tools: bonnie++ and iozone. Both are very well proven at finding oddities in new hardware and filesystems by testing a wide variety of access patterns, file sizes, read/write ratios, and so on.
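For reference, an iozone run along the lines below exercises sequential and random read/write patterns against the filesystem under test. The path is a placeholder, and the exact flags matter less than keeping them identical from one device to the next:

$ iozone -a -i 0 -i 1 -i 2 -g 100g -f /mnt/foo/iozone.tmp

Here -a runs the automatic series of record and file sizes up to the -g maximum, and the -i options select the write/rewrite, read/reread, and random read/write tests.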

I was a little surprised to find that bonnie++ had trouble reporting some performance numbers. I’d run a 100GB test like this:

$ /usr/sbin/bonnie++ -d /mnt/foo -f -s 100g

But the resulting output would have strings of ++++ where I expected numbers–important numbers! After a bit of reading (notably the man page), I found the answer:

For every test two numbers are reported, the amount of work done (higher numbers are better) and the percentage of CPU time taken to perform the work (lower numbers are better). If a test completes in less than 500ms then the output will be displayed as “++++”. This is because such a test result can’t be calculated accurately due to rounding errors and I would rather display no result than a wrong result.

Bingo. It’s good to know that some tests in particular run “very fast,” but it’s still better to be able to quantify things. How fast?!

So I eventually settled on timing the bonnie++ runs. As long as I didn’t vary the parameters (just the mount point or directory where I tell it to work), I’m reasonably confident that the elapsed time to run the test will help point me at the interesting differences.
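Concretely, that just meant wrapping the identical bonnie++ command in time and changing only the target directory. The mount points below are placeholders for the real ones:

$ time /usr/sbin/bonnie++ -d /mnt/raid10 -f -s 100g
$ time /usr/sbin/bonnie++ -d /mnt/ssd -f -s 100g

The wall-clock difference between otherwise identical runs is the number I actually compared.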

Results

While I’m not going to proclaim here that device X is Y times faster than device Z (maybe in the future), I can say that these simple tests gave me enough data, and enough confidence in it, to make statements like that and narrow down the options. I know how much of a boost we can expect to see out of the various options for our I/O bound tasks. And thanks to bonnie++ and iozone, I can roughly cross-check my results and also get some baseline numbers for the throughput of each subsystem.

If I had a lot more time, there’s an almost infinite amount of testing I could have done. But for just a few days’ worth of sporadic testing, I managed to get some meaningful results. What tactics have you used to test new hardware in a hurry? I’d love to hear about them in the comments below.

Comments on "Database Storage Performance Testing in a Hurry"

fbacchella

It’s a pity you forgot to do parallel IO tests. For heavy-duty storage like a SAN, that’s where you expect to see a real difference.
And when you set up a database server, you can expect more from parallel IO than from pure sequential and bandwidth IO.

ksbhaskar

In a matter of minutes, io_thrash (http://sourceforge.net/projects/fis-gtm/files/io_thrash/20081110/io_thrash_20081110.tgz/download), which is part of the GT.M project at SourceForge (http://sourceforge.net/projects/fis-gtm/), can generate an intense multi-process parallel IO load that is representative of a real database engine. We use it for quick characterizations of systems when we don’t have the time or resources for thorough benchmarking.

Disclosure: I manage GT.M.

jzawodn

fbacchella: I’m not sure what you mean. I happen to know how many spindles were allocated to this particular SAN volume and the types of disks behind them.

ksbhaskar: thanks for the tip!
