Previously, we examined the impact of journal size on metadata performance, as measured by fdtree, with the journal on a separate disk. In this follow-up we repeat the same tests but use a ramdisk for the journal, which should yield the best possible performance. Or does it?
In a previous article we took a look at metadata performance and found that performance was almost the same regardless of whether the journal was on a separate disk or a ramdisk. However, the journal was very small compared to the file system (0.0032% of its size), which perhaps limited performance.
The article linked above was all about metadata performance using Ext4 with a journal device on a separate disk. The key variable in the quest for ultimate performance, the journal size, was varied for a fixed file system size. Metadata performance was measured using the fdtree benchmark which has been used in a number of our previous metadata-focused articles.
In this article the journal is placed on a ramdisk and its size is varied to understand the impact on metadata performance as measured by the fdtree benchmark. The goal of placing the journal on a ramdisk is to test performance when the fastest possible device is used for the journal. However, be warned that the journal will not survive a reboot, so the file system cannot be mounted normally afterward. The only practical way a ramdisk-based journal can survive a reboot is to sync the file system, unmount it, and then shut down the system. When the system is rebooted, the ramdisk-based journal must be recreated before the file system is remounted. This is likely to be a manual process, but it could be automated in a script.
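A sketch of that shutdown-and-reboot procedure might look like the following. The device names (/dev/sdb1 for the file system, /dev/ram0 for the ramdisk) and the mount point /mnt/test are assumptions for illustration, not the exact commands from the testing:

```shell
# --- Before shutdown: flush everything and detach cleanly (illustrative) ---
sync                                    # flush dirty data and the journal
umount /mnt/test                        # cleanly detach the file system
shutdown -h now

# --- After reboot: recreate the ramdisk journal, then remount ---
mke2fs -O journal_dev /dev/ram0         # rebuild the external journal on the ramdisk
tune2fs -O ^has_journal /dev/sdb1       # drop the stale journal reference
tune2fs -j -J device=/dev/ram0 /dev/sdb1  # attach the new ramdisk journal
mount -t ext4 /dev/sdb1 /mnt/test
```

These steps only apply after a clean unmount; an unclean shutdown loses the journal contents and the file system would need a full fsck instead.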
Testing Discussion
As with the previous article, four journal sizes are tested to understand the impact of journal size on metadata performance. The four journal sizes are:
- 16MB (0.0032% of file system size)
- 64MB (0.0128% of file system size)
- 256MB (0.0512% of file system size)
- 1GB (0.2% of file system size)
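Those percentages correspond to a roughly 500,000MB (~500GB) file system, which matches the size of the test drives described below; the 500GB figure here is an assumption read off the drive model, and a quick check reproduces the numbers above:

```shell
# Journal size as a percentage of an assumed 500,000 MB file system.
for j in 16 64 256 1024; do
  awk -v j="$j" 'BEGIN { printf "%dMB -> %.4f%%\n", j, j / 500000 * 100 }'
done
```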
A ramdisk of the appropriate size is created and then used as the journal device for the ext4 file system.
CentOS 5.3 was used as the OS for the testing. By default, the maximum ramdisk size is 16MB. To increase the ramdisk size, a parameter is passed to the kernel during boot. In /etc/grub.conf, the parameter is added to the kernel boot line so it looks like the following:
kernel /vmlinuz-2.6.30 ... ramdisk_size=64000
The parameter was changed to the appropriate size and the system was rebooted. Then the device, /dev/ram0, was used as the journal device.
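Note that ramdisk_size= is given in 1 KiB blocks, so the boot line above (64000) is a round decimal-ish value; computing exact binary sizes for the four journals is a one-liner (a sketch, assuming 1MB = 1024 KiB):

```shell
# ramdisk_size= takes a count of 1 KiB blocks, so multiply megabytes by 1024.
for mb in 16 64 256 1024; do
  echo "${mb}MB journal -> ramdisk_size=$(( mb * 1024 ))"
done
```

After rebooting, `blockdev --getsize64 /dev/ram0` can be used to confirm the ramdisk actually came up at the requested size.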
The details of the metadata testing can be found in the previous article and won’t be repeated here except to say that four scenarios were tested.
Each test was run 10 times for the four journal sizes. The test system used for these tests was a stock CentOS 5.3 distribution but with a 2.6.30 kernel. In addition, e2fsprogs was upgraded to 1.41.9. The tests were run on the following system:
- Gigabyte GA-MA78GM-US2H motherboard
- An AMD Phenom II X4 920 CPU
- 8GB of memory (DDR2-800)
- Linux 2.6.30 kernel
- The OS and boot drive are on an IBM DTLA-307020 (20GB drive at Ultra ATA/100)
- /home is on a Seagate ST1360827AS drive
- There are two drives for testing. Both are Seagate ST3500641AS-RK drives with a 16MB cache each. They show up as /dev/sdb and /dev/sdc.
The first Seagate drive, /dev/sdb, was used exclusively for the file system in these tests.
The details of creating an ext4 file system with a journal on a separate device are contained in a previous article. The basic steps are to first create the file system with the journal located on the same drive as the file system. Second, a new journal is created on a ramdisk (/dev/ram0). Finally, the file system is told that it no longer has a journal, and then that its journal is on the specific device (the ramdisk).
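As a sketch, those three steps come down to the following commands, assuming /dev/sdb1 holds the file system and /dev/ram0 is the ramdisk (both device names, and the use of a single partition, are assumptions for illustration):

```shell
# 1. Create the ext4 file system with its journal in the usual (internal) place.
mke2fs -t ext4 /dev/sdb1

# 2. Create a journal device on the ramdisk.
mke2fs -O journal_dev /dev/ram0

# 3. Tell the file system it no longer has a journal, then attach
#    the external one on the ramdisk.
tune2fs -O ^has_journal /dev/sdb1
tune2fs -j -J device=/dev/ram0 /dev/sdb1
```

With e2fsprogs 1.41.9 (the version used in these tests), all of these options are available; `mke2fs -O journal_dev` is what marks /dev/ram0 as a dedicated external journal.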
Benchmark Results
This section presents the results for the four scenarios. The results are plotted along with the error bars to allow easy comparison. However, the full results are available in tabular form at the end of the article after the summary.
The first test is for the “small file, shallow structure” scenario for the four journal sizes. The “file create” test ran for over 300 seconds and the “file remove” test ran for a bit over a minute. Consequently, these tests are considered worthwhile for examination. Figure 1 below plots the average file create performance in KiB per second for the four journal sizes. Also note that error bars representing the standard deviation are shown.

Figure 1: Average File Create Performance (KiB per second) for the Small File, Shallow Structure Test for the Four Journal Sizes
Figure 2 below plots the average “File Remove” results in “File Removes per second” for the four journal sizes for the small file, shallow structure scenario. Again, there are error bars representing the standard deviation in the plot as well.

Figure 2: Average File Remove Performance (File Removes per second) for the Small File, Shallow Structure Test for the Four Journal Sizes
The next test uses small files but with a deep directory structure. For this scenario all four tests had run times long enough for consideration. The “Directory Create” test ran from an average of 120.1 seconds to 312.4 seconds depending upon the journal size. The “File Create” test ran an average of 328.5 seconds to 566.6 seconds. The “File Remove” test ran an average of 136.6 seconds to 333 seconds. And finally the “Directory Remove” test ran from an average of 52.5 seconds to 253.0 seconds.
Figure 3 below plots the average “Directory Create” results in “creates per second” for the four journal sizes for the small file, deep structure scenario. Again, there are error bars representing the standard deviation in the plot as well.

Figure 3: Average Directory Create Performance (creates per second) for the Small File, Deep Structure Test for the Four Journal Sizes
Figure 4 below plots the average “File Create” results in KiB per second for the four journal sizes for the small file, deep structure scenario. Again, there are error bars representing the standard deviation in the plot as well.

Figure 4: Average File Create Performance (KiB per second) for the Small File, Deep Structure Test for the Four Journal Sizes
Figure 5 below plots the average “File Remove” results in removes per second for the four journal sizes for the small file, deep structure test.

Figure 5: Average File Remove Performance (removes per second) for the Small File, Deep Structure Test for the Four Journal Sizes
Figure 6 below plots the average “Directory Remove” results in removes per second for the four journal sizes for the small file, deep structure test.

Figure 6: Average Directory Remove Performance (removes per second) for the Small File, Deep Structure Test for the Four Journal Sizes
The next test was the medium files, shallow directory structure scenario. The only result that had a meaningful run time was the file create test (154.2 seconds to 154.4 seconds). Figure 7 below plots the file create performance in KiB per second for the four journal sizes. Also note that the error bars are plotted as well.

Figure 7: Average File Create Performance (KiB per second) for the Medium File, Shallow Structure Test for the Four Journal Sizes
The final test was the medium files, deep directory structure scenario. The only result that had meaningful times was the file create test (220.4 seconds to 225.9 seconds). Figure 8 below plots the file create performance in KiB per second for the four journal sizes. Also note that the error bars are plotted as well.

Figure 8: Average File Create Performance (KiB per second) for the Medium File, Deep Structure Test for the Four Journal Sizes
Benchmark Observations
The benchmark results are very interesting since we actually see some variation in the results, whereas in the first article we did not see much variation. A quick summary of the results is given below.
- Small files, shallow directory structure:
- From Figure 1, increasing the journal size to 256MB increased the average file creation by 6% (from 3,837.2 KiB/s to 4,080.1 KiB/s). Increasing the journal size to 1GB did not increase the file create performance by any measurable amount.
- From Figure 2, the file remove performance increased by 25% when the journal size went from 16MB to 256MB. But increasing the journal from 256MB to 1GB didn’t make an appreciable change in performance.
- Small files, deep directory structure:
- The directory creation performance increased dramatically as the journal size increased (Figure 3). The average performance increased by 163% (from 282.8 creates/s at 16MB to 743.6 creates/s at 1GB).
- The file creation performance also increased fairly remarkably as seen in Figure 4. It increased by 58% (2,267.8 KiB/s at 16MB to 3,572.8 KiB/s at 1GB).
- The file removal rate (removes per second) increased by 144% in going from a 16MB journal to a 1GB journal (1,063.6 removes/s to 2,599.6 removes/s). See Figure 5 for the image.
- The most remarkable change in performance was for the directory removal result as seen in Figure 6. The directory removal rate (removes per second) increased by 410% in going from a 16MB journal to a 1GB journal (504.46 removes/s to 1,849.4 removes/s). However, the variation in results at 1GB is very large.
- Medium files, shallow directory structure
- The file create performance actually decreased slightly going from 16MB to 1GB (Figure 7). The decrease in average performance was only about 2% but the results are within the error band of the others. The reason for the decrease is unknown at this time.
- Medium Files, deep directory structure
- The file creation performance (in KiB/s) only really changed in going from a 16MB journal size to a 64MB journal size (see Figure 8). The performance with a 64MB journal size is about 2% better than with 16MB (73,971.3 KiB/s vs. 72,495 KiB/s).
- After 64MB the file creation performance did not change appreciably and the average performance at 1GB actually decreased slightly but is within the error bounds of the 64MB and 256MB journal sizes.
Summary
This is the next article in a series examining the impact of journal size and device on ext4 performance. The previous article examined the performance when the journal is on a separate disk. This article examines the performance when the journal is placed on a ramdisk. It is expected that the ramdisk based journal will produce the best metadata performance results.
As with the previous test, the metadata benchmark fdtree was used for testing the performance of four journal sizes: 16MB, 64MB, 256MB, and 1GB. For all four sizes, the journal was placed on a ramdisk. As with previous metadata testing, four scenarios were tested: (1) small files (4 KiB) and a shallow directory structure, (2) small files (4 KiB) and a deeper directory structure, (3) medium sized files (4 MiB) and a shallow directory structure, and (4) medium sized files (4 MiB) and a deeper directory structure.
The results are as interesting as they were for the journal device on a second disk. Increasing the journal size generally results in an increase in the file create performance for all scenarios as it did with the disk based journal. The performance increased from a little over 2% to over 58% depending upon the scenario. However for the medium file (4 MiB), shallow directory structure scenario, the average performance actually decreased slightly.
The file remove performance was greatly affected by journal size. Increasing the journal size increased the file removal performance by 25% for the small files, shallow directory structure scenario and 144% for the small files, deep directory structure scenario.
The most remarkable change in performance was for the directory remove test for the small files, deep directory structure scenario where the performance increased by 410% when going from a 16MB journal to a 1GB journal.
Will Batman escape Catwoman’s clutches? Will Gotham City ever switch to Linux from the evil Joker’s Windows domination? Will Jeff ever finish testing ext4 journal options? Stay tuned to the next article to find out.
Next: Table of Results
Comments on "Size Can Matter: Ramdisk Journal Metadata Performance – Part 2"