Disk Type Benchmarks

I was curious about doing some comparative benchmarks between things like SSD, SAS, and SATA drives and arrays.

The Players
The Nexsans are connected via 8Gbit FC, but the path to the storage is 4Gbit, so everything is limited to 4Gbit speeds overall. The size of the exposed LUNs from the Nexsan was 300G, and the filesystem used throughout all tests was ext2 with the noatime mount option.

Phoronix Test Suite Disk Test

The Phoronix Test Suite is supposed to be a "real world" set of tests. I found many of the results to simply be "wrong"... but only if you consider "real world" to mean a "good" disk benchmark... and it's not, since in the "real world" you'll be using things like the OS system cache. So... here they are, just because people like to compare. I didn't find much competition from disk runs that others did, though.

Update 2010-07-24: Same set as before, but with our 10Gbit NFS tests added: http://global.phoronix-test-suite.com/?k=profile&u=cjcox-17384-3855-8027

Bonnie++

Bonnie++ gets a bad rap. It is NOT deserved. Bonnie++ is probably one of the best, if not the best and most consistent, benchmarks I've used to get a feel for performance. While it looks like it is focused on sequential reads and writes only, do NOT discount the Rewrite column, which I find to be very indicative of expected performance under mixed loads. With that said, bonnie++ is NOT the same as bonnie. The original bonnie leans on cache WAY too much in most tests.

Note: The sas-raid array was tuned for random IO instead of sequential or mixed IO (the other Nexsans were tuned for mixed IO), so the values for sas-raid might be lower than they should be. See the updated note below for IOPS, where I changed sata-raid to random IO.

Update: I was a bit disappointed by the ssd-raid runtimes (misnamed raid-ssd), so I reran that one... some values went down, but the Seq Block In number went up to 466542, which is much better. However, RW (ReadWrite) was still VERY low, and that's usually indicative of typical performance. Still, apart from faster seeks and IOPS, the SATA and SAS raids performed well against it.
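For reference, a bonnie++ invocation against one of these arrays might look like the following. This is a sketch only, not the exact command used for the numbers here; the mount point and file size are assumptions.

```shell
# Hedged example: /mnt/bench and 32G are assumptions, not the
# values used for the runs above.
# -d   directory on the ext2 (noatime) filesystem under test
# -s   total file size; should be at least 2x RAM so the OS
#      cache can't absorb the whole working set
# -n 0 skip the small-file create/stat/delete tests
# -u   user to run as when invoked as root
bonnie++ -d /mnt/bench -s 32G -n 0 -u root
```

The Rewrite figure in the resulting output is the column called out above as a good proxy for mixed-load performance.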
Also, note we used RAID0 instead of RAID0+1 on the SSD array.

IOPS using fio (WARNING: logarithmic scale ahead!)

Wow... the fusion-io board screams IOPS. Be warned that the graph is on a logarithmic scale (just so the other values will show up). Tests used:

fio --filename=/dev/devicename --direct=1 --rw=randread --bs=4k --size=5G --numjobs=64 --runtime=10 --group_reporting --name=file1
fio --filename=/dev/devicename --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=64 --runtime=10 --group_reporting --name=file1

Note: Not sure what happened on the sata-raid read IOPS. In the past I know I've gotten numbers like 1700 or so... so I'm just going to chalk the value up as "error".

Update: I reran the test several times on the sata-raid for read IOPS. The value never got higher than 888. So the result is what should be expected from the device.

Update 2010-07-04: It was still bugging me, the sata-raid results, so I reconfigured the array to optimize for random I/O and reran: 1930 read IOPS and 1593 write IOPS.

Note: Writes to the Nexsan CAN be higher just because of the onboard 2G cache on the units themselves.
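To put the IOPS figures in perspective: at the 4k block size used in the fio commands above, even 1930 read IOPS (the retuned sata-raid result) corresponds to only a few MiB/s of bandwidth, which is why random and sequential results live on such different scales and why the graph needs a log axis. A quick sketch of the conversion:

```shell
# Convert 4 KiB random-read IOPS into approximate bandwidth.
# 1930 is the retuned sata-raid read IOPS reported above.
iops=1930
bs=4096                              # bytes per IO (fio --bs=4k)
bytes_per_sec=$(( iops * bs ))
echo "$(( bytes_per_sec / 1024 / 1024 )) MiB/s"   # ~7 MiB/s
```

Compare that with the 466542 KB/s Seq Block In number from the bonnie++ rerun above: roughly two orders of magnitude apart on the same device class.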