High-End LSI controllers and the effect of large caches on I/O

On all of my PowerEdge machines, I use a 6Gbps 2TB Samsung 870 EVO as the boot drive, configured as a single-drive RAID-0 LUN. Here's an example from my T140 (sporting an H740P PERC):


# megaclisas-status

-- Controller information --
-- ID | H/W Model | RAM | Temp | BBU | Firmware
c0 | PERC H740P Adapter | 8192MB | 54C | Good | 50.5.1-2818


-- Array information --
-- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS Path | CacheCade |InProgress
c0u0 | RAID-0 | 1818G | 512KB | ADRA,WB | Enabled | Optimal | /dev/sda | None |None
[....]

-- Disk information --
-- ID | Type | Drive Model | Size | Status | Speed | Temp | Slot ID | LSI ID
c0u0p0 | SSD | S620NG0R303366N Samsung SSD 870 EVO 2TB SVT02B6Q | 1.818TB | Online, Spun Up | 6.0Gb/s | 38C | [:0] | 0
[....]

These drives typically max out unbuffered I/O at around 550 MB/s, yet on a freshly booted, idle machine I was getting higher readings by running hdparm repeatedly.


# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 19358 MB in 2.00 seconds = 9696.52 MB/sec
Timing buffered disk reads: 2894 MB in 3.00 seconds = 963.91 MB/sec

# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 18772 MB in 2.00 seconds = 9403.51 MB/sec
Timing buffered disk reads: 1588 MB in 3.00 seconds = 529.11 MB/sec

# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 18702 MB in 2.00 seconds = 9365.57 MB/sec
Timing buffered disk reads: 2950 MB in 3.00 seconds = 983.32 MB/sec

# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 19044 MB in 2.00 seconds = 9536.41 MB/sec
Timing buffered disk reads: 4130 MB in 3.00 seconds = 1376.28 MB/sec

# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 18786 MB in 2.00 seconds = 9408.40 MB/sec
Timing buffered disk reads: 5140 MB in 3.00 seconds = 1712.92 MB/sec

# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 19914 MB in 2.00 seconds = 9974.09 MB/sec
Timing buffered disk reads: 1586 MB in 3.00 seconds = 528.66 MB/sec
#

Where did the higher numbers come from? hdparm's timed disk reads are performed without any prior caching and bypass the VFS cache, so the extra throughput isn't coming from the host.
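The spread across runs is the interesting part. Here's a quick awk one-liner summarizing the six buffered-read figures from the runs above (the numbers are copied straight from the hdparm output; nothing here touches the disk):

```shell
# Mean/min/max of the buffered disk read throughput (MB/sec)
# from the six hdparm runs shown above.
printf '%s\n' 963.91 529.11 983.32 1376.28 1712.92 528.66 |
awk '{ sum += $1; if ($1 > max || NR == 1) max = $1; if ($1 < min || NR == 1) min = $1 }
     END { printf "mean=%.2f min=%.2f max=%.2f MB/sec\n", sum/NR, min, max }'
```

That prints mean=1015.70 min=528.66 max=1712.92 MB/sec: a better-than-3x swing between the slowest and fastest run on an otherwise idle box, with the low end sitting right at the drive's native ~550 MB/s.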

I believe that this is the PERC Controller Cache at work.

The machine used in this test run has a PERC H740P with an 8GB cache.

With unbuffered I/O I have never seen a Samsung 870 EVO sustain anything above 550 MB/sec, so here it's the controller's large cache doing its job.

Such things are why I tend to favor single-drive RAID-0 LUNs, even for ZFS or Ceph.

Sure, 8GB of cache is tiny compared to the size of modern disks, but for I/O hotspots a beefy cache can help.
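The ADRA,WB flags in the array listing above are the cache policy at work: Adaptive Read Ahead plus Write Back. If you want to confirm the live policy on a MegaRAID-based PERC, MegaCli can report it (the binary may be installed as MegaCli64, MegaCli, or megacli depending on your distro; Dell's newer perccli tool has an equivalent query):

```shell
# Show logical drive properties, including the current and default
# cache policy, for all LDs on all adapters.
MegaCli64 -LDInfo -LAll -aAll | grep -i 'cache policy'
```

Note that Write Back only stays active while the controller trusts its BBU, which is why the BBU column in the megaclisas-status output matters.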

