Initial Results

With the first run completed, let's take a look at the results. First of all, disk read and write performance. In the last post, 'dd' reported write performance of 274MB/s and read performance of 287MB/s. Whilst these figures are nowhere near what the SSD is capable of, they are right at the limit of the SATA2 controller it is connected to. SATA2 has a theoretical limit of 300MB/s, but that bandwidth has to carry command and framing overhead as well as data, so it looks like we are running the link at its practical limit. Later in the testing we will move the SSD to another server with a SATA3 controller and check what performance we get then. We can verify the link loading with 'iostat', a great utility for seeing what is going on with Linux disk I/O:

# iostat -ctx 1
...
Device:     r/s     w/s     rsec/s     wsec/s  avgrq-sz   %util
sdb        0.00  524.00       0.00  536576.00   1024.00  100.00
...
sdb      548.00    0.00  561152.00       0.00   1024.00  100.00

In the first line of stats, we see that utilisation is 100% - the link is fully loaded, as we thought. We can also calculate the rate at which data is being written: 524 (writes per second) * 1024 (average request size in sectors) * 512 (bytes per sector) = 274.7MB/s - just as 'dd' reported. The same holds for the second line of stats, which covers the read test: 548 * 1024 * 512 = 287.3MB/s, again matching the 'dd' figure.
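For anyone wanting to reproduce these figures, here is a minimal sketch of the kind of commands involved - the exact 'dd' invocations from the last post may differ, and /mnt/ssd/testfile is just a placeholder path. The 'oflag=direct' and 'iflag=direct' options bypass the page cache so that 'dd' measures the disk rather than RAM, and the final line re-checks the arithmetic above with 'bc':

# dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=8192 oflag=direct
# dd if=/mnt/ssd/testfile of=/dev/null bs=1M iflag=direct
# echo "scale=1; 524 * 1024 * 512 / 1000000" | bc
274.7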

Now let's take a look at the SMART attributes from the completion of the first run:

ID# ATTRIBUTE_NAME          VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   100   100   010    Pre-fail  0
  9 Power_On_Hours          099   099   000    Old_age   437
 12 Power_Cycle_Count       099   099   000    Old_age   158
177 Wear_Leveling_Count     099   099   000    Pre-fail  8
179 Used_Rsvd_Blk_Cnt_Tot   100   100   010    Pre-fail  0
181 Program_Fail_Cnt_Total  100   100   010    Old_age   0
182 Erase_Fail_Count_Total  100   100   010    Old_age   0
183 Runtime_Bad_Block       100   100   010    Pre-fail  0
187 Reported_Uncorrect      100   100   000    Old_age   0
190 Airflow_Temperature_Cel 065   060   000    Old_age   35
195 Hardware_ECC_Recovered  200   200   000    Old_age   0
199 UDMA_CRC_Error_Count    100   100   000    Old_age   0
235 Unknown_Attribute       099   099   000    Old_age   26
241 Total_LBAs_Written      099   099   000    Old_age   18504565140
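
This table looks like standard smartmontools output. Assuming the drive is still /dev/sdb, as in the 'iostat' output above, the current attribute values can be dumped at any time with:

# smartctl -A /dev/sdb

(The listing above shows a subset of the columns that 'smartctl' actually prints.)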

Comparing this with the stats from before the run, we can see that the wear levelling count has increased by 1 (more on this in a later post) and that the total LBAs written has gone up significantly. We would expect the increase in LBAs (sectors) written to correspond to the 869GiB written by the test, plus a tiny bit for meta-data updates to the filing system. Doing the calculation, we see that (18504565140 - 16682136274) * 512 / (1024 * 1024 * 1024) = 869.002GiB - spot on!
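The arithmetic is easy to check from the shell, using the raw values of attribute 241 from before and after the run:

# echo "scale=4; (18504565140 - 16682136274) * 512 / (1024^3)" | bc
869.0018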

Now all that remains is to set the test running automatically and wait for something interesting to happen…