After formatting the new eight-disk, 19TiB array, we dumped 14TiB of data onto it in fourteen subdirectories, each containing 1,024 1GiB files filled with pseudo-random data. This brought the array to a little more than 75 percent used.

At this point, we failed one Ironwolf disk out of the array, did a wipefs -a /dev/sdl1 on it to remove the existing RAID headers, then added it back into the now-degraded array. Once the Ironwolf had successfully rebuilt into the array, we failed it out again; this time, we removed it from the system entirely and replaced it with our 4TB Red SMR guinea pig.

First, we fed the entire 4TB Red to the degraded array as a replacement for the missing, partitioned Ironwolf. Then, once it had finished rebuilding, we failed it out again, wipefs -a'd the RAID header from it, and added it back in to rebuild a second time.

This gave us our two test cases: a factory-new Red SMR disk being rebuilt into an array, and a used Red SMR disk with a lot of data already on it being rebuilt into an array. We felt it was important to test both ways, since each case is a common use of NAS disks in the real world. It also seemed likely that an SMR disk full of data might perform worse than a brand-new one, which wouldn't need to read-modify-write as it dealt with already-used zones.

Clearly, the WD Red's firmware was up to the challenge of handling a conventional RAID rebuild, which amounts to an enormous, very-large-block sequential write test. In terms of throughput, the Red is only 16.7 percent slower than its non-SMR Ironwolf competition. Once again, at first glance, the WD Red passes muster.

The next thing to check on was whether the EFAX would handle a heavy version of the typical day-to-day use case of a consumer NAS well: that is, storing large files.
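For readers who want to reproduce the rebuild cycle, the fail/wipe/re-add sequence described above maps onto standard Linux mdadm and wipefs invocations. This is a sketch rather than the exact procedure used here: the array name /dev/md0 is an assumption, since only the member device /dev/sdl1 appears in the text.

```bash
# Sketch of one fail/wipe/re-add cycle, assuming the md array is /dev/md0
# (hypothetical name; only the member device /dev/sdl1 is named above).

# Mark the member as failed, then pull it out of the array
mdadm /dev/md0 --fail /dev/sdl1
mdadm /dev/md0 --remove /dev/sdl1

# Strip the existing RAID superblock so md treats the disk as brand new
wipefs -a /dev/sdl1

# Add it back; the now-degraded array immediately begins rebuilding onto it
mdadm /dev/md0 --add /dev/sdl1

# Monitor rebuild progress and throughput
watch -n 5 cat /proc/mdstat
```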
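Likewise, the 14TiB large-file dataset used to pre-fill the array can be produced with a simple shell loop. A minimal sketch, assuming the array is mounted at /mnt/array (a hypothetical path) and using /dev/urandom as the pseudo-random source; the text doesn't say which tool was actually used.

```bash
# Sketch: 14 subdirectories x 1,024 files x 1GiB each = 14TiB of pseudo-random data.
# /mnt/array is a hypothetical mount point; /dev/urandom is one possible source.
for d in $(seq 1 14); do
    mkdir -p "/mnt/array/dir$d"
    for f in $(seq 1 1024); do
        dd if=/dev/urandom of="/mnt/array/dir$d/file$f.bin" \
           bs=1M count=1024 status=none
    done
done
```

Pseudo-random data is the conservative choice for a test like this, since it is incompressible: no layer of the stack can shrink or deduplicate it, so every byte really lands on the platters.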