We did it 30 years ago. There were four of them, and it boosted the speed 4X over a single one. But even mainframe disk devices were really slow back then.
However, it's expensive. And you can accomplish the same thing with RAID-0 and multiple drives. A properly implemented RAID driver will read all the drives in parallel and assemble the results into a single data stream. Of course, each disk device has to be on its own independent SATA/SCSI/IDE channel.
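The striping idea is simple enough to sketch. This is a hypothetical illustration, not any real driver's code; the names (`STRIPE_SIZE`, `map_read`) and the 64 KB stripe unit are assumptions chosen for the example:

```python
STRIPE_SIZE = 64 * 1024  # 64 KB stripe unit (an assumed, common default)

def map_read(offset, length, num_drives):
    """Split a logical byte range into (drive, drive_offset, size) chunks.

    Consecutive stripes land on consecutive drives, so a large read
    touches every drive and they can all transfer in parallel.
    """
    chunks = []
    end = offset + length
    while offset < end:
        stripe = offset // STRIPE_SIZE
        within = offset % STRIPE_SIZE
        size = min(STRIPE_SIZE - within, end - offset)
        drive = stripe % num_drives
        # Each drive stores every num_drives-th stripe back to back.
        drive_offset = (stripe // num_drives) * STRIPE_SIZE + within
        chunks.append((drive, drive_offset, size))
        offset += size
    return chunks

# A 256 KB read on a 4-drive array hits all four drives once each:
print(map_read(0, 256 * 1024, 4))
```

The payoff is that the sustained transfer rate scales with the number of drives, which is exactly the "4X over a single one" effect mentioned above.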
The truth is that for most people, the seek and rotational latency is what really limits throughput. Seek latency is waiting for the head to move into position, and averages around 9 milliseconds. Rotational latency is waiting for the disk to spin into position, and averages 4.2 milliseconds for a 7200 RPM drive.
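That 4.2 ms figure is easy to verify: on average the platter has to spin half a revolution before the requested sector passes under the head.

```python
# Back-of-the-envelope check of the average rotational latency:
# half a revolution, on average, before the data reaches the head.
rpm = 7200
revolution_ms = 60_000 / rpm        # one full revolution, in milliseconds
avg_rotational_ms = revolution_ms / 2
print(f"{avg_rotational_ms:.2f} ms")  # ~4.17 ms, matching the ~4.2 ms quoted
```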
Compare that to how long it takes to actually move data. Drives with the latest SATA 6.0 Gbit/sec interface can sustain about 150 MByte/sec, but that is limited by the density of the bits on the drive platter.
The cluster size on your Windows NTFS file system is probably no larger than 32 kilobytes, and is likely smaller than that (the default is 4 KB). Both the operating system and the disk drive do some read-ahead caching and will read more than you request at one time, but let's use 32K as an example. At a rate of 150 MByte/sec, a 32 KByte transfer would require only about 200 microseconds, or about one-twentieth of the average rotational latency. You would have to read about 630 KBytes at one time just to split the average rotational latency time and transfer time 50/50, and that's not even considering the seek latency.
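The arithmetic behind that comparison, using the figures above (150 MByte/sec sustained rate, a 32 KB transfer, 4.2 ms average rotational latency):

```python
rate = 150e6          # bytes per second (150 MByte/sec, 1 MByte = 10**6 bytes)
transfer = 32 * 1024  # a 32 KB read
rotational = 4.2e-3   # average rotational latency, in seconds

transfer_time = transfer / rate
print(f"32 KB transfer: {transfer_time * 1e6:.0f} us")            # ~218 us
print(f"latency / transfer: {rotational / transfer_time:.0f}x")   # ~19x

# Read size at which transfer time equals the rotational latency:
breakeven = rate * rotational
print(f"break-even read size: {breakeven / 1e3:.0f} KB")          # 630 KB
```

So for small reads the drive spends roughly 95% of its time waiting and 5% transferring, which is why a faster interface barely helps.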
Are you seeing the problem? Very few people read a huge amount of data at one time, repeatedly, and would benefit from a faster sustained transfer rate. And those that do can build a RAID array far more cheaply.
Small random reads are much more common, especially on a computer with a virtual memory system. And that's why operating systems cache disk data in unused RAM, and even individual disks read ahead and store data in their own internal cache RAM (typically around 32 MBytes these days). When you request that data, the system hands you the cached copy instead, avoiding the seek and rotational latency delays.
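In spirit, read-ahead caching looks something like the following toy sketch. This is an illustration only, not how any real OS or drive firmware is written; the block size, read-ahead depth, and function name are all assumptions:

```python
import io

BLOCK = 4096      # cache granularity, 4 KB (illustrative)
READ_AHEAD = 8    # extra blocks fetched beyond the one requested

cache = {}

def read_block(disk, n):
    """Return block n, filling the cache with blocks n..n+READ_AHEAD on a miss."""
    if n not in cache:
        disk.seek(n * BLOCK)
        data = disk.read((READ_AHEAD + 1) * BLOCK)  # one seek, many blocks
        for i in range(READ_AHEAD + 1):
            chunk = data[i * BLOCK:(i + 1) * BLOCK]
            if chunk:
                cache[n + i] = chunk
    return cache.get(n, b"")

# Simulate a disk; the second read is a cache hit: no seek, no rotation.
disk = io.BytesIO(bytes(64 * BLOCK))
read_block(disk, 0)   # miss: one physical read fills blocks 0-8
read_block(disk, 1)   # hit: served straight from RAM
```

The bet is that sequential access follows a read, so one physical seek amortizes over many requests, and the latency penalty is paid once instead of nine times.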
Well, then it's the seek time that is the problem, and the value of the achievement this article announces is not what they claim. So to reduce seek time, maybe the strategy should be to increase the number of platters, decrease the platter diameter, drastically increase the power of the hard drive motor, and dramatically increase the hard drive's on-board cache... or go the route of the hybrid SSD.
Thanks for the information/lesson. I honestly enjoyed reading it.