This doesn't make sense to me. They're saying a "task" takes normal flash 1000 microseconds -- a millisecond.
But I have consumer USB flash drives that can read and write at tens of MB per second (that's a fraction of a microsecond per byte). And there are enterprise-grade SATA SSDs (solid-state drives) that will do hundreds of MB per second. That's a continuous data rate, not burst.
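Just to put rough numbers on that (the 10 MB/s below is a made-up ballpark for a cheap USB stick, not a spec):

```python
# Back-of-the-envelope: sustained throughput -> time per byte.
throughput_mb_s = 10                       # assumed consumer USB flash drive
bytes_per_second = throughput_mb_s * 1_000_000
us_per_byte = 1_000_000 / bytes_per_second
print(f"{us_per_byte:.2f} us per byte")    # 0.10 -- a fraction of a microsecond

# At that same rate, a 1000 us "task" window covers a decent chunk of data:
bytes_in_1000_us = bytes_per_second * 1000 / 1_000_000
print(f"{bytes_in_1000_us:.0f} bytes in 1000 us")  # 10000
```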
So what is a "task" by IBM's definition? A write of a MB of data?
No idea. I was wondering the same thing.
My WAG is that they're saying a standard task like "flip this array of bits from 0 to 1" is 100 times faster.
The amount of data processed over time is a meaningless metric here, because drives hit those throughput numbers by writing lots of cells in parallel; the latency of a single operation is the thing that's actually being compared.
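A toy sketch of what I mean, with made-up page sizes and channel counts, showing how a drive can post tens of MB/s even though every individual write still takes a millisecond:

```python
# Toy model: throughput scales with parallel channels, latency does not.
page_bytes = 4096          # assumed page size
write_latency_us = 1000    # assumed per-page program latency for flash
channels = 8               # assumed number of dies written in parallel

ops_per_sec_per_channel = 1_000_000 / write_latency_us
throughput_mb_s = channels * ops_per_sec_per_channel * page_bytes / 1_000_000
print(f"~{throughput_mb_s:.0f} MB/s sustained")              # ~33 MB/s
print(f"any single write still takes {write_latency_us} us")
```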
I also think they may be playing a bit fast and loose with "100 times faster." Each individual memory cell is faster, certainly, but because it packs more data into the same amount of space, they're using that as a multiplication factor. I.e., the actual response time may really be only 12.5 times faster, but because they're changing or reading 8 bits (as opposed to 1 in standard memory), it comes out equal to 100 times.
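To be clear, both numbers in that breakdown are my guess, not anything IBM published; the arithmetic just shows how you could get to "100x":

```python
# Speculative breakdown of the "100x" claim:
per_cell_speedup = 12.5    # guessed raw per-cell latency improvement
bits_per_cell = 8          # guessed density advantage over 1 bit per cell
print(per_cell_speedup * bits_per_cell)   # 100.0
```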
The takeaway is that if it can perform as if it's 100 times faster, we're looking at a drastic improvement in instant-on and responsiveness. Hard disks are on the way out now anyway, and everything will be solid state at some point even without this discovery. There are certain things that disks do well, like long-term storage, but in 10+ years I don't believe hard disks will be in common usage except in old equipment.