It depends on the size of the data you’re transmitting. Large files, like Blu-ray or 4K rips, transfer in a fraction of the time they used to. Lots of small files will see less of an impact because each file copy carries the same per-file overhead. But there should be less backup/buffering.
Very good. Basically, when you ask for, say, the first 100 bytes of a file, the operating system looks at the file system cluster size, say 4096 bytes. It brings Cluster 0 of your file, all 4096 bytes, into main memory, then gives you the 100 bytes you asked for. When you ask for bytes 200-300, they're already in main memory, so it doesn't have to fetch them from the drive. When you ask for bytes 4097-4197, it fetches Cluster 1, another 4096 bytes, off the drive.
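A minimal sketch of that cluster-granularity behavior, assuming a 4096-byte cluster size and using a plain dict to stand in for the OS page/buffer cache:

```python
CLUSTER_SIZE = 4096  # assumed cluster size; real file systems vary

def read_bytes(data, start, end, cache):
    """Return data[start:end], fetching whole clusters on a cache miss.

    Also returns the list of cluster numbers that had to be fetched,
    so you can see when the "drive" was actually touched.
    """
    fetched = []
    first = start // CLUSTER_SIZE
    last = (end - 1) // CLUSTER_SIZE
    for n in range(first, last + 1):
        if n not in cache:                      # cache miss: one "drive read"
            off = n * CLUSTER_SIZE
            cache[n] = data[off:off + CLUSTER_SIZE]
            fetched.append(n)
    buf = b"".join(cache[n] for n in range(first, last + 1))
    lo = start - first * CLUSTER_SIZE
    return buf[lo:lo + (end - start)], fetched

data = bytes(10000)
cache = {}
_, hit1 = read_bytes(data, 0, 100, cache)      # fetches Cluster 0
_, hit2 = read_bytes(data, 200, 300, cache)    # already cached: no fetch
_, hit3 = read_bytes(data, 4097, 4197, cache)  # fetches Cluster 1
```

Same sequence as the text above: the second request costs nothing because Cluster 0 is still in memory, and only crossing the 4096-byte boundary forces another fetch.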
Higher-level caching can override this, but this is basically how network I/O works too.
You ask for a file to be transferred over the network, and Cluster 0 gets read into main memory, again say 4096 bytes. The driver then compares this with the MTU size, say 1500, and FRAGMENTS the cluster into three fragments: #1 of 1500 bytes, #2 of 1500 bytes, #3 of 1096 bytes.
On the receiving end, those fragments have to be collected and reassembled from those 3 fragments back into Cluster 0, say of a similar file system, and posted down to the drive or device according to its sizing and requirements.
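A rough round-trip sketch of that fragment/reassemble step, keeping the same numbers as above (4096-byte cluster, 1500-byte MTU) and ignoring header overhead so the arithmetic matches the text:

```python
MTU = 1500           # assumed path MTU, headers ignored for simplicity
CLUSTER_SIZE = 4096  # assumed cluster size

def fragment(cluster, mtu=MTU):
    """Sender side: split one cluster into MTU-sized fragments."""
    return [cluster[i:i + mtu] for i in range(0, len(cluster), mtu)]

def reassemble(fragments):
    """Receiver side: stitch the fragments back into one cluster."""
    return b"".join(fragments)

cluster0 = bytes(range(256)) * 16       # 4096 bytes of test data
frags = fragment(cluster0)
print([len(f) for f in frags])          # three fragments: 1500, 1500, 1096
assert reassemble(frags) == cluster0    # receiver rebuilds the cluster intact
```

Real IP fragmentation carries per-fragment headers and offset fields, so actual payload sizes come out a bit smaller, but the shape of the work is the same.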
Now, network cards have gotten much more sophisticated and smarter, and can offload much of the work the CPU used to have to do.
But you get it, don't you? There's a lot more going on underneath than most folks realize, simply because most devices just can't take that much data all at once and deal with it.
When I compare jumbo frame size with file system cluster size, it explains a lot: a whole 4096-byte cluster fits inside a single jumbo frame, so no fragmenting is needed.
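The arithmetic behind that comparison, assuming the standard 1500-byte Ethernet MTU, a common 9000-byte jumbo frame size, and the 4096-byte cluster from above:

```python
import math

CLUSTER_SIZE = 4096  # assumed cluster size, as in the examples above

def frames_per_cluster(mtu, cluster=CLUSTER_SIZE):
    """How many frames does one cluster need at a given MTU? (headers ignored)"""
    return math.ceil(cluster / mtu)

print(frames_per_cluster(1500))  # standard MTU: 3 frames, plus reassembly work
print(frames_per_cluster(9000))  # jumbo frame: the whole cluster fits in 1
```

Three frames per cluster versus one is exactly the per-cluster overhead that jumbo frames cut out.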