I think this quote cuts through the hype:
“In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.”
Loss-free scenarios are not rare. They are entirely predictable; they just require higher signal strength.
Error correction codes are already part of the data link layer.
What they imply (an order-of-magnitude increase) is a violation of Nyquist’s Law.
What they are doing will not solve plain old congestion problems.
I suspect they are using normal packet transmission overhead in a different way, one that folds in error-correction elements. I could see some level of improvement being possible, mainly by managing flow control differently.
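To make that concrete, here is a minimal sketch (my own illustration, not anything from the article) of folding an error-correction element into the normal packet stream: one XOR parity packet per small group of data packets, which lets the receiver repair any single lost packet without a retransmission or a flow-control stall. It assumes equal-length packets and at most one loss per group.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Assumes equal-length packets; a real scheme would pad shorter ones.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group: list) -> bytes:
    """One XOR parity packet covering a small group of data packets."""
    return reduce(xor_bytes, group)

def recover_single_loss(received: list, parity: bytes) -> list:
    """received holds the group with exactly one entry set to None (lost).
    XOR of the surviving packets and the parity reproduces the missing one,
    so no retransmission (and no flow-control stall) is needed."""
    missing = received.index(None)
    survivors = [p for p in received if p is not None]
    repaired = list(received)
    repaired[missing] = reduce(xor_bytes, survivors + [parity])
    return repaired

# Tiny demo: 4 data packets plus 1 parity packet, packet #2 lost in transit.
group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
damaged = [b"pkt0", b"pkt1", None, b"pkt3"]
print(recover_single_loss(damaged, parity))   # -> original group restored
```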
Here is the fundamental theory issue: typical digital networks perform consistently and then “fall off a brick wall.” If you can stave off that wall with a bit of low-overhead error correction, you might be able to measure a significant increase in performance (10x, even) right at the signal brick wall.
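To put rough numbers on that: using the well-known Mathis approximation for steady-state TCP throughput (rate ≈ MSS/RTT × sqrt(3/2)/sqrt(p)), cutting residual loss from 2% to 0.02% with a light FEC overlay is worth roughly 10x in goodput on an otherwise unchanged link. The MSS, RTT, and loss figures below are illustrative assumptions, not measurements from their system.

```python
from math import sqrt

def tcp_throughput_mbps(loss_prob, mss_bytes=1460, rtt_s=0.050):
    """Mathis et al. approximation: rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    rate_bps = (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_prob)
    return rate_bps / 1e6

for label, p in [("2% raw loss (near the wall)", 0.02),
                 ("0.02% residual loss after light FEC", 0.0002)]:
    print(f"{label:38s} ~{tcp_throughput_mbps(p):6.1f} Mbit/s")
```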
This may increase cell coverage a little for a specific link scenario (the fringe), but it will not increase designed capacity or any other meaningful measure of a well-designed network, per their quote at the beginning of this post.
I was thinking of something like a hybrid FEC scheme with an ACK/NACK fallback, or even variable code robustness similar to the latest versions of PACTOR, which estimate the channel S/N and adjust the code accordingly.
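Something along these lines, sketched purely to show the shape of the idea; the code-rate/S/N thresholds are invented for the example and are not PACTOR’s actual adaptation tables.

```python
import random

# Illustrative (code_rate, minimum_snr_db) table; thresholds are made up
# for this sketch, not taken from PACTOR or any standard.
CODE_TABLE = [(7/8, 20.0), (3/4, 14.0), (1/2, 8.0), (1/4, 2.0)]

def pick_code_rate(estimated_snr_db: float) -> float:
    """Use the least-redundant code the estimated channel S/N will support."""
    for rate, min_snr in CODE_TABLE:
        if estimated_snr_db >= min_snr:
            return rate
    return CODE_TABLE[-1][0]          # very poor channel: most robust code

def fec_decode_ok(success_prob: float) -> bool:
    """Stand-in for an actual FEC decode attempt on one block."""
    return random.random() < success_prob

def hybrid_send(estimated_snr_db: float, success_prob: float,
                max_retries: int = 3):
    """FEC carries the load; ACK/NACK retransmission is only the fallback."""
    rate = pick_code_rate(estimated_snr_db)
    for attempt in range(1, max_retries + 1):
        if fec_decode_ok(success_prob):
            return rate, attempt       # delivered on this attempt
    return rate, None                  # NACK budget exhausted, give up

print(hybrid_send(estimated_snr_db=16.0, success_prob=0.9))
```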