I was thinking something like a hybrid FEC scheme with an ACK/NACK fallback, or even variable code robustness similar to the latest versions of PACTOR, which estimate the channel S/N and adjust accordingly.
The data link already does some of this. At the packet level (without changing anything at the network layer), as I'm sure you know, there is some fixed overhead: each packet carries a checksum to determine whether the data was transmitted correctly.
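To make the per-packet overhead concrete, here's a minimal sketch of that check-and-discard behavior. This is just an illustration, not any particular protocol's framing; the function names and the choice of CRC-32 are my own assumptions.

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    # Append a 4-byte CRC-32 to the payload: this is the fixed
    # per-packet overhead the checksum costs you.
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_packet(packet: bytes):
    # Classic link behavior: if the checksum fails,
    # the whole packet is thrown away (returns None).
    payload = packet[:-4]
    crc = int.from_bytes(packet[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None
```

One flipped bit anywhere in the frame and the receiver discards the entire payload, which is exactly the all-or-nothing behavior the FEC approach below tries to avoid.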
In a typical network, if the checksum is bad, you throw out the whole packet. To do better, you have to encode the data in some FEC-like way and try to extract useful data from bad packets instead of requesting a retransmission. That's about the only way to get any real performance enhancement here - you don't wait for a retransmit, you don't retransmit bad packets - and you can perhaps trade the FEC overhead for a larger packet size (more efficient) to compensate for the extra overhead. That should be a deterministic problem, and it would give you a modest performance advantage on congested networks and in fringe coverage areas. 10x is hype, possible only in selective scenarios - still, it's not nothing, but it's not a panacea. You still have to have a well-designed physical layer - as always.
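The trade described above - extra FEC overhead in exchange for recovering data without a retransmit - can be sketched in its simplest possible form: one XOR parity packet per group, which lets the receiver reconstruct any single lost or corrupt packet. Real schemes (Reed-Solomon, LDPC, fountain codes) are far stronger; the function names here are hypothetical and the single-erasure assumption is mine.

```python
from functools import reduce

def xor_parity(packets):
    # One parity packet per group of equal-length packets.
    # This is the FEC overhead you pay up front.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    # received: the group in order, with exactly one entry set to None
    # (the packet that failed its checksum). XORing the survivors with
    # the parity reconstructs it - no retransmission, no waiting.
    missing = received.index(None)
    survivors = [p for p in received if p is not None] + [parity]
    rebuilt = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)
    return received[:missing] + [rebuilt] + received[missing + 1:]
```

With, say, one parity packet per four data packets you pay ~25% overhead, which is why you'd want to grow the packet size to claw some efficiency back - and why the gain is real but nowhere near 10x except in narrow loss regimes.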