Posted on 10/23/2012 11:42:47 AM PDT by Red Badger
Tech Ping!...........
It’s not rocket science, it’s algebra.... ok it might as well be rocket science for me. :-)
Bwaahahahahaaaaa~ This multi-channel capability on HDTV can now be unleashed to cram even more channels into the same old same old.
Basically it’s a way for the computer to make a SWAG as to what the missing packet is, based on what comes before and after............
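To make the “SWAG” concrete, here’s a toy sketch in Python. Fair warning: this is my own simplification. The real scheme reportedly sends random linear combinations of packets over a finite field; plain XOR is just the simplest such combination.

# Toy illustration of coded-packet recovery (a hedged simplification:
# the real scheme uses random linear combinations over a finite field,
# but XOR is the simplest possible linear combination).

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"first packet    "
p2 = b"second packet   "
coded = xor_bytes(p1, p2)        # sender also transmits p1 XOR p2

# Suppose p2 is dropped in transit. The receiver "guesses" it exactly,
# with no retransmission, by re-solving the equation coded = p1 XOR p2:
recovered_p2 = xor_bytes(p1, coded)
assert recovered_p2 == p2

The point is that the “guess” is exact: each coded packet is an equation, and the algebra solves for the missing unknown.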
a new statistical time multiplex?
I call it SWAG Communications.........
High school kid:
“Who needs to study algebra? I’ll never use it in the real world”.
Algebra: Arabic word for ‘Math is hard!’............
Interesting. Ping for later...
Seems like a “smart” FTP approach. You’re still gonna lose a lot of the reliability that TCP/IP gives if you really need all the data bits, but at least it should be good for streaming uses.
And here I spent two years of high school insisting to my parents that “there IS no practical use for Algebra!”
(that was before I studied radio engineering and learned that FM Stereo is basically a quadratic equation...DOH!)
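For the curious, the stereo decode really is just simultaneous-equation algebra: the multiplex carries a sum signal (L+R) on the main channel and a difference signal (L-R) on the 38 kHz subcarrier, and solving the pair recovers each channel. A minimal Python sketch (sample values made up for illustration):

# FM stereo algebra: given sum = L + R and diff = L - R,
# solve the two equations for the left and right channels.
def decode_stereo(sum_sig: float, diff_sig: float) -> tuple[float, float]:
    left = (sum_sig + diff_sig) / 2
    right = (sum_sig - diff_sig) / 2
    return left, right

# Example: L = 0.75, R = 0.25 gives sum 1.0 and difference 0.5.
assert decode_stereo(1.0, 0.5) == (0.75, 0.25)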
I'd hate to have the transmission process "guessing" what the data should be when I'm downloading a program which requires every byte to be correct. While a minor blip in a youtube video is perfectly acceptable, many people transfer data that needs to be byte-for-byte identical to what's been sent.
There has also been a breakthrough on the FFT. This will also have a bandwidth-multiplying effect.
“The faster-than-fast Fourier transform”
http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
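The sFFT algorithm itself is involved, but the premise it exploits is easy to see: many real-world signals are “sparse,” i.e. dominated by a handful of frequencies. A quick numpy check of that premise (not the sFFT itself; the three-tone signal is just an illustration):

# Most of this signal's energy sits in just three frequency bins,
# which is the sparsity the faster-than-fast FFT takes advantage of.
import numpy as np

n = 1024
t = np.arange(n)
x = (np.sin(2 * np.pi * 50 * t / n)
     + 0.5 * np.sin(2 * np.pi * 120 * t / n)
     + 0.25 * np.sin(2 * np.pi * 300 * t / n))

spectrum = np.abs(np.fft.rfft(x))
big = int(np.sum(spectrum > 1.0))   # bins with meaningful energy
print(f"{big} of {len(spectrum)} bins carry nearly all the energy")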
Thankfully more of it stuck in my brain than my grades in school would indicate.
It looks like they are just sending error-correction coding across multiple packets. Those codes take up bandwidth themselves, so in a situation where you lose few or no packets, you will transmit your real data more slowly because of that overhead (assuming incompressible data). At a certain percentage of loss that becomes acceptable, because you gain more from not having to resend packets than you lose to overhead.
It would be really nice if the amount of error coding were dynamic, so you could reduce it to a minimum in a low-packet-loss situation.
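Here’s a toy Python sketch of that tradeoff (again my own simplification, not the MIT scheme: one XOR parity packet protects each block of k data packets, so the overhead is 1/(k+1) and any single loss per block is recoverable without a retransmission):

# Toy packet-level forward error correction: k data packets plus one
# XOR parity packet per block. Costs bandwidth when nothing is lost,
# saves a round trip when exactly one packet per block is lost.
from functools import reduce

def xor_all(packets: list) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def make_block(packets: list) -> list:
    return packets + [xor_all(packets)]   # overhead: 1 extra per k

def recover(block: list) -> list:
    missing = [i for i, p in enumerate(block) if p is None]
    if len(missing) > 1:
        raise ValueError("more than one loss per block: must retransmit")
    if missing:
        block[missing[0]] = xor_all([p for p in block if p is not None])
    return block[:-1]                     # strip the parity packet

data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # k = 4, 20% overhead
sent = make_block(data)
sent[2] = None                                # one packet lost in transit
assert recover(sent) == data

Letting k adapt to the measured loss rate would give exactly the dynamic redundancy wished for above.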
**** “And here I spent two years of high school insisting to my parents that ‘there IS no practical use for Algebra!’” ****
I did the same thing, and three months after graduation I was playing catch-up because my new occupation required trig and calc.
The military really did open my eyes in more ways than I could have imagined... God bless the United States Navy!
TT
A modification of the simple “checksum” has been used in the past to detect and fill in dropped bits. I believe one system was able to detect and correct up to 3 bits in a 1024-byte block, but my memory is fuzzy on this right now and I am not going to look it up. Essentially that, plus use of the 9th “parity” bit, has been an available method since the early days of computing.
This sounds like it may be some modification of that. It would be nice to hear a few more details: how much fault it is able to detect and correct, what overhead data is sent, what handshaking is needed, etc.
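For reference, the classic single-error-correcting scheme that the lone parity bit grew into is the Hamming(7,4) code: three check bits protect four data bits and pinpoint any single flipped bit. A minimal Python sketch of the standard textbook construction (not necessarily what the article’s scheme uses):

# Hamming(7,4): encode 4 data bits into 7, then locate and fix
# any single bit flipped in transit.
def hamming_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

def hamming_decode(c):
    # Each syndrome bit re-checks one parity group; together they
    # spell out the 1-based position of a single-bit error (0 = none).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1                    # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

code = hamming_encode([1, 0, 1, 1])
code[4] ^= 1                               # corrupt one bit in transit
assert hamming_decode(code) == [1, 0, 1, 1]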
Some old sayings never lose their truth, such as “necessity is the mother of invention.”
Bottlenecks anywhere do not need government solutions (subsidizing telecom “infrastructure” development?).
They need understanding, R&D, private capital, and open markets.
Without this latest technology, the answer to “bandwidth issues” could even have meant new and expanded infrastructure; but even then, the solution that science and engineering, private capital, and open markets came up with would not be any solution Obama and his ilk would have chosen.
Repeat after Reagan: government IS the problem.