Posted on 10/23/2012 11:42:47 AM PDT by Red Badger
A dash of algebra on wireless networks promises to boost bandwidth tenfold, without new infrastructure.
Academic researchers have improved wireless bandwidth by an order of magnitude, not by adding base stations, tapping more spectrum, or cranking up transmitter wattage, but by using algebra to banish the network-clogging task of resending dropped packets.
By providing new ways for mobile devices to solve for missing data, the technology not only eliminates this wasteful process but also can seamlessly weave data streams from Wi-Fi and LTE, a leap forward from other approaches that toggle back and forth. "Any IP network will benefit from this technology," says Sheau Ng, vice president for research and development at NBC Universal.
Several companies have licensed the underlying technology in recent months, but the details are subject to nondisclosure agreements, says Muriel Medard, a professor at MIT's Research Laboratory of Electronics and a leader in the effort. Elements of the technology were developed by researchers at MIT, the University of Porto in Portugal, Harvard University, Caltech, and Technical University of Munich. The licensing is being done through an MIT/Caltech startup called Code-On Technologies.
The underlying problem is huge and growing: on a typical day in Boston, for example, 3 percent of packets are dropped due to interference or congestion. Dropped packets cause delays in themselves, and then generate new back-and-forth network traffic to replace those packets, compounding the original problem.
The practical benefits of the technology, known as coded TCP, were seen on a recent test run on a New York-to-Boston Acela train, notorious for poor connectivity. Medard and her students were able to watch blip-free YouTube videos while some other passengers struggled to get online. "They were asking us 'How did you do that?' and we said 'We're engineers!' " she jokes.
More rigorous lab studies have shown large benefits. Testing the system on Wi-Fi networks at MIT, where 2 percent of packets are typically lost, Medard's group found that a normal bandwidth of one megabit per second was boosted to 16 megabits per second. In a circumstance where losses were 5 percent (common on a fast-moving train), the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.
Medard's work "is an important breakthrough that promises to significantly improve bandwidth and quality-of-experience for cellular data users experiencing poor signal coverage," says Dipankar "Ray" Raychaudhuri, director of the Winlab at Rutgers University (see "Pervasive Wireless"). He expects the technology to be widely deployed within two to three years.
To test the technology in the meantime, Medard's group set up proxy servers in the Amazon cloud. IP traffic was sent to Amazon, encoded there, and then decoded by an application on the phones. The benefit might be even greater if the technology were built directly into transmitters and routers, she says. It also could be used to merge traffic coming over Wi-Fi and cellular networks rather than forcing devices to switch between the two.
The technology transforms the way packets of data are sent. Instead of sending the packets themselves, it sends algebraic equations that describe a series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible, Medard says.
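Conceptually this resembles random linear network coding. The sketch below is a minimal illustration of that idea, not the researchers' actual coded TCP implementation: the generation size, packet size, and use of GF(2) (plain XOR) arithmetic are simplifying assumptions made here for brevity, and practical schemes typically work over a larger finite field. The sender transmits random combinations of the packets in a generation, and the receiver recovers the originals by Gaussian elimination, so it only needs enough independent combinations rather than any specific resent packet.

    import random

    GENERATION = 4   # number of packets combined per "generation" (assumed value)
    PACKET_LEN = 8   # bytes per packet (assumed value)

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(packets):
        """Endlessly yield (coefficients, coded payload) pairs: each payload is a
        random XOR combination of the original packets."""
        while True:
            coeffs = [random.randint(0, 1) for _ in packets]
            if not any(coeffs):
                continue                          # skip the useless all-zero combination
            payload = bytes(PACKET_LEN)
            for c, p in zip(coeffs, packets):
                if c:
                    payload = xor_bytes(payload, p)
            yield coeffs, payload

    def decode(coded):
        """Recover the original packets by Gauss-Jordan elimination over GF(2);
        raises ValueError if more independent combinations are still needed."""
        rows = [(list(c), bytearray(p)) for c, p in coded]
        for col in range(GENERATION):
            pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
            if pivot is None:
                raise ValueError("not enough independent combinations yet")
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for r in range(len(rows)):
                if r != col and rows[r][0][col]:
                    rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                               bytearray(xor_bytes(rows[r][1], rows[col][1])))
        return [bytes(rows[i][1]) for i in range(GENERATION)]

    if __name__ == "__main__":
        packets = [bytes([i]) * PACKET_LEN for i in range(1, GENERATION + 1)]
        stream, received = encode(packets), []
        while True:
            combo = next(stream)
            if random.random() < 0.05:            # simulate a 5 percent loss rate
                continue                          # a lost combination is simply ignored
            received.append(combo)
            try:
                recovered = decode(received)
                break                             # enough independent combinations arrived
            except ValueError:
                pass                              # keep listening; nothing is resent
        print(recovered == packets)               # True

Because any sufficiently large set of independent combinations will do, the receiver never has to report which packet was lost, which is what removes the back-and-forth traffic described above.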
Whether gains seen in the lab can be achieved in a full-scale deployment remains to be seen, but the fact that the improvements were so large suggests a breakthrough, says Ng, the NBC executive, who was not involved in the research. "In the lab, if you only find a small margin of improvement, the engineers will be skeptical. Looking at what they have done in the lab, it certainly is an order-of-magnitude improvement, and that certainly is very encouraging," Ng says.
If the technology works in large-scale deployments as expected, it could help forestall a spectrum crunch. Cisco Systems says that by 2016, mobile data traffic will grow 18-fold, and Bell Labs goes further, predicting growth by a factor of 25. The U.S. Federal Communications Commission has said spectrum could run out within a couple of years.
Medard stops short of saying the technology will prevent a spectrum crunch, but she notes that the current system is grossly inefficient. "Certainly there are very severe inefficiencies that should be remedied before you consider acquiring more resources," she says.
She says that when her group got online on the Acela, the YouTube video they watched was of college students playing a real-world version of the Angry Birds video game. "The quality of the video was good. The quality of the content, we haven't solved," Medard says.
Tech Ping!...........
It’s not rocket science, it’s algebra.... ok it might as well be rocket science for me. :-)
Bwaahahahahaaaaa~ This multi channel capability on HDTV can now be unleashed to cram even more channels into the same old same old.
Basically it’s a way for the computer to make a SWAG as to what the missing packet is, based on what comes before and after............
a new statistical time multiplex?
I call it SWAG Communications.........
High school kid:
“Who needs to study algebra? I’ll never use it in the real world”.
Algebra: Arabic word for ‘Math is hard!’............
Interesting. Ping for later...
Seems like a “smart” FTP approach. Still gonna lose a lot of the reliability that TCP/IP gives if you really need all the data bits, but at least it should be good for streaming uses.
And here I spent two years of high school insisting to my parents that “there IS no practical use for Algebra!”
(that was before I studied radio engineering and learned that FM Stereo is basically a quadratic equation...DOH!)
I'd hate to have the transmission process "guessing" what the data should be when I'm downloading a program which requires every byte to be correct. While a minor blip in a youtube video is perfectly acceptable, many people transfer data that needs to be byte-for-byte identical to what's been sent.
There also has been a breakthrough on FFT. This will also have a bandwidth multiplying effect.
“The faster-than-fast Fourier transform”
http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html
Thankfully more of it stuck in my brain than my grades in school would indicate.
It looks like they are just sending error correction coding across multiple packets. Those codes take up bandwidth by themselves, so in a situation where you lose few or no packets you will transmit your real data slower because of that overhead (assuming uncompressible data). At a certain percentage loss that will be acceptable because you gain more from not having to resend packets than you lose on overhead.
It would be really nice if the amount of error coding were dynamic, so you could reduce it to a minimum in a low-packet-loss situation.
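A small sketch of that "dynamic redundancy" idea, with illustrative numbers that are my own assumptions rather than anything from the article: the sender measures the recent loss rate and sets the coding overhead just above it, so almost nothing is wasted on a clean link but typical losses can still be absorbed without resends.

    def choose_overhead(recent_loss_rate, margin=0.01, max_overhead=0.25):
        """Pick a redundancy fraction just above the observed loss rate."""
        return min(recent_loss_rate + margin, max_overhead)

    def goodput(link_rate_mbps, overhead):
        """Useful data delivered once the redundancy share is subtracted."""
        return link_rate_mbps * (1 - overhead)

    for loss in (0.0, 0.02, 0.05):
        ov = choose_overhead(loss)
        print(f"loss {loss:.0%}: send {ov:.0%} redundancy, "
              f"goodput ~{goodput(10, ov):.1f} Mbps of 10")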
**** “And here I spent two years of high school insisting to my parents that there IS no practical use for Algebra! ****
I did the same thing, and three months after graduation I was playing catch-up because my new occupation required trig and calc.
The Military really did open my eyes in more ways than I could have imagined... God Bless the United States Navy!
TT
A modification of a simple “checksum” has been used in the past to detect and fill in for dropped bits - I believe one system was able to detect and correct up to 3 bits in a 1024-byte block, but my memory is fuzzy on this right now, and I am not going to look it up. Essentially that, and use of the 9th “parity” bit, has been an available method since the early days of computing.
This sounds like it may be some modification of that. It would be nice to hear a few more details: how many faults it is able to detect and correct, how much overhead data is sent, what handshaking is needed, etc.
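For reference, here is a toy sketch of the classical parity-based error correction described above, using a (7,4) Hamming code that repairs any single flipped bit without a resend. It illustrates that older idea only, not the MIT coded-TCP scheme, and the tiny block size is purely for illustration.

    def hamming74_encode(d):
        """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]    # standard bit positions 1..7

    def hamming74_decode(c):
        """Correct up to one flipped bit, then return the 4 data bits."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3        # points at the damaged position (0 = none)
        if syndrome:
            c = list(c)
            c[syndrome - 1] ^= 1               # flip the damaged bit back
        return [c[2], c[4], c[5], c[6]]

    if __name__ == "__main__":
        data = [1, 0, 1, 1]
        codeword = hamming74_encode(data)
        codeword[4] ^= 1                       # simulate one corrupted bit in transit
        print(hamming74_decode(codeword) == data)   # True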
some old sayings never lose their truth - such as “necessity is the mother of invention”
bottlenecks anywhere do not need government solutions (subsidizing telecom “infrastructure” development?)
they need understanding, R&D, private capital and open markets
without this latest technology the solutions to “bandwidth issues” could have even meant new and expanded infrastructure, but even then the solution that science and engineering, private capital and open markets came up with would not be any solution Obama and his ilk chose
repeat after Reagan - government IS the problem