Free Republic
Browse · Search
News/Activism
Topics · Post Article


A Bandwidth Breakthrough!
MIT Technology Review ^ | Tuesday, October 23, 2012 | David Talbot

Posted on 10/23/2012 11:42:47 AM PDT by Red Badger

A dash of algebra on wireless networks promises to boost bandwidth tenfold, without new infrastructure.

Academic researchers have improved wireless bandwidth by an order of magnitude—not by adding base stations, tapping more spectrum, or cranking up transmitter wattage, but by using algebra to banish the network-clogging task of resending dropped packets.

By providing new ways for mobile devices to solve for missing data, the technology not only eliminates this wasteful process but also can seamlessly weave data streams from Wi-Fi and LTE—a leap forward from other approaches that toggle back and forth. "Any IP network will benefit from this technology," says Sheau Ng, vice president for research and development at NBC Universal.

Several companies have licensed the underlying technology in recent months, but the details are subject to nondisclosure agreements, says Muriel Medard, a professor at MIT's Research Laboratory of Electronics and a leader in the effort. Elements of the technology were developed by researchers at MIT, the University of Porto in Portugal, Harvard University, Caltech, and Technical University of Munich. The licensing is being done through an MIT/Caltech startup called Code-On Technologies.

The underlying problem is huge and growing: on a typical day in Boston, for example, 3 percent of packets are dropped due to interference or congestion. Dropped packets cause delays in themselves, and then generate new back-and-forth network traffic to replace those packets, compounding the original problem.

The practical benefits of the technology, known as coded TCP, were seen on a recent test run on a New York-to-Boston Acela train, notorious for poor connectivity. Medard and students were able to watch blip-free YouTube videos while some other passengers struggled to get online. "They were asking us 'How did you do that?' and we said 'We're engineers!' " she jokes.

More rigorous lab studies have shown large benefits. Testing the system on Wi-Fi networks at MIT, where 2 percent of packets are typically lost, Medard's group found that a normal bandwidth of one megabit per second was boosted to 16 megabits per second. In a circumstance where losses were 5 percent—common on a fast-moving train—the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.
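Why does a 2 percent loss rate cost far more than 2 percent of throughput? Standard TCP cuts its sending rate sharply on every loss, so small loss rates have outsized effects. A back-of-the-envelope sketch using the well-known Mathis steady-state approximation, rate ≈ (MSS/RTT)·(C/√p); the segment size, round-trip time, and constant below are illustrative assumptions, not parameters from the MIT tests:

```python
# Steady-state TCP throughput under random loss p (the Mathis et al.
# approximation). Even modest loss rates collapse throughput, which is
# why avoiding retransmissions recovers so much bandwidth.
import math

def mathis_throughput_mbps(mss_bytes=1460, rtt_s=0.05, loss=0.02, c=1.22):
    """Approximate TCP goodput in Mbit/s: (MSS/RTT) * C / sqrt(loss)."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss)) / 1e6

for p in (0.02, 0.05):
    print(f"loss {p:.0%}: ~{mathis_throughput_mbps(loss=p):.1f} Mbit/s")
```

With these assumed numbers, 2 percent loss caps a connection around 2 Mbit/s and 5 percent around 1.3 Mbit/s regardless of raw link speed — the same order of magnitude as the pre-coding figures the researchers report.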

Medard's work "is an important breakthrough that promises to significantly improve bandwidth and quality-of-experience for cellular data users experiencing poor signal coverage," says Dipankar "Ray" Raychaudhuri, director of WINLAB at Rutgers University (see "Pervasive Wireless"). He expects the technology to be widely deployed within two to three years.

To test the technology in the meantime, Medard's group set up proxy servers in the Amazon cloud. IP traffic was sent to Amazon, encoded, and then decoded as an application on phones. The benefit might be even better if the technology were built directly into transmitters and routers, she says. It also could be used to merge traffic coming over Wi-Fi and cell phone networks rather than forcing devices to switch between the two frequencies.

The technology transforms the way packets of data are sent. Instead of sending packets, it sends algebraic equations that describe series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible, Medard says.
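The scheme that paragraph describes — sending linear combinations of packets and solving for losses — can be sketched in miniature. This toy uses GF(2) (plain XOR coefficients) rather than the larger finite fields real network-coding implementations use, and every name in it is hypothetical, not taken from the Code-On implementation:

```python
# Toy random linear network coding over GF(2): the sender transmits random
# XOR-combinations of k packets; any k linearly independent combinations
# let the receiver solve for all k originals, so a dropped transmission
# needs no resend. Real systems use GF(256) coefficients for efficiency.
import random

def encode(packets, n_coded, rng):
    """Emit n_coded (coefficients, payload) pairs, each a random XOR mix."""
    k, size = len(packets), len(packets[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randrange(2) for _ in range(k)]
        payload = bytearray(size)
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytearray(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2); returns None if rank < k."""
    rows = [([*c], bytearray(p)) for c, p in coded]
    used, pivot_of = set(), {}
    for col in range(k):
        pi = next((i for i in range(len(rows))
                   if i not in used and rows[i][0][col]), None)
        if pi is None:
            return None                     # not enough independent combos
        used.add(pi)
        pivot_of[col] = pi
        pc, pp = rows[pi]
        for j in range(len(rows)):
            if j != pi and rows[j][0][col]:
                c, p = rows[j]
                rows[j] = ([a ^ b for a, b in zip(c, pc)],
                           bytearray(a ^ b for a, b in zip(p, pp)))
    # after all columns are cleared, each pivot row holds one original packet
    return [bytes(rows[pivot_of[col]][1]) for col in range(k)]

packets = [bytes([65 + i]) * 8 for i in range(4)]   # four 8-byte packets
rng = random.Random(7)
recovered = None
while recovered is None:                    # retry if random combos were dependent
    coded = encode(packets, n_coded=6, rng=rng)
    recovered = decode(coded[1:], k=4)      # simulate one coded packet lost
assert recovered == packets
```

Note the key property: the receiver never asks which packet was lost; any sufficient set of independent combinations reconstructs the whole group.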

Whether gains seen in the lab can be achieved in a full-scale deployment remains to be seen, but the fact that the improvements were so large suggests a breakthrough, says Ng, the NBC executive, who was not involved in the research. "In the lab, if you only find a small margin of improvement, the engineers will be skeptical. Looking at what they have done in the lab, it certainly is order-of-magnitude improvement—and that certainly is very encouraging," Ng says.

If the technology works in large-scale deployments as expected, it could help forestall a spectrum crunch. Cisco Systems says that by 2016, mobile data traffic will grow 18-fold—and Bell Labs goes farther, predicting growth by a factor of 25. The U.S. Federal Communications Commission has said spectrum could run out within a couple of years.

Medard stops short of saying the technology will prevent a spectrum crunch, but she notes that the current system is grossly inefficient. "Certainly there are very severe inefficiencies that should be remedied before you consider acquiring more resources," she says.

She says that when her group got online on the Acela, the YouTube video they watched was of college students playing a real-world version of the Angry Birds video game. "The quality of the video was good. The quality of the content—we haven't solved," Medard says.


TOPICS: Business/Economy; Culture/Society; Technical; Testing; US: Massachusetts
KEYWORDS: bandwidth; communications; computers; electronics; internet

1 posted on 10/23/2012 11:42:47 AM PDT by Red Badger

To: ShadowAce

Tech Ping!...........


2 posted on 10/23/2012 11:43:10 AM PDT by Red Badger (Why yes, that was crude and uncalled for......That's why I said it..............)

To: Red Badger

It’s not rocket science, it’s algebra.... ok it might as well be rocket science for me. :-)


3 posted on 10/23/2012 11:44:43 AM PDT by Frapster (There you go again....)

To: Red Badger
Cable is dead ~

Bwaahahahahaaaaa~ This multi channel capability on HDTV can now be unleashed to cram even more channels into the same old same old.

4 posted on 10/23/2012 11:47:34 AM PDT by muawiyah

To: Frapster

Basically it’s a way for the computer to make a SWAG as to what the missing packet is, based on what comes before and after............


5 posted on 10/23/2012 11:48:48 AM PDT by Red Badger (Why yes, that was crude and uncalled for......That's why I said it..............)

To: Red Badger

a new statistical time multiplex?


6 posted on 10/23/2012 11:48:52 AM PDT by ffusco (The President will return this country to what it once was...An arctic wasteland covered in ice.)

To: ffusco

I call it SWAG Communications.........


7 posted on 10/23/2012 11:50:20 AM PDT by Red Badger (Why yes, that was crude and uncalled for......That's why I said it..............)

To: Red Badger

High school kid:

“Who needs to study algebra? I’ll never use it in the real world”.


8 posted on 10/23/2012 11:51:03 AM PDT by Signalman

To: Signalman

Algebra: Arabic word for ‘Math is hard!’............


9 posted on 10/23/2012 11:52:04 AM PDT by Red Badger (Why yes, that was crude and uncalled for......That's why I said it..............)

To: rdb3; Calvinist_Dark_Lord; Salo; JosephW; Only1choice____Freedom; amigatec; stylin_geek; ...

10 posted on 10/23/2012 11:53:04 AM PDT by ShadowAce (Linux -- The Ultimate Windows Service Pack)

To: Red Badger

Interesting. Ping for later...


11 posted on 10/23/2012 11:54:35 AM PDT by rlmorel (1793 French Jacobins and 2012 American Liberals have a lot in common.)

To: Red Badger

Seems like a “smart” FTP approach. Still gonna lose a lot of the reliability that TCP/IP gives if you really need all the data bits, but at least it should be good for streaming uses.


12 posted on 10/23/2012 12:00:45 PM PDT by trebb (Allies no longer trust us. Enemies no longer fear us.)

To: Red Badger

And here I spent two years of high school insisting to my parents that “there IS no practical use for Algebra!”

(that was before I studied radio engineering and learned that FM Stereo is basically a quadratic equation...DOH!)


13 posted on 10/23/2012 12:07:06 PM PDT by Buckeye McFrog

To: trebb
Seems like a “smart” FTP approach. Still gonna lose a lot of the reliability that TCP/IP gives if you really need all the data bits, but at least it should be good for streaming uses.

I'd hate to have the transmission process "guessing" what the data should be when I'm downloading a program which requires every byte to be correct. While a minor blip in a youtube video is perfectly acceptable, many people transfer data that needs to be byte-for-byte identical to what's been sent.

14 posted on 10/23/2012 12:09:07 PM PDT by Bob

To: Red Badger

There also has been a breakthrough on FFT. This will also have a bandwidth multiplying effect.

“The faster-than-fast Fourier transform”
http://web.mit.edu/newsoffice/2012/faster-fourier-transforms-0118.html


15 posted on 10/23/2012 12:13:19 PM PDT by Bobalu (It is not obama we are fighting, it is the media.)

To: Buckeye McFrog
I thought I would get away with not being any good at math by being a carpenter. DOH!

Thankfully more of it stuck in my brain than my grades in school would indicate.

16 posted on 10/23/2012 12:17:26 PM PDT by TigersEye (dishonorabledisclosure.com - OPSEC (give them support))

To: Red Badger
The technology transforms the way packets of data are sent. Instead of sending packets, it sends algebraic equations that describe series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible, Medard says.

It looks like they are just sending error correction coding across multiple packets. Those codes take up bandwidth by themselves, so in a situation where you lose few or no packets you will transmit your real data slower because of that overhead (assuming uncompressible data). At a certain percentage loss that will be acceptable because you gain more from not having to resend packets than you lose on overhead.

It would be really nice if the amount of error coding is dynamic so you can reduce it to a minimum in a low packet loss situation.

17 posted on 10/23/2012 12:18:29 PM PDT by KarlInOhio (Big Bird is a brood parasite: laid in our nest 43 years ago and we are still feeding him.)
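KarlInOhio's overhead-versus-retransmission trade-off can be put in rough numbers. The model below is a deliberately crude illustration — the adaptive redundancy rule, the ARQ stall penalty, and the 10 Mbit/s link are all invented for the sketch, not taken from the article:

```python
# Toy trade-off: fixed coding redundancy r wastes bandwidth all the time,
# while ARQ retransmission wastes bandwidth (plus round-trip stalls) only
# on loss. Coding loses slightly at p = 0 and wins as p grows.
def coded_goodput(link_mbps, p, r):
    """Useful rate with redundancy fraction r, assuming r >= p repairs all losses."""
    assert r >= p
    return link_mbps * (1 - r)

def arq_goodput(link_mbps, p, stall=3.0):
    """Crude ARQ model: each loss costs the retransmission plus roughly
    `stall` extra packet-times of waiting (an invented illustrative penalty)."""
    return link_mbps * (1 - p) / (1 + stall * p)

for p in (0.0, 0.02, 0.05):
    r = min(2 * p + 0.01, 0.5)      # simple adaptive rule: cover twice the loss
    print(f"p={p:.0%}: coded ~{coded_goodput(10, p, r):.2f}"
          f" vs ARQ ~{arq_goodput(10, p):.2f} Mbit/s")
```

At zero loss the coded link is slightly slower (overhead with no payoff) — exactly the caveat in the article's lab results. Where the crossover falls depends on the stall penalty, which in real TCP is dominated by congestion-window collapse rather than by the raw retransmitted bytes.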

To: Buckeye McFrog

**** “And here I spent two years of high school insisting to my parents that “there IS no practical use for Algebra!” ****

I did the same thing and 3 months after Graduation I was playing Catch-up because my new Occupation required Trig and Calc

The Military really did open my eyes in more ways than I could have imagined... God Bless the United States Navy!

TT


18 posted on 10/23/2012 12:21:33 PM PDT by TexasTransplant (Radical islam is islam. Moderate islam is the Trojan Horse.)

To: Red Badger

A modification of simple “checksum” has been used in the past to detect and fill in for dropped bits - I believe that one system was able to detect and correct up to 3 bits in a 1024-byte block, but my memory is fuzzy on this right now, and I am not going to look it up. Essentially that, and use of the 9th “parity” bit, have been available methods since the early days of computing.

This sounds like it may be some modification of that. It would be nice to hear a few more details: how many faults it can detect and correct, what overhead data is sent, what handshaking is needed, etc.


19 posted on 10/23/2012 12:22:46 PM PDT by AFPhys ((Praying for our troops, our citizens, that the Bible and Freedom become basis of the US law again))
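The parity idea AFPhys recalls still works at packet granularity: XOR a group of packets into one parity packet, and any single lost packet can be rebuilt from the parity plus the survivors (IP sequence numbers identify which one is missing). It is the same trick RAID-5 uses, as other posters note. A minimal sketch with made-up packet contents:

```python
# Single-parity erasure repair: parity = p0 ^ p1 ^ p2. XOR-ing the parity
# with the surviving packets cancels them out, leaving the lost packet.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt-0000", b"pkt-1111", b"pkt-2222"]
parity = reduce(xor_bytes, group)

survivors = [group[0], group[2]]            # packet 1 was dropped
rebuilt = reduce(xor_bytes, survivors, parity)
print(rebuilt)                              # b'pkt-1111'
```

One parity packet per group repairs exactly one loss; repairing several losses per group is what drives the move to the heavier codes (Reed-Solomon, convolutional, random linear) discussed elsewhere in this thread.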

To: Red Badger

some old sayings never lose their truth - such as “necessity is the mother of invention”

bottlenecks anywhere do not need government solutions (subsidizing telecom “infrastructure” development?)

they need understanding, R&D, private capital and open markets

without this latest technology the solutions to “bandwidth issues” could have even meant new and expanded infrastructure, but even then the solution that science and engineering, private capital and open markets came up with would not be any solution Obama and his ilk chose

repeat after Reagan - government IS the problem


20 posted on 10/23/2012 12:35:59 PM PDT by Wuli

To: Red Badger

Sounds like a RAID for a moving target.


21 posted on 10/23/2012 12:51:24 PM PDT by ImJustAnotherOkie (zerogottago)

To: Buckeye McFrog

Yeah, a little algebra following a space transformation... Leads the public to believe it’s as easy as figuring out which train got to New York first. Well, it’s not like the general public can solve that problem either. No point in confusing them with the details! LOL


22 posted on 10/23/2012 1:08:49 PM PDT by chaos_5

To: Red Badger
Several companies have licensed the underlying technology in recent months,

It's good to see claims of technological improvement actually make it to market, rather than promising market availability several years in the future.

23 posted on 10/23/2012 1:09:49 PM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)

To: Red Badger
In a circumstance where losses were 5 percent—common on a fast-moving train—the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.

Keep increasing the speed until you get packet loss.

24 posted on 10/23/2012 1:10:49 PM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)

To: Red Badger

Bookmark


25 posted on 10/23/2012 1:57:48 PM PDT by IronJack (=)

To: Red Badger

Bookmark


26 posted on 10/23/2012 2:01:31 PM PDT by IronJack (=)

To: Red Badger
Basically it’s a way for the computer to make a SWAG as to what the missing packet is, based on what comes before and after............

That's not all that useful. All it will do is improve youtube and the downloading of... porn....

.....OMG THIS IS THE MOST IMPORTANT INNOVATION IN THE HISTORY OF MAN!!!!!

27 posted on 10/23/2012 2:09:57 PM PDT by Lazamataz (The Pravda Press has gone from 'biased' straight on through to 'utterly bizarre'.)

To: Red Badger

Sounds like Forward Error Correction (FEC) code


28 posted on 10/23/2012 2:18:35 PM PDT by Bruce Kurtz

To: Red Badger

bkmk


29 posted on 10/23/2012 2:22:15 PM PDT by Sergio (An object at rest cannot be stopped! - The Evil Midnight Bomber What Bombs at Midnight)

To: KarlInOhio

When we send an Excel spreadsheet with pivot tables we often get new blanks and new lines of what appear to be the same sort of words or data. I have wondered if these are transmission errors.


30 posted on 10/23/2012 4:38:25 PM PDT by Sequoyah101

To: Red Badger

My casual take on this, after reviewing some of their published work, is that it's founded on erasure codes. This is funny because I think there are already products for the wired TCP universe that work the same way, plus storage implementations too.


31 posted on 10/23/2012 4:47:27 PM PDT by no-s (when democracy is displaced by tyranny, the armed citizen still gets to vote)

To: no-s

I was looking for details but the article was just generalities for public consumption. Big fan of various codes including RS, Golay, and convolutional, having implemented/used them in some of my homebrew projects.


32 posted on 10/24/2012 1:09:16 AM PDT by SpaceBar

To: SpaceBar
...Big fan of various codes including RS, Golay, and convolutional, having implemented/used them in some of my homebrew projects.

heheh there just went 45 minutes of guilty pleasure see Modeling Network Coded TCP.

33 posted on 10/24/2012 2:51:49 AM PDT by no-s (when democracy is displaced by tyranny, the armed citizen still gets to vote)

To: Red Badger

So much in computing boils down to an encoding in some way or another.


34 posted on 10/24/2012 4:47:16 AM PDT by 2 Kool 2 Be 4-Gotten

To: SpaceBar

I think this quote cuts through the hype:

” In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.”

Loss-free scenarios are not rare. They are entirely predictable, and they just require higher signal strength.

Error correction codes are already part of the data link layer.

What they imply (order of magnitude increase) is a violation of Nyquist’s Law.

What they are doing will not solve plain old congestion problems.

I suspect that they are using normal packet transmission overhead in a different way that includes error correction elements. I could see some level of improvement possible - mainly by managing flow-control differently.

Here is the fundamental theory issue: Typical Digital Networks perform consistently and then “fall off a brick wall”. If you can stave off the brick wall with a bit of low-overhead error-correction, you might be able to measure a significant increase in performance (10x even) at the signal ‘brick wall’.

This may increase cell coverage for a specific link scenario (fringe) a little bit, but will not increase designed capacity or any other meaningful measure of a well-designed network - per their quote at the beginning of this post.


35 posted on 10/24/2012 5:07:56 AM PDT by RFEngineer

To: RFEngineer

I was thinking something like a hybrid FEC scheme with an ACK/NACK fallback, or even variable code robustness similar to the latest versions of PACTOR which estimate the channel S/N and adjust accordingly.


36 posted on 10/24/2012 5:21:52 AM PDT by SpaceBar

To: Red Badger

Sounds more like RAID for packets. Ergo there will be a reduction in useable data per packet to provide the redundancy for ONE lost packet in the sequence. And if more than one packet per sequence is lost does the whole sequence need to be retrans-ed? Likely, and that will increase network congestion proportional to what would supposedly be “saved”.

And they aren’t increasing bandwidth, rather the circuit is just being used closer to its errorless rate. They had to use a really crummy network to show its value otherwise it only adds to network congestion. How? By sending less useable information per packet, which in turn produces more packets required per information transfer which = more traffic. I wonder if any of those other riders considered them to be bandwidth hogs taking bandwidth needed for them to access their network? For a YouTube of college students playing Angry Birds?

Anyone who uses a solution which requires reducing information per packet without reducing the packet size really needs a very strong (read: security) justification to do so as it degrades network performance for all users.

I hope this so called solution requires FCC licensing. It is certainly not an elegant or egalitarian solution as it increases bandwidth usage per information transfer.

Why not integrate some of the better network accelerator technologies into wireless devices instead? At least they don’t reduce the information bytes per packet.

So what’s next, S-ing around with MTU sizes? /s


37 posted on 10/24/2012 5:34:56 AM PDT by Justa

To: SpaceBar

The data link already does some of this stuff. At the packet level (without changing the network out) as I’m sure you know, you have some fixed overhead. Each packet has a checksum to determine if the data has been transmitted correctly.

In a typical network, if the checksum is bad, you throw out the whole packet. This has to encode the data in some FEC-like way and try to extract useful data from bad packets instead of requesting a retransmission. That’s about the only way to get any sort of performance enhancement here - you don’t wait for a retransmit, you don’t retransmit bad packets - and you perhaps trade the FEC overhead for a larger packet size (more efficient) to compensate for the extra overhead. That should be a deterministic problem - and would give you a slight performance advantage for congested networks and fringe coverage areas. 10x is hype only possible in selective scenarios - still, it’s not nothing but it’s not a panacea. You still have to have a well designed physical layer - as always.


38 posted on 10/24/2012 5:43:54 AM PDT by RFEngineer

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.


FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson