Posted on 03/15/2004 9:42:22 AM PST by Future Snake Eater
Researchers in North Carolina State University's Department of Computer Science have developed a new data transfer protocol for the Internet that makes today's high-speed Digital Subscriber Line (DSL) connections seem lethargic.
The protocol is named BIC-TCP, which stands for Binary Increase Congestion Transmission Control Protocol. In a recent comparative study run by the Stanford Linear Accelerator Center (SLAC), BIC consistently topped the rankings in a set of experiments that determined its stability, scalability and fairness in comparison with other protocols. The study tested six other protocols developed by researchers from schools around the world, including the California Institute of Technology and University College London.
Dr. Injong Rhee, associate professor of computer science, said BIC can achieve speeds roughly 6,000 times that of DSL and 150,000 times that of current modems. While this might translate into music downloads in the blink of an eye, the true value of such a super-powered protocol is a real eye-opener.
Rhee and NC State colleagues Dr. Khaled Harfoush, assistant professor of computer science, and Lisong Xu, postdoctoral student, presented a paper on their findings in Hong Kong at Infocom 2004, the 23rd meeting of the Institute of Electrical and Electronics Engineers Communications Society, on Thursday, March 11.
Many national and international computing labs are now involved in large-scale scientific studies of nuclear and high-energy physics, astronomy, geology and meteorology. Typically, Rhee said, "Data are collected at a remote location and need to be shipped to labs where scientists can perform analyses and create high-performance visualizations of the data." Visualizations might include satellite images or climate models used in weather predictions. Receiving the data and sharing the results can lead to massive congestion of current networks, even on the newest wide-area high-speed networks such as ESNet (Energy Sciences Network), which was created by the U.S. Department of Energy specifically for these types of scientific collaborations.
The problem, Rhee said, is the inherent limitations of regular TCP. "TCP was originally designed in the 1980s, when Internet speeds were much slower and bandwidths much smaller," he said. "Now we are trying to apply it to networks that have several orders of magnitude more available bandwidth. Essentially, we're using an eyedropper to fill a water main." BIC, on the other hand, would open the floodgate.
Along with postdoctoral student Xu, Rhee has been working on developing BIC for the past year, although Rhee said he has been researching network congestion solutions for at least a decade. The key to BIC's speed is that it uses a binary search approach, a fairly common way to search databases, that allows for rapid detection of maximum network capacities with minimal loss of information. "What takes TCP two hours to determine, BIC can do in less than one second," Rhee said. The greatest challenge for the new protocol, he added, was to fill the pipe fast without starving out other protocols. "It's a tough balance," he said.
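The binary-search idea Rhee describes can be sketched in a few lines. This is a simplified illustration with made-up numbers (a capacity of 1000 packets and a probe ceiling of 2000), not the actual BIC-TCP implementation; real BIC infers capacity from packet loss rather than reading it directly:

```python
# Toy model: find the link capacity by binary search instead of TCP's
# one-packet-at-a-time linear increase.
capacity = 1000          # hypothetical network capacity, in packets
w_min, w_max = 10, 2000  # last known-good window and a probe ceiling
steps = 0
while w_max - w_min > 1:
    w = (w_min + w_max) // 2
    if w <= capacity:    # no loss: the window fits, so raise the floor
        w_min = w
    else:                # loss: the window is too big, so lower the ceiling
        w_max = w
    steps += 1

print(steps, w_min)      # 11 steps to converge on 1000, vs ~990 linear increases
```

The convergence in log2(range) probes rather than a linear walk is what lets a BIC-style search "fill the pipe" in seconds instead of hours.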
By allowing the rapid transfer of increasingly large packets of information over long distances, the new protocol could boost the efficacy of cutting-edge applications ranging from telemedicine and real-time environmental monitoring to business operations and multi-user gaming. At NC State, researchers could more readily visualize, monitor and control real-time simulations and experiments conducted at remote computing clusters. BIC might even help avoid a national disaster: The recent blackout that affected large areas of the eastern United States and Canada underscored the need to spread data-rich backup systems across hundreds of thousands of miles.
With network speeds doubling roughly annually, Rhee said the performances demonstrated by the new protocol could become commonly available in the next few years, setting a new standard for full utilization of the Internet.
That's about what I get at home too.
Then I bought cable access for a business. What a ripoff! They charge twice as much for the minimum business service for much less bandwidth. All they do is clamp it and try to extort more money. I told the bastard cable company to rip it out, and then I got DSL. It gets about 25% of my cable speed at home at about the same cost.
Some of the local daycare centers have their rooms wired for cameras so parents can check in all day and see what's going on with their kids.
It won't be long before we can glue a camera on a baby at birth and record his entire life as it unfolds.
The RTT approach was invented to solve communications between two LAN segments bridged by a slower facility. The RTT algorithm tweaks a self-pacing aspect of TCP to keep the data moving at the optimal pace for the end-to-end connection.
Mike Karels also made some related contributions to the bridged LAN scenario. The maximum transmission unit (MTU) is typically 1500 bytes on an Ethernet. The old 56 kb lines of the Arpanet had smaller MTU values. That required fragmentation of the IP packets en route and reassembly at the destination. The frag/reassemble logic kills throughput. Mike approached the problem by doing a binary backoff of the MTU until throughput improved. The improved throughput was an implicit indication that the MTU had been adjusted downward enough to stop the IP-level fragging.
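That backoff can be sketched like this. It's a hypothetical illustration (the function names, the 576-byte path, and the 296-byte floor are my inventions), not Karels' actual code; in reality the sender can't read the path MTU directly and must infer it from the throughput jump:

```python
def fragments(send_size, path_mtu):
    """IP-level fragmentation occurs when a packet exceeds the path MTU."""
    return send_size > path_mtu

def backoff_mtu(path_mtu, start=1500, floor=296):
    """Halve the send size until fragmentation (and its throughput cost) stops."""
    size = start
    while fragments(size, path_mtu) and size > floor:
        size //= 2  # binary backoff; improved throughput signals fragging stopped
    return size

print(backoff_mtu(576))   # Ethernet sender over a 576-byte path: 1500 -> 750 -> 375
```

Halving overshoots the true path MTU (375 instead of 576 here), but it finds a fragmentation-free size in a couple of probes, which is the point of the implicit-feedback approach.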
This new protocol seems to be aimed at solving the inverse of the problem that RTT addressed: two LAN segments connected by a much faster facility. I question whether this really belongs in the guts of TCP, or is more appropriately addressed in router equipment at the edge of a LAN. The large Ethernet packet (65 kbytes) work that Van Jacobson did on supercomputers was a real boon to the world of people moving massive amounts of data.
As far as I know, the typical low-grade webcam service you mentioned should be possible with any DSL or cable service; many or most of the variables hindering that scenario are outside your (and your provider's) control.
If both cable and DSL are available in your area, I'd think competition would cause one or both of them to be running a free installation promotion. But, watch out for any minimum length of service clause so that, if you don't get satisfactory results, you can drop their service without a retroactive or pro-rated installation fee.
As for cable, Best Buy et al. often sell free-after-rebate cable modems when you sign up for service (nearly free for existing customers). Ask your cable provider whether the specific model will be compatible and how much that'll save you ($5 or $10/mo).
The only drawback is that Windows 2028 will have things slowed down to a crawl, but Bill Gates IV promises that the upcoming Windows 3031 will have significant improvements in speed, taking full advantage of the 512-bit processors of the day. Also, Windows 3031 is expected to have security enhancements as well.
I don't know where you are or who your provider is, but around here Verizon doesn't guarantee squat for business DSL, either in terms of uptime or in terms of bandwidth - and I'll bet money that they're typical in that respect among other DSL/cable providers. If yours does, I would say it is by far the exception and not the rule. Mostly, if you need SLAs, you can pretty much forget business-class DSL or cable, and skip right to fractional T1...
I was on a fractional T1 a few years ago at a small office of a very large company (some contractual requirement). It cost over $700/month! Eventually the price went down to $249 or $429/mo or something. And what a pain it was when it went down - you never got the same tech twice and none of them had ever seen it before! It's replaced now with a single cable connection.
This is not to say that there's no place for business-class DSL or cable in the world - I'm sure there are lots of people it's perfectly suited for - but there are trade-offs that come with that low price. If you just need it for interoffice email and internet access for your users, then it's probably just fine. If you're running your own e-commerce business through your own dedicated servers, such that every second of downtime costs you money for lost sales, or something like that...you may want to skip the cable connection ;)
That's about it there.
They all belong to the CWA and the company was a monopoly (the worst of both worlds). BellSouth was our vendor and we were one of their biggest customers. It was tough even for us to find good techs, and we had to constantly hassle management to get access to the techs that knew what they were doing. And when a new technology was involved, even that didn't work. We usually had to tell them how to do their jobs.
My DSL is spec'ed at 6 Mbit/sec downstream, and I'm observing 5 Mbit/sec downstream in tests.
What are you getting from your lame cable modem? :-)
Besides, I'm guessing you can't do that for 40 bucks a month. :-P
What DSL provider are you using??? BellSouth only offers 1.5 Mbps. My cable modem is 3 Mbps down and 256 Kbps up. I routinely get 2.8 to 2.9 Mbps download speed. I'm a happy camper, especially when I download beasties like MiKTeX (230 MB file).
One misleading claim that DSL supporters make is that DSL doesn't share bandwidth. That's true only for the line from the DSLAM to your house. Upstream of that point, the bandwidth is shared.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.