If memory serves, that was due to the Morris worm.
Personally, I think a lot of net.congestion could be eliminated if we simply lopped China off the 'net. All they do is use it to attack our defense and infrastructure networks anyway. Screw 'em.
Please... P2P isn’t half the problem that rampant spammers are.
How do sites like YouTube stream video? Are they opening up multiple TCP ports to the client like a P2P connection?
Put a price on the users' total bandwidth usage (and give them an easy way to track it) and suddenly people won't find it so desirable to download and host hundreds of movies and songs.
TCP is not unfair. Every user of TCP/IP has the same tools available to them. Every ISP has access to the same equipment and the same bandwidth-restriction and queuing techniques.
ISPs use different types and levels of queuing to shape the congestion environment and baseline traffic load in the way that best fits their desired customer base.
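To make the queuing point concrete, here's a toy per-user round-robin scheduler in Python. It's a sketch of the general idea only, not any vendor's implementation, and the user names and packet sizes are invented:

    from collections import deque

    # Each user gets their own queue; the scheduler services the queues
    # round-robin, so a user opening many flows can't starve the others.
    queues = {
        "user_a": deque([1500, 1500, 1500, 1500]),  # invented packet sizes (bytes)
        "user_b": deque([1500]),
    }

    def service_round(queues):
        # Forward one packet from each non-empty per-user queue.
        return [(user, q.popleft()) for user, q in queues.items() if q]

    while any(queues.values()):
        print(service_round(queues))

With per-user queues, user_a's four queued packets delay user_b's single packet by at most one round, no matter how many flows user_a opens.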
Other technical elements that have not been discussed:
1) An end user is limited by their connection speed. User A above, if connected via ISDN, is still limited to ~120 kbit/s of bandwidth REGARDLESS of how many streams are opened.
2) ISPs can, and sometimes do, throttle bandwidth to match contracts. As part of these contracts a “burst” of traffic is allowed. However, these bursts are regulated: when a burst becomes sustained, the tokens that allow for bursting run out and the user is throttled back to their original speed (see the token-bucket sketch after this list). This is the MOST COMMON reason P2P connections are throttled. It is also the largest cause of complaints.
3) P2P networks can use protocols other than TCP: IPSec tunnels, UDP, and others are all possible. So “fixing” TCP will not necessarily fix the problem, as P2P software will simply migrate to other protocols but still take up the same, OR MORE, bandwidth.
4) Most congestion control techniques (throttling) do not even kick in until a congestion event (router interface overrun) occurs. This so rarely happens in the core of a carrier's network that most cores do not even implement congestion control. The most likely place for congestion to occur is at the edge of the network - that "last mile" connection device.
5) Dropping packets is a natural consequence of a burst of traffic hitting an interface faster than it can be forwarded; even deciding where to send each packet and then forwarding it out the correct interface adds a small lag on the router. To help absorb bursts, carriers implement buffers that hold a small pool of packets before the router has to start dropping them. However, in a congestion event, routers have the ability to selectively drop packets out of their buffers (sketched after this list as well). This helps by allowing the router to drop packets of less importance and thus maintain service either to more important traffic or across a wider set of users.
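To illustrate the burst mechanics in item 2, here is a minimal token-bucket sketch in Python. The contract rate and burst size are made-up figures, and real ISP shapers are far more involved than this:

    import time

    class TokenBucket:
        # Tokens refill at the contracted rate up to a burst ceiling; a
        # sustained burst drains the bucket and traffic is throttled back
        # to the base rate, as described in item 2.
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0          # refill rate, bytes/second
            self.capacity = burst_bytes         # maximum burst allowance
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill for the elapsed time, capped at the burst ceiling.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                     # conforms: forward it
            return False                        # bucket empty: throttle

    # e.g. a 1.5 Mbit/s contract with a 256 KB burst allowance (invented)
    bucket = TokenBucket(rate_bps=1_500_000, burst_bytes=256_000)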
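And a toy version of the selective drop in item 5: when the buffer is full, the router evicts the least important packet rather than the newest arrival. The priorities and traffic labels are invented for illustration:

    BUFFER_SLOTS = 4

    def enqueue(buffer, packet):
        # packet = (priority, label); higher priority = more important.
        if len(buffer) < BUFFER_SLOTS:
            buffer.append(packet)
            return None                           # room left, nothing dropped
        lowest = min(range(len(buffer)), key=lambda i: buffer[i][0])
        if packet[0] > buffer[lowest][0]:
            dropped = buffer[lowest]              # evict a less important packet
            buffer[lowest] = packet
            return dropped
        return packet                             # arrival is least important

    buf = []
    for pkt in [(1, "p2p"), (1, "p2p"), (5, "voip"), (1, "p2p"), (5, "voip")]:
        print("dropped:", enqueue(buf, pkt))

Here the fifth packet arrives to a full buffer, and a p2p packet is evicted to make room for the higher-priority voip packet.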
At issue is ... what is important or fair? If you just use random packet drop in the example the article presents, then the P2P user is many times more likely to suffer a packet drop on one of their streams and thus throttle their traffic; the quick calculation below makes the point. If you drop the P2P traffic first, or more frequently, then other users, who are not using P2P, will continue their traffic and their packets will receive "full service". Once their packets get out of the way, the P2P packets will resume.
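A quick back-of-the-envelope check of that claim: if each stream independently loses a packet with probability p in some interval, a user running n parallel streams sees at least one drop with probability 1 - (1 - p)^n. The numbers below are invented:

    p = 0.01                        # assumed per-stream drop probability

    for n in (1, 10, 100):          # single-stream user vs. heavy P2P user
        hit = 1 - (1 - p) ** n
        print(f"{n:3d} streams -> P(at least one drop) = {hit:.1%}")

At p = 1%, the 100-stream user is hit about 63% of the time versus 1% for the single-stream user, though each drop only halves one of those hundred streams, which is why multi-stream users still come out ahead under plain AIMD.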
This is a POLICY / COMMERCIAL decision. There are many other means of either dropping packets or restricting bandwidth. Focusing on one protocol to address what is a POLICY issue, not a technical one, is a fool's errand.
Interesting read, but this takes control out of TCP and puts it at the application level, which strikes me as begging for abuse. Who thinks the same people who found the obvious way around AIMD aren't going to find the obvious way around this?