By Suzanne Ross
What do cows and the Internet have in common?
In the days when hunter-gatherers first decided to stay put and become farmers, they often stuck together in villages for protection from marauding gangsters known variously as Huns, Vikings, Celts, or “them.” As a result, cow grazing areas were limited to common lands that could be easily defended.
The only problem with this scheme (now called the Tragedy of the Commons) was that everyone thought their own cows had more right to graze the common lands than their neighbors’ cows, and no one wanted to think about how cow congestion led to poorer grazing and thinner cows for everyone. If one person added another cow, it would increase the load on the land and reduce grazing quality overall, but it would still leave that individual with a larger herd: the gain was private while the cost was shared by everyone.
The same is true for the Internet. More and more users want more and more bandwidth, they want it now, they want it fast, and they won’t tolerate a few seconds delay. Their streaming video is more important than the next guy’s executable file download. The resulting congestion is annoying to everyone, yet no one is willing to give up their piece of turf. How can we get everyone to share?
One of the solutions is to add more pipes, more servers, and more technology. But, as anyone who has ever retired or tried to build a freeway knows, tasks and cars expand to fill whatever time or space you have. The same is true of bandwidth. Technology users will probably always expand to fill the available bandwidth, and we’ll still have the same congestion.
Peter Key, Richard Black and other researchers from Microsoft Research in Cambridge, UK, have implemented a model to reduce network congestion that follows an economic model developed to avoid the tragedy of the commons. It’s called “congestion pricing” or “resource pricing.”
This model, which several researchers have considered applying to networking since the early nineties, is needed now more than ever. The proposal is to add feedback signals to network communications, allowing a weighted fair sharing of resources. Congestion pricing is often used by city planners (for cars instead of cows) to prevent freeway gridlock: under this model, car owners have to pay to drive at peak hours. The hope is to motivate drivers to take to the roads at non-peak hours, or to ride-share or take public transportation.
The network model doesn’t demand a monetary return for peak access, but could give greater weight to a pre-determined set of applications or users that would change depending on need. For instance, if streaming videos accessed by person A had preference over Web surfing by person B, it would receive a proportionally greater bandwidth share, but if person A wasn’t accessing a video, then person B’s surfing might get a greater share of the bandwidth over A’s file download. Or, on an individual machine, Windows updates could be given a lower value than email traffic or Web surfing. That way, the update would automatically run when the user was not engaged in those activities; say, at two in the morning.
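The weighted sharing described above can be sketched in a few lines of Python. The application names, weights, and link capacity here are invented for illustration, and a real system would operate through network feedback rather than a central function, but the arithmetic of proportional shares is the same: each active application receives bandwidth in proportion to its weight, and an idle application’s share is redistributed automatically.

```python
def allocate(capacity, weights, active):
    """Split capacity among active applications in proportion to their weights."""
    total = sum(weights[app] for app in active)
    return {app: capacity * weights[app] / total for app in active}

# Hypothetical priorities: video weighted above surfing, updates lowest.
weights = {"video": 4, "surfing": 2, "update": 1}

# Video and surfing compete for a 6 Mbps link: video gets twice the share.
allocate(6.0, weights, {"video", "surfing"})   # {"video": 4.0, "surfing": 2.0}

# Only the update is running (say, at two in the morning): it gets everything.
allocate(6.0, weights, {"update"})             # {"update": 6.0}
```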
The Problem with TCP

Black explains the problem with the current network protocol. “TCP is a very good protocol in many respects, but it reacts to network behavior mostly by driving the network to the point where the network is congested, and then it’s able to observe that congestion and that causes it to reduce the loads it puts on the network.”
The trouble is that by the time the congestion is observable, the quality of service has already suffered: queues build up, queuing delays grow, and packets are lost, which means lost data. For some applications this isn’t a big problem. For others, such as streaming video, it means that the picture will break up or freeze.
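A toy model of the behavior Black describes is TCP’s additive-increase/multiplicative-decrease cycle: the sender keeps raising its rate until the link overflows and a loss is observed, and only then backs off. This sketch is nothing like real TCP’s mechanics, and the capacity and units are arbitrary, but it shows why congestion must occur before the protocol reacts.

```python
def aimd(capacity, ticks):
    """Simulate a sender that probes for bandwidth until loss forces it back."""
    rate, history = 1, []
    for _ in range(ticks):
        if rate > capacity:
            rate = max(1, rate // 2)   # loss observed: multiplicative decrease
        else:
            rate += 1                  # no loss yet: additive increase
        history.append(rate)
    return history

aimd(10, 12)   # the rate climbs past capacity to 11, halves to 5, climbs again
```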
“The basic idea is that if there is a resource that becomes overloaded then you can think of that as its ‘price’ going up,” explains Key, “because if a lot of people share a resource then they have an effect on each other, and they’ve then got an incentive to cooperate, so that you can share that resource out fairly. The resources can be elements of a computer system, it could be memory, it could be battery power, but we’re particularly looking at networks and bandwidth. How can you get applications that run across the network to share bandwidth? By telling them about the current price, and then letting them react accordingly.”
“One of the advantages of a system like this is that once you have sort of bought into the economic model it becomes easier to understand how to do relative shares of the network so you can say that this particular activity should have approximately twice as much weight or access to the system,” says Black.
“It turns out you can get the network to behave like a smart market, in a rather simple way,” says Key.
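Key’s “smart market” can be sketched using the proportional-fairness idea from the networking-economics literature, simplified: each application chooses a rate equal to its willingness-to-pay divided by the current price, and the price rises when the link is overloaded and falls when it is idle, until total demand matches capacity. The application names, payments, and capacity below are invented, and this is an illustration of the principle rather than the researchers’ actual mechanism.

```python
def market_rates(pay, capacity):
    """Iterate the price until total demand sum(w / price) matches capacity."""
    price = 1.0
    for _ in range(100):
        demand = sum(w / price for w in pay.values())
        price *= demand / capacity   # raise price if overloaded, lower if idle
    return {app: w / price for app, w in pay.items()}

# The conference is valued at twice email, so it ends up with twice the rate,
# illustrating Black's point about relative shares.
market_rates({"conference": 2.0, "email": 1.0}, capacity=9.0)
# conference ~ 6.0, email ~ 3.0
```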
Making It Work

Though Key originally envisioned this system as a cure for Internet delays, Black and his colleagues Paul Barham and Neil Stratford started using it on a smaller scale.
“You don’t have to change the whole network all at once. You can look at a very restricted environment where it’s much clearer who is in a position to set the rules of the game. If you were to try to change the whole Internet into a congestion pricing model you’d have the difficulty of who would police the economic worth of the various users on the Internet. Because, who is to say one user is worth more or less than another? Even if you did come up with such a scheme, how would you police it given the number of different companies, different network service providers, and so on. It’s a difficult problem,” adds Black.
However, in a constrained environment, such as a home or a private company, users could benefit from having an underlying economic model to control bandwidth access.
In the company, the powers that be could decide that a video conference that included the company’s top brass would get precedence over a video conference between single users, or even network email. They might be willing to trade some company-wide ability to access certain applications in favor of a smooth conference with major customers.
In the home, the parents could decide that their applications and use of the Internet had a greater value than their children’s online game play. Or the parent might be the gamer. Derek McAuley, a former Cambridge researcher, became interested in the model Key and Black are developing while playing Counter-Strike. His opponent took out his character before he could get in his own hit, because the children’s nanny was surfing the Web at the same time, driving up latency: the time it takes data to travel across the network. With a congestion pricing system in place, he could have assigned more value to game play than to Web surfing.
Black has created a prototype system to test Key’s economic model. “We’re trying to get to the engineering level — it’s fine to think about the math and write the paper, but then you come to build it and it’s a bit harder. Real nuts and bolts engineering often is quite difficult,” says Black.
Recently Key and Dinan Gunawardena, a software developer at MSR, have applied the congestion pricing theory to network aware applications. Prototype versions of popular applications such as Windows Media and Windows Messenger adapt dynamically to the varying congestion price. The audio and video quality changes transparently when network conditions vary, giving a much better user experience.
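In spirit, the adaptive behavior Key and Gunawardena built into those prototypes might resemble the following sketch, where an application picks the best encoding its budget can afford at the current congestion price. The quality tiers, budget, and prices are invented; the real applications react to network feedback rather than a lookup table.

```python
# Hypothetical encoding tiers and their bandwidth costs in Mbps.
QUALITIES = [("low", 0.3), ("medium", 1.0), ("high", 3.0)]

def pick_quality(budget, price):
    """Pick the best tier the budget affords at the current congestion price."""
    affordable = budget / price        # bandwidth this application can "buy"
    best = QUALITIES[0][0]             # fall back to the lowest tier
    for name, mbps in QUALITIES:
        if mbps <= affordable:
            best = name
    return best

pick_quality(budget=2.0, price=0.5)   # network quiet: affords 4 Mbps -> "high"
pick_quality(budget=2.0, price=4.0)   # congested: affords 0.5 Mbps -> "low"
```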
The commons is an ancient organizing principle that worked well when the world was small, but it broke down as populations grew. As traffic on the Internet has grown, it too has become prone to the tragedy of the commons. Black and Key’s research suggests a way to allocate bandwidth fairly in the face of fluctuating user demands.