Update 8/12/2008 – Robb Topolski has graciously apologized for the personal attacks.
I have to admit that I was surprised at the speed at which I’ve been attacked over my latest FCC filing, “Guide to protocol agnostic network management schemes“. Within hours of my filing, Robb Topolski, who now works for Public Knowledge, wrote an attack blog, “George Ou: Protocol Agnostic doesn’t mean Protocol Agnostic”, which, along with a series of ad hominem attacks, basically accuses me of being hypocritical. I won’t respond to the senseless personal attacks, but I am going to rebut the core accusation that favoring protocol protection in the context of “protocol agnostic” network management schemes is somehow hypocritical.
Before we debate the meaning and context of the word “agnostic”, we must first ask ourselves what underlying problem we are trying to solve. The answer is that we want a broadband network management system that distributes bandwidth more equitably among users in the same service/price tier, and a broadband network that ensures the best possible experience for every protocol and its associated applications.
Network Architect Richard Bennett put it best when he described network management as a two-phase process. In his blog he recently wrote:
An ideal residential Internet access system needs to be managed in two different but equally important phases:
1) Allocate bandwidth fairly among competing accounts; and then
2) Prioritize streams within each account according to application requirements.
Phase 1 keeps you from being swamped by your neighbor, and keeps you from swamping him, and Phase 2 prevents your VoIP session from being swamped by your BitTorrent session.
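Bennett’s two phases can be sketched in a few lines of code. Everything below is a toy illustration of my own, not any vendor’s actual scheduler: the link capacity, account names, flow demands, and the priority ordering are all invented for the example.

```python
# A minimal sketch of two-phase management. All numbers and names are
# invented for illustration; real schedulers operate per-packet, not per-flow.
LINK_CAPACITY_KBPS = 6000

# Each account holds flows tagged with an application type and a demand in Kbps.
accounts = {
    "alice": [("voip", 87), ("p2p", 5000)],
    "bob": [("web", 400)],
}

# Lower number = higher delivery priority within an account (Phase 2).
PRIORITY = {"voip": 0, "gaming": 0, "web": 1, "p2p": 2}

def allocate(accounts, capacity):
    # Phase 1: divide link capacity fairly among competing accounts.
    per_account = capacity / len(accounts)
    allocations = {}
    for name, flows in accounts.items():
        # Phase 2: serve real-time flows first; bulk transfers take the leftover.
        remaining = per_account
        shares = {}
        for app, demand in sorted(flows, key=lambda f: PRIORITY.get(f[0], 2)):
            granted = min(demand, remaining)
            shares[app] = granted
            remaining -= granted
        allocations[name] = shares
    return allocations
```

In this sketch, Alice’s VoIP flow gets its full 87 Kbps before her P2P client takes the rest of her fair share, and nothing Alice does can cut into Bob’s share of the link.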
So to address the first problem, we need a way to measure bandwidth consumption between these “competing accounts”. The common solution on the marketplace today computes bandwidth consumption implicitly and indirectly from the protocols passing through the network. This method of inferring bandwidth consumption from the protocol was initially chosen because it is computationally simpler and cheaper than tracking usage statistics for tens of thousands of simultaneous application flows in real time. While this approach is mostly accurate, it is not completely accurate.
As protocol identification becomes increasingly difficult because of protocol obfuscation techniques, and as the cost of computational power declines, we’re starting to see a shift toward network management systems that don’t use protocol analysis to determine which users consume the most bandwidth. These new systems compute bandwidth distribution by tracking the bandwidth consumption of every flow on the network without looking at which protocols are passing through the system, and these are the systems we refer to as “Protocol Agnostic”. But being “agnostic” applies strictly to this first phase of network management.
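Protocol-agnostic accounting of this first phase is conceptually simple, and a toy version fits in a few lines. The packet records below are invented; the point is that usage is tallied per subscriber purely from byte counts, with no protocol identification anywhere in the loop.

```python
from collections import defaultdict

# Invented packet records: (subscriber, size_in_bytes). A real system would
# read these counters from the access network, not from a Python list.
packets = [
    ("user_a", 1500), ("user_b", 60), ("user_a", 1500),
    ("user_c", 1500), ("user_a", 1500), ("user_b", 60),
]

usage = defaultdict(int)
for subscriber, size in packets:
    usage[subscriber] += size  # byte counts only; payloads are never inspected

# During congestion, the heaviest consumers would be deprioritized first.
heaviest = max(usage, key=usage.get)
```

Because the accounting never looks at what the bytes contain, obfuscated or encrypted protocols are counted exactly like everything else.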
The second phase of network management is to ensure that every protocol works well. Since protocols and their applications have unique network requirements, this phase requires knowing which protocols are passing through the system so that each can be given the necessary protection from the other protocols sharing the same network. It would be foolish to apply the concept of agnosticism to this phase of management, because doing so would make the phase worthless.
The reality is that a network can’t simply be a dumb pipe if we want every protocol/application to work well, and no one has ever claimed that “protocol agnostic” should mean a dumb pipe. P2P applications not only grab nearly every bit of bandwidth available regardless of how much capacity there is; they can also cause significant problems for real-time applications like VoIP or online gaming even when the P2P traffic uses a small fraction of the network. I have published experimental data showing that even low-throughput, mild usage of P2P applications can cause significant spikes in network latency and jitter (latency and jitter explained here), which can severely degrade the quality of real-time applications. Even without any knowledge of how networks operate, anyone who has ever tried to play an online FPS (First Person Shooter) game or use a VoIP phone while a P2P application runs on the same broadband connection understands how miserable the experience is. In fact, this is a common source of contention between roommates and family members.
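The latency/jitter effect is easy to illustrate with a toy trace. The ping samples below are invented, not from my published measurements, and jitter here is computed simply as the mean absolute difference between consecutive latency readings, which is one common working definition.

```python
# Invented round-trip-time samples in milliseconds.
samples_idle = [20, 21, 20, 22, 21]    # link otherwise idle
samples_p2p = [20, 95, 30, 140, 25]    # P2P running on the same connection

def jitter(samples):
    """Mean absolute difference between consecutive latency samples (ms)."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)
```

Even when the averages are not far apart, the wild swing between consecutive samples is what makes VoIP choppy and games unplayable, because real-time applications cannot buffer their way around it.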
This is precisely why Comcast needs to protect VoIP traffic from service providers like Vonage. Even though Comcast has nothing to gain, because its own telephony service runs outside of the broadband network on a different frequency, these additional efforts to ensure proper VoIP operation actually help a competing telephony service. In theory, Comcast could just leave the issue alone and let Vonage and other competing VoIP services suffer while its own telephony service remains immune. In practice, many of Comcast’s customers will blame Comcast if they have any problems once the new system is in place, and Comcast needs to do everything it can to keep its customers happy.
Robb Topolski claims on my behalf that these problems are somehow caused by Comcast when he says that “George has shown that Comcast’s proposed Protocol Agnostic scheme has unacceptable side effects”. Aside from the fact that I’ve said and shown nothing of the sort, it’s ridiculous to paint these noble and reasonable efforts as something sinister. The Free Press has used similar fear-mongering tactics to discredit these new network management efforts by Comcast, questioning why Vonage needs these special deals with Comcast in order for its traffic to go through and insinuating that this is some sort of protection racket. The fact of the matter is that Comcast not only doesn’t block Vonage; it’s trying to prevent Vonage from being blocked by self-inflicted damage whenever the user runs aggressive protocols like P2P simultaneously with VoIP.
But these advocates of strict Net Neutrality and the “dumb” Internet continue to shout that providing protocol protection for VoIP or any other real-time application is tantamount to “discrimination” against P2P protocols. They’ll often ask, “why can’t you give P2P the same protection?”, but this question is silly when you look at the reality of the situation. P2P applications don’t need this kind of protection; on a 3 Mbps broadband connection, they’re thriving at throughputs 30 times higher than the VoIP application’s.
If we take an objective look at the situation, the P2P application is already consuming over 96% of the network capacity, so would it be wise to mandate a dumb network that allowed the P2P application to consume 98% of the network and completely break the VoIP application? If the network could only fairly allocate 2 Mbps of bandwidth to its broadband customers during peak usage hours, would it be so wrong to expect the P2P application to drop down to 95% of the broadband link while the VoIP application remains at the same minuscule 87 Kbps bitrate? I guess according to Robb Topolski, this is somehow unfair because the P2P application is forced to slow down while the VoIP application doesn’t.
If this is the kind of petty game we’re playing, we could simply have a network management scheme that allocates “equal” amounts of bandwidth to P2P and VoIP whenever both protocols are active, which means both protocols would operate at 87 Kbps, and we could call that “equality”. Obviously no one would want that kind of system, and there’s no need to force this level of equality, so the P2P application is allowed to consume whatever bandwidth is left over from the VoIP application. But taking the leftovers is beneficial to the P2P application, because the leftovers are at least an order of magnitude more than what the VoIP application gets.
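The arithmetic behind these percentages is easy to check. The figures come from the paragraphs above (an 87 Kbps VoIP flow on 3 Mbps and 2 Mbps links); the helper function is just my own illustration of the leftover calculation.

```python
VOIP_KBPS = 87  # VoIP bitrate cited in the text

def p2p_leftover(link_kbps, voip_kbps=VOIP_KBPS):
    """Bandwidth left for P2P once VoIP takes its fixed bitrate,
    plus that leftover's share of the total link."""
    leftover = link_kbps - voip_kbps
    return leftover, leftover / link_kbps

kbps_3m, share_3m = p2p_leftover(3000)  # full 3 Mbps link
kbps_2m, share_2m = p2p_leftover(2000)  # congested 2 Mbps allocation
```

On the 3 Mbps link, P2P keeps roughly 97% of the capacity, more than 30 times the VoIP bitrate; even on the congested 2 Mbps allocation, P2P still keeps about 95%.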
Robb Topolski then pulls out the DPI (Deep Packet Inspection) bogeyman and claims that any sort of protocol-aware system would violate user privacy by snooping beyond the packet headers and looking into the user data. That’s nonsense on multiple levels, because a system wouldn’t necessarily need to look at user data to determine the protocol being used, and even if it did, there’s nothing wrong with an automated system that parses through user data. In fact, millions of emails a day are parsed by anti-spam systems to prevent your email inbox from being inundated with spam, but we don’t consider that an invasion of privacy because of the context and scope of the system.
But a smart network management system could simply look at the protocol headers, without parsing any of the user data, to determine what protocol is being used, because it doesn’t have to fear protocol masquerading techniques. This is because the protocol-agnostic part of the system already tracks usage statistics, which allows the network operator to implement a priority budgeting system where everyone gets a certain amount of priority delivery bandwidth. So, for example, if everyone had a daily budget of 12 hours of priority VoIP service, 12 hours of priority online gaming, or 3 hours of priority video conferencing, they would be free to squander that budget in 30 minutes with a burst of high-bandwidth P2P usage masquerading as VoIP traffic. After the daily allowance is used up, all traffic would still go through but would be treated as normal-priority data.
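The budget mechanics can be sketched in a few lines. This is my own illustration, not any operator’s implementation: the class name is invented, and I track the allowance in bytes for simplicity where the text describes hours of priority service; the principle is the same either way.

```python
# Hypothetical priority-budget tracker. Traffic is classified as "priority"
# while the subscriber's daily allowance lasts; after that it still flows,
# just at normal best-effort priority. Names and units are illustrative.

class PriorityBudget:
    def __init__(self, daily_allowance_bytes):
        self.allowance = daily_allowance_bytes

    def classify(self, size_bytes):
        """Spend budget if available and return the delivery class."""
        if self.allowance >= size_bytes:
            self.allowance -= size_bytes
            return "priority"
        return "best_effort"  # nothing is blocked, only deprioritized
```

The key property is that masquerading buys nothing in the long run: a P2P burst dressed up as VoIP simply drains the same budget faster, so there is no need to inspect payloads to police it.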
Chief BT researcher Bob Briscoe, who is leading an effort to reform TCP congestion control at the IETF, explained this concept of saving up for priority quite nicely, and network architect Richard Bennett has also spoken positively about this type of system. Priority budget schemes would actually help solve the fundamental fairness problem where some users consume hundreds of times more volume than the majority of users even though everyone pays the same price. The network would essentially give low-usage real-time-application customers a responsive delivery service while high-usage P2P customers would get all the volume they want, and everyone would get what they want.
When we focus on the ultimate endgame of fairness, it isn’t hypocritical to advocate equitable distribution of bandwidth through protocol agnostic means while simultaneously advocating protocol-specific prioritization schemes that ensure the best possible experience for all people and all protocols. These two goals are completely separate and completely honorable.