Category Archives: Comcast

Fairness is the ultimate end-game in network management

Update 8/12/2008 – Robb Topolski has graciously apologized for the personal attacks.

I have to admit that I was surprised at the speed at which I've been attacked over my latest FCC filing "Guide to protocol agnostic network management schemes".  Within hours of my filing, Robb Topolski, who now works for Public Knowledge, wrote an attack blog "George Ou: Protocol Agnostic doesn't mean Protocol Agnostic" which, along with a series of ad hominem attacks, basically accuses me of being hypocritical.  I won't respond to the senseless personal attacks, but I am going to rebut the core accusation that favoring protocol protection in the context of "Protocol Agnostic" network management schemes is somehow hypocritical.

Before we debate the meaning and context of the word "agnostic", we must first ask ourselves what underlying problem we are trying to solve.  The answer is that we want a broadband network management system that distributes bandwidth more equitably between users in the same service/price tier, and we want a broadband network that ensures the best possible experience for every protocol and its associated applications.

Network Architect Richard Bennett put it best when he described network management as a two-phase process.  In his blog he recently wrote:

An ideal residential Internet access system needs to be managed in two different but equally important phases:

1) Allocate bandwidth fairly among competing accounts; and then

2) Prioritize streams within each account according to application requirements.

Phase 1 keeps you from being swamped by your neighbor, and keeps you from swamping him, and Phase 2 prevents your VoIP session from being swamped by your BitTorrent session.

So to address the first problem, we need a way to measure bandwidth consumption between these "competing accounts".  The common solution on the marketplace today computes bandwidth consumption implicitly and indirectly from the protocols passing through the network.  This method of looking at the protocol to determine bandwidth consumption was initially chosen because it is computationally simpler and cheaper than trying to track usage statistics for tens of thousands of simultaneous application flows in real time.  While this solution is mostly accurate, it isn't completely accurate.

As the identification of protocols becomes increasingly difficult because of protocol obfuscation techniques, and as the cost of computational power declines, we're starting to see a shift towards network management systems that don't use protocol analysis to determine which users consume the most bandwidth.  These new systems compute bandwidth distribution by tracking the bandwidth consumption of every flow on the network without looking at what protocols are passing through the system; these are the systems we refer to as "Protocol Agnostic".  But being "agnostic" strictly applies to this first phase of network management.
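To make the distinction concrete, here is a minimal sketch of what phase-one "protocol agnostic" accounting might look like.  It is purely illustrative (the class and names are my own, not any vendor's product): the point is that only packet sizes and account identities are examined, never ports, payloads, or protocol signatures.

from collections import defaultdict

class AgnosticUsageTracker:
    """Per-account bandwidth accounting that never inspects protocols."""

    def __init__(self):
        self.bytes_by_account = defaultdict(int)

    def record_packet(self, account_id, packet_length):
        # Only the packet's size and its owning account are used.
        self.bytes_by_account[account_id] += packet_length

    def heaviest_accounts(self, top_n=10):
        # During congestion, these are the accounts whose traffic would be
        # de-prioritized first under a fair-allocation policy.
        return sorted(self.bytes_by_account.items(),
                      key=lambda item: item[1], reverse=True)[:top_n]

tracker = AgnosticUsageTracker()
tracker.record_packet("subscriber-17", 1500)
tracker.record_packet("subscriber-17", 1500)
tracker.record_packet("subscriber-42", 200)
print(tracker.heaviest_accounts())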

The second phase of network management is to ensure that every protocol works well.  Since protocols and their applications have unique network requirements, this phase requires identifying the protocols in use so that each can be given the necessary protections from other protocols sharing the same network.  It would be foolish to apply the concept of agnosticism to this phase of management because doing so would make the phase worthless.

The reality is that a network can't simply be a dumb pipe if we want every protocol/application to work well, and no one has ever claimed that "protocol agnostic" should mean a dumb pipe.  P2P applications not only have the ability to grab nearly every bit of available bandwidth regardless of how much capacity there is; they can also cause significant problems for real-time applications like VoIP or online gaming even when the P2P traffic uses only a small fraction of the network.  I have published experimental data showing that even low-throughput, mild usage of P2P applications can cause significant spikes in network latency and jitter (latency and jitter explained here), which can severely degrade the quality of real-time applications.  Even without any knowledge of how networks operate, anyone who has ever tried to play an online FPS (First Person Shooter) game or use a VoIP phone while a P2P application runs on the same broadband connection understands how miserable the experience is.  In fact, this is a common source of contention between roommates and family members.

This is precisely why Comcast needs to protect the VoIP traffic of service providers like Vonage.  Even though Comcast has nothing to gain because its own telephony service runs outside of the broadband network on a different frequency, these additional efforts to ensure proper VoIP operation are actually helping a competing telephony service.  In theory, Comcast could just leave this issue alone and let Vonage and other competing VoIP services suffer while its own telephony service remains immune.  In practice, many of Comcast's customers will blame Comcast if they have any problems once the new system is in place, and Comcast needs to do everything it can to keep its customers happy.

Robb Topolski claims on my behalf that these problems are somehow caused by Comcast when he says that "George has shown that Comcast's proposed Protocol Agnostic scheme has unacceptable side effects".  Aside from the fact that I've said or shown nothing of the sort, it's ridiculous to paint these noble and reasonable efforts as something sinister.  The Free Press has used similar fear-mongering tactics to discredit these new network management efforts by Comcast, questioning why Vonage needs these special deals with Comcast in order for its traffic to go through and insinuating that this is some sort of protection racket.  The fact of the matter is that Comcast not only doesn't block Vonage; it's trying to prevent Vonage from being blocked by the self-inflicted damage that occurs whenever a user runs aggressive protocols like P2P simultaneously with VoIP.

But these advocates of strict Net Neutrality and the "dumb" Internet continue to shout that providing protocol protection for VoIP or any other real-time application is tantamount to "discrimination" against P2P protocols.  They'll often ask "why can't you give P2P the same protection?", but the question is silly when you look at the reality of the situation.  P2P applications don't need this kind of protection, and they're already thriving, running roughly 30 times faster than the VoIP application on a 3 Mbps broadband connection.

If we take an objective look at the situation, the P2P application is already consuming over 96% of the network capacity, so would it be wise to mandate a dumb network that allowed the P2P application to consume 98% of the network and completely break the VoIP application?  If the network could only fairly allocate 2 Mbps of bandwidth to its broadband customers during peak usage hours, would it be so wrong to expect the P2P application to drop down to 95% of the broadband link while the VoIP application remains at the same minuscule 87 Kbps bitrate?  I guess according to Robb Topolski, this is somehow unfair because the P2P application is forced to slow down while the VoIP application doesn't slow down.
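For anyone who wants to check the arithmetic, the figures above fall out of a few lines of Python.  The 87 Kbps VoIP bitrate and the 2-3 Mbps link speeds are the same illustrative numbers used in the paragraph above:

VOIP_KBPS = 87  # the VoIP bitrate from the example above

def p2p_share(link_kbps):
    # Fraction of the link left for P2P once VoIP takes its fixed bitrate.
    return (link_kbps - VOIP_KBPS) / link_kbps

print(f"3 Mbps link:       P2P gets {p2p_share(3000):.1%}")          # 97.1%, i.e. "over 96%"
print(f"2 Mbps fair share: P2P gets {p2p_share(2000):.1%}")          # 95.7%, i.e. roughly 95%
print(f"P2P vs VoIP speed: ~{(3000 - VOIP_KBPS) / VOIP_KBPS:.0f}x")  # ~33x, i.e. roughly 30 times faster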

If this is the kind of petty game we're playing, we could simply have a network management scheme that allocates "equal" amounts of bandwidth to P2P and VoIP whenever both protocols are active, which means both protocols would operate at 87 Kbps, and we could call that "equality".  Obviously no one would want that kind of system, and there's no need to force this level of equality, so the P2P application is allowed to consume whatever bandwidth is left over from the VoIP application.  But taking what's left over is beneficial to the P2P application because the leftovers are at least an order of magnitude more than what the VoIP application gets.

Robb Topolski then pulls out the DPI (Deep Packet Inspection) bogeyman and claims that any sort of protocol-aware system would violate user privacy by snooping beyond the packet headers and looking into the user data.  That's nonsense on multiple levels, because a system wouldn't necessarily need to look at user data to determine the protocol being used, and even if it did, there's nothing wrong with an automated system that parses through user data.  In fact, millions of emails a day are parsed by anti-spam systems to prevent your inbox from being inundated with spam, but we don't consider that an invasion of privacy because of the context and scope of the system.

But a smart network management system could simply look at the protocol headers, without parsing any of the user data, to determine what protocol is being used, because it doesn't have to fear protocol masquerading techniques.  This is because the protocol-agnostic part of the system already tracks usage statistics, which allows the network operator to implement a priority budgeting system where everyone gets a certain amount of priority delivery bandwidth.  So, for example, if everyone had a daily budget of 12 hours of priority VoIP service, or 12 hours of priority online gaming, or 3 hours of priority video conferencing, they would be free to squander that budget in 30 minutes with a burst of high-bandwidth P2P usage masquerading as VoIP traffic.  After the daily allowance is used up, all traffic would still go through but would be treated as normal-priority data.
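Conceptually, the bookkeeping for such a priority budget is simple.  The sketch below is only a hypothetical illustration of the idea (the budget size, refill interval, and names are mine, not any ISP's actual system): traffic marked as priority draws down a daily allowance, and once the allowance runs out, packets still flow but without expedited treatment.

import time

class PriorityBudget:
    """Hypothetical daily allowance of priority-delivery bytes for one subscriber."""

    def __init__(self, daily_budget_bytes):
        self.daily_budget_bytes = daily_budget_bytes
        self.remaining = daily_budget_bytes
        self.window_start = time.time()

    def _maybe_refill(self):
        # Refill the allowance once a day.
        if time.time() - self.window_start >= 86400:
            self.remaining = self.daily_budget_bytes
            self.window_start = time.time()

    def classify(self, packet_length, wants_priority):
        """Return which queue the packet should go into."""
        self._maybe_refill()
        if wants_priority and self.remaining >= packet_length:
            self.remaining -= packet_length
            return "priority"
        return "best-effort"  # never dropped, just not expedited

# Roughly 12 hours of 87 Kbps VoIP works out to about 470 MB of priority budget.
budget = PriorityBudget(daily_budget_bytes=int(87000 / 8 * 12 * 3600))
print(budget.classify(200, wants_priority=True))    # 'priority'
print(budget.classify(1500, wants_priority=False))  # 'best-effort'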

Chief BT researcher Bob Briscoe, who is leading an effort to reform TCP congestion control at the IETF, actually explained this concept of saving up for priority quite nicely, and network architect Richard Bennett has also spoken positively about this type of system.  Priority budget schemes would actually help solve the fundamental fairness problem where some users consume hundreds of times more volume than the majority of users even though everyone pays the same price.  The network would essentially give the low-usage, real-time application customers a responsive delivery service while the high-usage P2P customers get all the volume they want, so everyone gets what they want.

When we focus on the ultimate endgame of fairness, it isn’t hypocritical to advocate equitable distribution of bandwidth through protocol agnostic means while simultaneously advocating protocol-specific prioritization schemes that ensure the best possible experience for all people and all protocols.  These two goals are completely separate and completely honorable.

Innovation 08 panel on Net Neutrality at Santa Clara University

Thursday morning I sat on a panel at the Innovation 08 Net Neutrality event at Santa Clara University.  This came right on the heels of my Brussels trip, where I gave a presentation on Net Neutrality to some members of the European Parliament and various industry folks.  The jet lag wasn't so bad, but the bigger problem for me was missing my 6-year-old daughter's first big singing solo at her school, which had to be at the same time as my panel.  I spent a lot of time training her, so it was certainly a big disappointment for me.  The jet lag certainly did have a lot to do with why this blog wasn't posted earlier yesterday.


Richard Whitt, George Ou, Ron Yokubaitis, Richard Bennett, Jay Monahan
Photo credit: Cade Metz

The story didn't get much coverage (yet), but here we have some coverage from Cade Metz.  I guess it's a slight improvement because Metz at least didn't try to falsely insinuate that I was against transparency for Comcast this time.  His piece was gushing with love for Google, but at least he quoted me accurately and got my point across.

Ou is adamant that – whether it (Net Neutrality rules) forbids ISPs from prioritizing apps and services or it forbids them from selling prioritization – neutrality regulation would actually prevent things like video and voice from flourishing on our worldwide IP network. "If you forbid prioritization, you forbid converged networks," he said. "And if you forbid converged networks, you get a bunch of tiny networks that are designed to do very specific things. Why not merge them into one fat pipe and let the consumer pick and choose what they want to run?"

This is such an important point because latency/jitter is a killer for real-time applications like VoIP, gaming, and IPTV.  As I showed in my research, even mild usage of BitTorrent on a single computer in a home can ruin the experience for everyone in that home.  If prioritization technology is banned in broadband, then we'll simply end up with less functional broadband and a statically separated IPTV service.  With a converged IP broadband network that delivers IPTV and Internet access, the consumer gets a massive converged pipe and the power of control at their fingertips: turn off the IPTV and all of that bandwidth is freed up for Internet service.  If the Government prohibits intelligent networks that guarantee quality of service, ISPs will be forced to separate their TV and Internet pipes with a fixed boundary, and the consumer is left with a permanent slow lane rather than a slow lane plus a fast lane that they can dynamically allocate to their TV or their Internet.

Metz also couldn't resist taking a personal jab at me and Richard Bennett:

“The panel also included George Ou and Richard Bennett, two networking-obsessed pals who have vehemently defended Comcast’s right to throttle peer-to-peer traffic, and Whitt received more than a few harsh words from Ou.”

The disparaging tone was uncalled for, and when you put it side by side with the treatment he gave Google, the bias is blatantly obvious and journalistically unprofessional.

Metz swooned over Google's Whitt:

"The question was raised by the top level management at Google: What do we think about network neutrality – about this notion that broadband companies have the power to pick winners and losers on the internet?" Whitt explained. "One position was that in the environment [proposed by Whitacre], Google would do quite well."

"This side of the argument said: We were pretty well known on the internet. We were pretty popular. We had some funds available. We could essentially buy prioritization that would ensure we would be the search engine used by everybody. We would come out fine – a non-neutral world would be a good world for us."

But then that Google idealism kicked in.

Idealism, huh?  Too bad Metz left out the part where Google's Whitt admitted that they were against network intelligence and enhanced QoS, even though he refused to answer a simple yes/no question on whether he and Google support the actual Net Neutrality legislation.  Make no mistake: Google's position is based on crippling its video competitors in the IPTV market, which is critical to adding competition to the Cable and Satellite TV market, a market that is far more expensive and relevant to everyday Americans.  It has nothing to do with Google idealism.

Ron Yokubaitis went off on the typical spiel about how DPI (Deep Packet Inspection) is all about violating user privacy, reading consumers' email to inject ads, serving as a tool of the big bad RIAA/MPAA for figuring out what song or movie you're downloading, and how this is similar to communist China.  Yet DPI has nothing to do with reading email, since that is a function of spam filters, and it has nothing to do with violating people's privacy.  DPI is merely a mechanism that analyzes which protocol someone is using, and it really isn't a method used by the MPAA and RIAA.

I also pointed out the irony that it's companies like Google that want to inject ads and data-mine your Gmail account, and the irony that we bash the telecoms when it's companies like Google that censor information from the Chinese people.  I'm also reminded that people were imprisoned in China simply for speaking out because search engine providers like Yahoo turned them in to the government.  Perhaps this wasn't a great tangent for me to go off on, but I get irritated by attacks that are wrongly focused on ISPs when they're often more appropriate for search engine companies.

Richard Bennett also posted something about this event and wrote:

What really happened is this: Google has invested hundreds of millions of dollars in server farms to put its content, chiefly YouTube, in an Internet fast lane, and it fought for the first incarnation in order to protect its high-priority access to your ISP. Now that we're in a second phase that's all about empowering P2P, Google has been much less vocal, because it can only lose in this fight. Good P2P takes Google out of the video game, as there's no way for them to insert ads in P2P streams. So this is why they want P2P to suck. The new tools will simply try to convince consumers to stick with Google and leave that raunchy old P2P to the pirates.

I'm not so sure Google really feels threatened by P2P, since P2P cannot deliver a good on-demand streaming experience beyond 300 Kbps or whatever the common broadband upstream speed is.  That's the problem with out-of-order delivery from a bunch of peers that may or may not be there: unless a Torrent has several times more seeders than downloaders, you simply can't do high-bandwidth, in-order delivery of video, and the normal ratio is several times more downloaders than seeders.  This is why you're not seeing YouTube take a dive in popularity and why every instant-play site uses the client-server CDN delivery model.

The main reason P2P is so popular is that there is so much "free" (read: pirated) content available.  The actual usability and quality are poor compared to commercial video-on-demand services.  Not only do you get lower quality and lower bitrates in the 1 to 1.5 Mbps range, you have to wait hours for the download to finish before you can start watching, and it hogs your upstream bandwidth in the process.  Legal for-pay services such as Netflix all use the client-server CDN (Content Distribution Network, caching technology) delivery model because it offers an instant-play experience and much higher video quality at 4 Mbps.  Other services like Microsoft's Xbox Live Marketplace use client-server CDN to deliver roughly 6.9 Mbps.

While it may be possible to get 6.9 Mbps from a P2P client, it's rare that a single Torrent will be healthy enough to hit that speed, and it certainly won't arrive in order, making it impossible to view while you download.

More deceptions from Free Press about Comcast “blocking”

The whole Comcast issue is being kicked around in the press in recent days because the Max Planck Institute released a study showing the rates of TCP resets happening throughout the world.  But the issue is being mischaracterized as the "blocking" of BitTorrent and portrayed as a free speech issue when it is nothing of the sort.

Richard Bennett explained why this shouldn't be considered blocking, and Andrew Orlowski wrote a pretty good editorial raising the concern that this trivializes real free speech violations.  One thing in Orlowski's editorial that really caught my eye was the continual misrepresentation of the facts by people like Ben Scott of the Free Press.

Ben Scott said: "I would disagree with your characterization of RST packets. This is in fact blocking by definition. I think your analogy is inapt. It would be the equivalent of traffic stops sending me back home to start driving to work all over again."

Ben Scott explained to Orlowski that his portrayal of the facts is backed up by "experts", so I have to wonder who these experts are.  Jon Peha?  Peha demonstrated at Stanford and in his FCC filing that he doesn't even understand the multi-stream principle or the auto-restart feature in P2P applications like BitTorrent.  Even when every TCP stream (often up to 40 streams) of a P2P download is being continuously reset and a full blockage is in place, you never start all over again.  Jon Peha and Ben Scott would have you believe that BitTorrent forces the user to manually resume, like a phone call that's been disrupted, and that BitTorrent has to restart a file transfer from scratch.  While I and others like Richard Bennett have pointed this out to the FCC and directly to Jon Peha, these deceptions continue to be propagated.

Then there are other "experts" like software tester Rob Topolski and some unnamed professors that Free Press lawyer Marvin Ammori likes to cite.  They claim that BitTorrent can "only" use 4 upstream TCP flows per Torrent, which on its face is laughable because 4 is a soft limit that can easily be exceeded.  The bigger problem with this claim is that uploads aren't the issue; uploads continue indefinitely, and they aren't the endgame.  What should be looked at is the number of completed file transfers, which is the real endgame, and the number of upload streams an individual P2P seeder can offer is irrelevant.  The uploaders merely contribute to a pool of available P2P servers, so what matters to the completion of file transfers is the time it takes to download a file.

If Jon Peha, Rob Topolski, and Ben Scott are actually correct in their description of Comcast's Sandvine system, then a good case can be made that what Comcast is doing is not reasonable network management.  But the arguments they make are easily proven wrong, and BitTorrent Corporation or any P2P software company would take offense at the claim that their product is incapable of automatically resuming where it stopped.

Traffic controls do "block" cars: for a short duration at stop signs and even longer at stop lights.  But when a stop light is broken and defaults to a stop sign, overall traffic begins to flow much more slowly.  If we take the stop sign out and let people go whenever they feel like it, a collision ensnares cars in a severe traffic jam.  This is precisely what happens when you have no reasonable network management: you get less effective throughput, so applications (including P2P) are actually sped up by reasonable network management.

It's true that Comcast's management scheme has some problems because it harms rare Torrents that are unhealthy to begin with, but there are workarounds for these problems and the problems are overblown.  I even showed how you can force Comcast to seed you 100 times faster without consuming your own upstream or your neighbor's upstream.  So while it's reasonable to argue that Comcast's P2P management system is far from ideal and needs improvement (which Comcast promises to do by the end of this year), it's flagrant deception to portray this as application blocking, and it's ridiculous to turn this into a free speech debate.

How Comcast customers can seed 100 times faster and bypass TCP resets

As many of you reading this blog probably already know, Comcast has been disconnecting a certain percentage of TCP streams emanating from BitTorrent and other P2P (peer-to-peer) seeders.  This effectively delays and degrades the ability of Comcast customers to seed files using P2P applications.  For normal, healthy Torrents that are distributed across multiple users and multiple ISPs, losing a few seeders intermittently isn't too noticeable.  But for rare Torrents, or Torrents that have to originate from a Comcast broadband customer, this can pose some challenges.  The rare Torrent becomes even less reliable than it already is, while popular Torrents originating from Comcast's broadband network take much longer to become healthy.

While Comcast has stated it will try to move off of its Sandvine system that uses TCP resets by the end of this year, there's no guarantee the transition will be completed on schedule, and there's no relief in the meantime for customers who are having a tough time seeding their files for distribution.  Even without the challenges posed by TCP resets, seeding a Torrent is still problematic and burdensome.  Not only does the seeder have to turn his or her computer into a server, they must also allocate significant portions of their upstream bandwidth (as well as their neighbors' bandwidth, which they share) to seeding while providing relatively minimal capacity to the Torrent.

The opposite challenges of client-server and peer-to-peer

One way around this is to use the gigabyte of web space that Comcast provides you, but there are downsides to a pure HTTP client-server model.  While an HTTP client-server model works great because it's persistent and the web server offers much higher capacity than any individual seeder, there's a fixed amount of web server capacity, which quickly divides as the number of clients increases.  That means client-server works great when there are fewer clients but degrades in proportion to the number of clients who want to use the server.

The P2P model has the opposite problem: it works better and better as more clients/peers are added because each client becomes an additional server, but it's horrible when there are few clients and quickly spirals toward death as it loses popularity.  The less popular a Torrent is, the less likely it is to attract peers, because people tend to avoid unpopular Torrents, fearing slower speeds or being stuck with a Torrent that can never complete.
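A toy comparison makes the opposite scaling behaviors plain.  The numbers below are purely illustrative assumptions (a hypothetical 10 Mbps web server and 384 Kbps of upstream contributed per peer); the shape of the result is the point, not the exact figures.

def per_client_kbps_client_server(server_kbps, clients):
    # A fixed pool of server capacity is divided among every active downloader.
    return server_kbps / max(clients, 1)

def per_client_kbps_p2p(upload_kbps_per_peer, peers):
    # Each joining peer contributes its own upstream, so per-client throughput
    # stays roughly flat as the swarm grows -- but a lone peer with nobody to
    # download from gets nothing at all.
    return 0.0 if peers < 2 else upload_kbps_per_peer

for n in (1, 10, 100, 1000):
    cs = per_client_kbps_client_server(10000, n)
    p2p = per_client_kbps_p2p(384, n)
    print(f"{n:>5} clients: client-server {cs:>6.0f} Kbps each, P2P {p2p:>4.0f} Kbps each")

Client-server is superb with a handful of clients and collapses as they multiply; the swarm is steady at scale and useless when nearly empty, which is exactly the complementary behavior the next section exploits.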

Combining the two file distribution models

Given that client-server is weak where peer-to-peer is strong, and that client-server is strong where peer-to-peer is weak, I thought: what if we combined the two models into a single client or a single protocol that gave you the best of both worlds?  All we need to do is have BitTorrent pull from an HTTP source that remains persistent, so that there will always be a fast seed to fall back on even when there are no other peers in the Torrent.

This not only guarantees a survivable Torrent, which attracts more peers; it also provides a rocket boost in performance, because web servers tend to be 50 to 100 times faster than any broadband consumer seed.  The original seeder would be able to seed and forget, avoiding saturating his own upstream connection as well as his neighbors'.  Given that a typical cable ISP running DOCSIS 1.1 allocates 1.25 megabytes/sec of upstream capacity for 200-400 users in a neighborhood, a hybrid web seed would benefit not only consumers but the ISP as well, because it alleviates the biggest bottleneck on a cable ISP's network.

But before I could celebrate my "new" idea and invention, Ludvig Strigeus (creator of the best and most popular BitTorrent client, uTorrent) explained, with tongue placed firmly in cheek, that I was a few years late in my discovery and that the concept of web seeding had already been proposed by John Hoffman and "DeHackEd".  Furthermore, the feature is already implemented: uTorrent, the official BitTorrent client (called "BitTorrent"), and Azureus all support web seeding.  Those three clients command the lion's share of BitTorrent usage, and there is a full list of compatible clients here.

Note: It’s not your imagination that BitTorrent looks just like uTorrent with a few extra features; BitTorrent Corp acquired uTorrent from Ludvig Strigeus at the end of 2006 because it was technically superior.

Web seeding in practice

One of the problematic Torrents that has been hotly contested in the Comcast TCP reset fiasco is the King James Bible.  This is a classic example of something that isn't popular on BitTorrent because it's relatively small and readily available on the web over HTTP, but it still serves as a good example of what happens to rare Torrents using the pure peer-to-peer model.  I created a web seeded Torrent for the King James Bible using Richard Bennett's gigabyte of web space, provided at no extra charge by his ISP Comcast, to serve as a persistent and lightning fast seeder.  I also took this traditional P2P Torrent of Leonardo da Vinci's complete Notebooks and converted it to this web seeded torrent.

The difference in speed for this rare Torrent was an astounding 100-fold increase in performance.  The reliability of this Torrent also shot up because it doesn't depend on an individual seeder in a residential cable broadband network who may shut off his computer or P2P application or get TCP reset.  The web seed sits outside of the cable broadband network in some ultrafast server farm that is spared from TCP resets.  The ultimate irony here is that Comcast, which is known to slow down everyone's BitTorrent seeds, is now giving you a free 100-fold boost in performance!  Not only that, but they're alleviating their last-mile congestion as well as giving you more reliability.

How to create your own web seed

By now I’m sure you want to know how to create your very own web seed Torrent.  Simply fire up a copy of uTorrent or BitTorrent and press Control-N or go to the File menu and hit “Create new Torrent”.

You will see the screen on the right; all you need to do is choose a source file from your hard drive, enter a list of trackers you plan on using, then hit "Create and save as".

Once you save the .Torrent file, the next step is to copy the source file (the book, video, or song) you want to distribute up to a publicly accessible web server such as the free web space Comcast provides their customers.  Many other ISPs also provide free web space if you look around or ask.

Update 5/1/2008 – Robb Topolski posted a comment on this article saying that the example I'm using is actually the GetRight web seeding specification.  Thanks for the clarification, Robb!

At this point in time there is no automated support for creating web seeds so you’re going to have to use a program called BEncode Editor to modify the .Torrent file you just created.  Once you open up the .Torrent file with BEncode Editor, you will see the following window.

All you need to do is hit the + button and add a "url-list" to the root.  You will see the following window, where you will add a list of URLs that have a copy of the file you want to distribute.  Note that if you want to distribute multiple files, you point to the folder containing all of those files rather than a single file.  Also note that in my testing, Apache web servers didn't handle web seeding properly for filenames with spaces, while the IIS 7.0 server I tested worked even when the filename had spaces in it.
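If you would rather script this step than click through BEncode Editor, the same "url-list" key can be added programmatically.  The sketch below assumes the third-party bencodepy package (pip install bencodepy); the file names and web seed URL are placeholders you would replace with your own:

import bencodepy  # third-party package: pip install bencodepy

TORRENT_IN = "mybook.torrent"                 # the .Torrent created by uTorrent
TORRENT_OUT = "mybook.webseeded.torrent"
WEB_SEED_URL = b"http://example-webspace.example.com/mybook.pdf"  # placeholder URL

with open(TORRENT_IN, "rb") as f:
    torrent = bencodepy.decode(f.read())      # keys come back as byte strings

# Add the GetRight-style web seed list at the root of the metadata.
# For a multi-file Torrent, point the URL at the folder rather than one file.
torrent[b"url-list"] = [WEB_SEED_URL]

with open(TORRENT_OUT, "wb") as f:
    f.write(bencodepy.encode(torrent))

print("Wrote", TORRENT_OUT)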

The final step is to publish the .Torrent file on your preferred tracker.  For the purpose of this discussion, I used ThePirateBay.org as the tracker and uploaded the files there after creating an account on their site.  After I finished uploading the .Torrent file, I got this page, which gives you a public link for the .Torrent that you can distribute on a website or via email.  You can also email people the .Torrent file directly, or you can link directly to the .Torrent file from the tracker.  At this point you don't even need to have your computer participate as a seeder, and the seed is extremely fast and reliable.  You can even publish the link to the HTTP site so that people who don't want to or can't (office policy) use the Torrent version can download directly from their web browser.

The final outcome is that Comcast doesn’t have to worry about you hogging all the upstream bandwidth in your neighborhood, you free up your computer and bandwidth, the seed runs up to 100 times faster, and now everyone’s happy because of a technical solution.

Comments on inaccurate testimony at the FCC Stanford hearing

Update: How Comcast customers can seed 100 times faster and bypass TCP resets

The FCC hearing at Stanford University on April 17th, 2008 was filled with inaccurate testimony from various witnesses.  Since that testimony seems to be carrying significant weight both on Capitol Hill and in the media, I feel compelled to set the record straight.  I have filed a copy of this letter on FCC docket 07-52.

Problems with Jon Peha’s testimony
Jon Peha testified that BitTorrent was like a telephone and implied that if a TCP reset is used by Comcast to stop a TCP stream, then that constituted a blockage of BitTorrent.  Furthermore, Professor Peha implied through his telephone analogy that if BitTorrent is blocked, then the user must manually redial to reestablish the connection.  These assertions are highly inaccurate and here’s why.

The first problem is that Jon Peha did not understand the multi-stream aspect of BitTorrent or P2P.  Peha seemed very surprised immediately before our panel at Stanford when I told him that a P2P download typically uses 10 to 30 TCP streams at the same time.  His surprised reply to me was "all active?" and I replied yes.  The reality is that if a certain percentage of BitTorrent TCP streams are reset and temporarily blocked by an ISP, say 15% for example[1], then the "Torrent" (the file that's being exchanged amongst multiple peers over the BitTorrent protocol) is essentially slowed down by an average of 15%.  In other words, the Torrent suffers a 15% partial blockage, which is accurately described as a "delay" since the file transfer doesn't actually stop.  This would be like filling a bathtub from 20 faucets and closing 3 of them: the rate of water flowing into the tub slows but doesn't stop.
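The faucet arithmetic is trivial to check.  Using the same illustrative numbers as above (20 roughly equal streams, 3 of them reset; the per-stream rate is an arbitrary assumption and cancels out):

streams, reset = 20, 3
per_stream_kbps = 20                  # arbitrary; any equal per-stream rate gives the same result
before = streams * per_stream_kbps
after = (streams - reset) * per_stream_kbps
print(f"Throughput drops by {1 - after / before:.0%}")  # 15% slower -- a delay, not a blockage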

The second problem with Jon Peha's testimony is his implication that the user must take some sort of action to resume the BitTorrent connection or else the connection won't resume.  Peha's assertion can easily be proven false by a simple experiment with BitTorrent.  One can confirm that BitTorrent will always resume a lost connection within a matter of seconds without any user intervention simply by physically disconnecting a network cable on the test machine and reconnecting it.  Not only does BitTorrent automatically resume, it picks up where it left off and does not need to start all over again.  So even if all TCP streams in a Torrent were simultaneously blocked for a short period of time, the transfer would quickly resume by itself and eventually finish.  Therefore this is by definition a "delay" and not a "blockage".
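For illustration only, the general shape of the retry logic that makes this true of any swarm-style downloader looks like the sketch below.  This is a conceptual sketch, not BitTorrent's actual code; fetch_piece is a hypothetical callable standing in for a request to a peer.

import time

def download_with_auto_resume(fetch_piece, total_pieces, retry_delay=5):
    # Pieces already received are remembered, so after any reset the loop
    # simply retries the missing ones -- no user action, no starting over.
    completed = {}
    while len(completed) < total_pieces:
        for i in range(total_pieces):
            if i in completed:
                continue                 # never re-download a finished piece
            try:
                completed[i] = fetch_piece(i)
            except ConnectionResetError:
                time.sleep(retry_delay)  # back off briefly, then resume automatically
    return completed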

This is not to say that Comcast's existing form of network management is without problems; it is clear that the system has flaws and unintended consequences, like the accidental blockage of IBM Lotus Notes.  The use of TCP resets also has a more drastic effect on rare Torrents, which are BitTorrent files that are not popular and have few seeders or other peers with parts of the file to download from.  These rare Torrents aren't healthy to begin with, and a TCP reset can in some cases trigger a complete temporary blockage.  The rare Torrent will still get there eventually, but it will suffer significantly more than a normal Torrent that is resilient to partial blockage.

It should be noted that BitTorrent in general is not an appropriate file transfer protocol for rare Torrents.  BitTorrent tracker sites tend to rank and sort Torrents based on their "health", which is based on the number of available seeders and pre-seed peers.  Users generally tend to avoid the "unhealthy" Torrents at the bottom of the list.  Since Comcast offers a vastly superior alternative in the 1 gigabyte of web storage space it provides, Comcast customers can use that service to distribute files 10 to 20 times faster than any single Comcast BitTorrent seeder could ever provide.  To further illustrate this point, Richard Bennett's Comcast-provided web space was used to web-seed a rare Torrent roughly 100 times faster than a residential seeder could manage.

Problems with Robert Topolski’s testimony
Robert Topolski also had problems in his testimony.  Topolski, a software tester who does not work in the networking field, insists that the TCP reset mechanism isn't common in network devices and declared that I was wrong in my testimony.  In my experience as a network engineer who designed and built networks for Fortune 100 companies, the TCP reset mechanism is common in routers and firewalls.  For many years, Internet service providers, including LARIAT (owned and operated by Brett Glass, who has filed comments in this docket), have used TCP RST packets to protect the privacy of dialup Internet users.  When a dialup user's call is over, RST packets are sent to terminate any remaining connections.  This prevents a subsequent caller from receiving information, some of which might be confidential, that was intended for the caller who disconnected.  Thus, the transmission of RST packets by a device other than the one(s) which established a connection, for the purpose of informing the endpoints that the connection has been terminated, is not only commonplace but salutary.  Network architect Richard Bennett, who works for a router maker, explained to me that TCP resets are the standard mechanism used by consumer routers to deal with NAT table overflow, which itself is typically caused by excessive P2P connections.  Are we to believe a single software tester or three networking experts?

The second key problem with Topolski's testimony is that, to my knowledge, he has never provided any forensic data from Comcast in the form of packet captures that can be independently analyzed.  Even if Topolski did produce packet captures and we assumed those captures were authentic, one man's packet captures wouldn't be a large enough sample to draw any conclusions by any legal or scientific standard.  The Vuze data may constitute a large enough sample, but that data isn't very granular because it doesn't tell us what percentage of reset TCP sessions are due to an ISP versus other possible sources.

Furthermore, even if a TCP reset was used by Comcast at 1:45 AM, we cannot assume that there was no spike in congestion at 1:45 AM.  As I have indicated in the past, just 26 full-time BitTorrent seeders in a neighborhood of 200 to 400 users can consume all of the available upstream capacity in a DOCSIS 1.1 cable broadband network.  That means less than 10% of the population seeding all day and night can cause congestion at any time of day.  Based on what little evidence was presented by Robb Topolski, no conclusions can be drawn regarding the question of whether Comcast uses TCP resets for purposes other than congestion management.
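The back-of-the-envelope math behind the 26-seeder figure uses the 1.25 megabytes/sec (10 Mbps) of shared DOCSIS 1.1 upstream mentioned earlier on this page, plus an assumed per-subscriber upstream cap of roughly 384 Kbps, which is my illustrative figure for a typical cable tier of that era:

upstream_channel_kbps = 10000    # 1.25 MB/s of shared DOCSIS 1.1 upstream capacity
per_seeder_upload_kbps = 384     # assumed upstream cap per subscriber (illustrative)
neighborhood = 300               # midpoint of the 200-400 users cited above

seeders = upstream_channel_kbps / per_seeder_upload_kbps
print(f"{seeders:.0f} full-time seeders saturate the upstream channel")       # ~26
print(f"...which is about {seeders / neighborhood:.0%} of the neighborhood")  # ~9%, i.e. under 10%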


1. The reason I use 15% as the example is that the Vuze data, gathered via thousands of users' computers, indicated that a Comcast broadband network typically suffered reset rates of 14% to 23% across all TCP streams during 10-minute sampling periods.  That 23% figure is not restricted to just BitTorrent or P2P traffic, and even the TCP resets pertaining to BitTorrent aren't necessarily from Comcast.  It's quite possible that a reset actually came from the client on the other end, or from a customer-premise router on either end-point trying to conserve NAT (Network Address Translation) resources.  It is undeniable that a certain percentage of the TCP resets have nothing to do with Comcast's network management practices.  We really can't know what percentage of those TCP resets were due to Comcast, and figuring out the exact percentage isn't trivial because there are so many factors to consider.

Note: Any assertions on behalf of Brett Glass and Richard Bennett that I have made in this document have been approved by Brett Glass and Richard Bennett.

FCC hearings at Stanford

Here are the slides I presented at the FCC Network Management hearing in Stanford and here’s a letter version I sent to the FCC which gives a much fuller explanation.  While the letter is a long read (3000+ words), I really hope you take the time to read it since it gives you the full picture of what I was trying to present.  Vontv.net has the event on video though they unfortunately edited out quite a bit of stuff.

The hearing at Stanford on Thursday drew probably one of the most hostile and uncivil crowds you could expect, and it was sad to see such a circus act.  I didn't feel too good about my presentation and missed a lot of key points, since I was somewhat rattled and veered off the script in my hands.  Despite knowing what I might be in for given Richard Bennett's experience, I still got rattled and panicked a bit when the Commissioner skipped over me and then spent valuable time apologizing while my measly 5-minute clock was ticking.  Before I could finish introducing myself, the "Raging Grannies" started to shout me down screaming "WHO PAID YOUR WAY GEORGE!" when I've never taken money for any political activities from anyone in my entire life.  Maybe I need to grow some thicker skin, but I was bothered by the raging grannies who didn't even know who I was and shouldn't have had any beef with me other than the fact that someone told them they should shout me down.

I was disappointed by the audience because I thought surely law students and other Stanford students would be there, since this was such an elite institution of higher learning.  I thought surely these kids would be interested in an FCC hearing, and even if they came in with a bias, at least they would try to listen to an honest debate.  But other than one Stanford grad student I saw and spoke to, what you had was a bunch of people who basically had their cue to cheer and their cue to boo, and they already knew to shout down George Ou right off the bat.  By the end of the day, during public comments, half of the people were from Poor Magazine, and they all went on to vent their rage at anything corporate or American.  One guy indicated that he wanted to injure the economist on the second panel, while another pleaded with the FCC to stop the Defense Department from implementing a secret Internet2 project designed to give the military control of the Internet.  Maybe my expectations were a little unrealistic and overly romanticized, but this was downright ridiculous.

Even more ridiculous was Larry Lessig's excruciating 50-minute circus act of an "opening speech", which single-handedly eliminated nearly all of our break session.  Lessig pulled out all the stops, including one-word-at-a-time slides running many hundreds of slides long and the usual misleading and inflammatory claim that "Verizon blocked text messages" when no text messages were ever blocked.  Verizon merely had a one-day bureaucratic snafu in approving a 5-digit short code phone number for the abortion rights group NARAL, and it quickly apologized, but people like Larry Lessig and Tim Wu continue to mislead the public that text messages were blocked.  Lessig (Stanford professor), Harold Feld from the Media Access Project, Rob Topolski (software quality assurance engineer), and Barbara van Schewick (Stanford professor) went on forever during the panel while Brett Glass and I got cut off.