Category Archives: Networking

Bizarre DHCP server error solved

Here’s a solution for one of the oddest errors I’ve ever seen. We had a PC which had been wiped and the OS reinstalled, but it could not get an IP address from DHCP. The error message we got was:

“An error occurred while renewing interface Local Area Connection : The name specified in the network control block (NCB) is in use on a remote adapter.”

So we wiped the OS and tried again (with a different version of Windows this time); same error. We did a bit of digging and found a reservation in DHCP (this is a DHCP server on Windows 2003 R2) for the machine’s MAC address but with the PC’s original name. We deleted that reservation and tried again. Same error. We did everything we could to make it “take”… restarted the DHCP service, rebooted the client, etc. Packet captures showed nothing unusual either.

Upon further investigation, we found a second reservation in DHCP which looked odd; it had the same IP address as the first one we deleted, but an entirely different MAC and name. We deleted that reservation anyway, and BANG. It worked fine.
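If you ever need to hunt for this kind of conflict, one rough approach is to export the scope’s reservations with netsh (something along the lines of “netsh dhcp server \\yourserver scope 10.0.0.0 show reservedip > reservations.txt”; the exact command and output format vary by server version) and then flag any IP address that appears with more than one MAC. Here is a minimal, hypothetical Python sketch of that check; the file name is an assumption and the parsing is approximate (it assumes each reservation’s IP and MAC appear on the same line):

```python
# Illustrative sketch (not an official tool): flag DHCP reservations that share
# an IP address but list different MAC addresses. Assumes the reservations were
# exported to "reservations.txt"; adjust the parsing to your netsh output.
import re
from collections import defaultdict

ip_re  = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")
mac_re = re.compile(r"\b([0-9A-Fa-f]{2}(?:[-:]?[0-9A-Fa-f]{2}){5})\b")

reservations = defaultdict(set)          # IP address -> set of MACs seen for it
with open("reservations.txt") as f:
    for line in f:
        ip, mac = ip_re.search(line), mac_re.search(line)
        if ip and mac:
            reservations[ip.group(1)].add(mac.group(1).lower())

for ip, macs in sorted(reservations.items()):
    if len(macs) > 1:
        print(f"Possible conflicting reservations for {ip}: {', '.join(sorted(macs))}")
```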

J.Ja

Windows 7 and Microsoft Network Monitor error solved

When I tried to start a capture using Microsoft Network Monitor 3.3 on Windows 7, I received the following error:

None of the network adapters are bound to the netmon driver.

If you have just installed, you may need to log out and log back in order to obtain the proper rights to capture.
Please refer to the product help for more information.

I’ve found that a simple “Run as Administrator…” resolves the problem. Sorry, Wireshark users: Wireshark relies on a component (WinPcap) which does not install on Windows 7 at this time.

J.Ja

Animated presentation explaining the need for a prioritized Internet

Quality of Service (QoS) network prioritization is a complex technology that is often misunderstood and maligned. Because it is difficult to explain in words and pictures alone, I’ve created a 7-minute animated presentation that attempts to simplify the concept for non-engineers. The first few slides are mostly text, so you can skip ahead at your own pace. Please enjoy the presentation.


ISPs have a duty to block malicious traffic

AT&T and other ISPs stop DDoS attack from 4chan

Mass media and blogosphere hysteria ensued after several ISPs (including AT&T) responded to customer complaints and blocked an IP address that was transmitting massive amounts of denial-of-service (DoS) traffic. For something as routine and essential as blocking a malicious attack from a computer on the Internet, all hell broke loose late Sunday evening and early Monday morning because the IP address belonged to a popular image-sharing site called 4chan, whose members are infamous for perpetrating porn-flooding pranks on YouTube as well as organizing DoS attacks against other websites.

Read the rest at DigitalSociety.org

Does Verizon really deserve criticism for deploying the most fiber?

Five months ago, Saul Hansell of the New York Times took Verizon behind the woodshed for being the corporate equivalent of the village idiot.  Verizon, according to Hansell’s source Craig Moffett, was giving consumers a Maserati for the price of a Volkswagen because Verizon’s new FiOS service was based on super expensive Fiber To The Home (FTTH) technology.  Yesterday, Mr. Hansell took Verizon behind the woodshed again because Verizon might stand to benefit the most from the broadband stimulus tax credits for deploying the most FTTH.  But given the fact that Verizon has been such a good corporate citizen by being the only large telecom operator to risk huge amounts of capital to deploy fiber to the home, would it not stand to reason that Verizon deservedly gets a proportional benefit from tax credits designed to spur this kind of behavior?

One of Saul Hansell’s assertions is that Verizon could get $1.6 billion over the next two years for doing nothing different from what it had already planned to do.  This of course assumes that Verizon was not going to scale back its FTTH plans as a prudent response to the economic crisis, with consumers tightening their belts on spending.  It also assumes that Verizon wouldn’t increase its level of investment in response to the tax credits.

Another gripe raised in Hansell’s piece is that the tax credits apply to next-generation broadband deployment for any residential subscriber, not just unserved, low-income and rural areas.  But is Mr. Hansell seriously suggesting that we should only incentivize next-generation broadband for unserved, low-income and rural areas?  The bulk of the broadband market does not fall into these three categories, so if we want the biggest increase in broadband investment, and therefore the most economic stimulus and jobs, we cannot focus on these three areas alone.

Most people don’t know the magnitude of the risk that Verizon took to deploy FTTH despite early opposition from Verizon shareholders.  Because of Verizon, the United States has some of the most FTTH deployment to detached single-family homes, which are far more expensive to wire than the dense multi-dwelling units common in countries like France, Korea, and Japan.  Verizon has had to invest more than $3,700 per subscriber (assuming 25% uptake) to give people the fastest and highest-capacity broadband and television service in the nation.  Roughly half of the money spent by Verizon went directly to the hard-working men and women who installed fiber to over 10 million homes.  Even the money spent on equipment and materials put someone to work designing, manufacturing, marketing, and selling that equipment.

Despite all the criticism, the beauty of tax credits is the multiplier effect, where one dollar of tax breaks can easily spur five dollars of private investment.  In ITIF’s January 2009 paper “The Digital Road To Recovery“, we estimated that a one-year $10 billion investment in broadband networks would support approximately 498,000 new or retained jobs in the United States.  In this economic storm, where workers are getting laid off left and right, stimulating labor-intensive broadband projects sounds like just what the doctor ordered.  More broadband investment leads to more telecom jobs and more manufacturing jobs, or at the very least fewer layoffs.  That translates to lower unemployment, which means not only fewer unemployment checks paid out by the government but ultimately more tax revenue from more workers, so the government and society get a good return on investment.
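For a rough sense of scale, here is a purely illustrative bit of arithmetic that combines the two figures cited above (the roughly 5x multiplier and the ITIF estimate of 498,000 jobs per $10 billion of one-year investment). The multiplier value is an assumption from the text, and this is not a result taken from the ITIF paper itself:

```python
# Purely illustrative arithmetic combining the figures cited above; the 5x
# multiplier is an assumption, not a computed result from the ITIF paper.
TAX_CREDIT_BILLION = 1.6            # the figure Hansell cites for Verizon
INVESTMENT_MULTIPLIER = 5           # ~$5 of private investment per $1 of tax break
JOBS_PER_BILLION = 498_000 / 10     # ITIF estimate: ~498,000 jobs per $10B invested

investment = TAX_CREDIT_BILLION * INVESTMENT_MULTIPLIER
jobs = investment * JOBS_PER_BILLION
print(f"${TAX_CREDIT_BILLION:.1f}B in credits -> ~${investment:.0f}B of investment "
      f"-> ~{jobs:,.0f} jobs supported (by the ITIF ratio)")
```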

How to restrict Windows or ISA VPN access by client IP address

Back in March, I set up our ISA 2006 server and configured the VPN on it. Since I have a static IP address at my home office, I created an access rule in ISA Server that denied all external IP addresses access to the VPN. Me being me, I didn’t test; I assumed (big mistake) that this would work. Today, I found out that it was not sufficient to restrict VPN access by IP address. Searching the Internet, I did not find a good tutorial on how to do this, so here is mine.

  • Open up the “Routing and Remote Access” widget from “Administrative Tools”.
  • Go to the “Remote Access Policies” node in the tree.
  • Double click the “ISA Server Default Policy” item to bring up the properties.
  • Click “Add”
  • Select “Calling-Station-Id” and click “Add”.
  • The Calling-Station-Id field is a regular expression. The format is to put each allowed IP address within parentheses, escaping the literal periods with a backslash, and to separate each of these blocks with a pipe character. For example: (123\.111\.111\.112)|(111\.112\.143\.54)

    You may also use wildcard characters. (123\.111\.111\..*) will match all IPs starting with 123.111.111., and (123\.111\.111\.11.) will match all IPs that start with 123.111.111.11 followed by any single character (for example, 123.111.111.110 through 123.111.111.119). In regular expressions, an unescaped period matches any single character and the asterisk matches the preceding character any number of times; that is why the literal periods in the IP addresses are escaped with a backslash as shown above. A short sketch of how these patterns behave follows this list.
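Here is a minimal sketch of how such a pattern behaves, using Python’s re module as a stand-in for the policy’s matcher (which may not anchor matches exactly the same way); treat it purely as an illustration of the regular-expression syntax, not of ISA/IAS internals. The addresses are the ones from the example above:

```python
# Minimal sketch of how the Calling-Station-Id pattern behaves, using Python's
# re module as a stand-in for the policy's matcher.
import re

pattern = r"(123\.111\.111\.112)|(111\.112\.143\.54)"

for client_ip in ("123.111.111.112", "111.112.143.54", "123.111.111.113"):
    allowed = re.fullmatch(pattern, client_ip) is not None
    print(f"{client_ip}: {'allowed' if allowed else 'denied'}")
```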

J.Ja

Polycom VTCs, Web Access, and ISA Server

If you have an ISA Server between a client and a Polycom VTC (video teleconferencing) unit, you may have problems using some of the features in the Web-based administration system. For example, the dropdowns in the “Directory” may be blank. After some time on the phone with Polycom today, we found a fix.

Here’s the root cause of the problem: to retrieve the contents of some of the dropdowns, Java applets make calls to the VTC over the Web access port (by default, port 80). The VTC unit responds to the first request with status 401: authentication required. The Java applet then responds to the VTC with the username and password. When the VTC receives this, it sends its response. Unfortunately, the VTC in this scenario has decided not to send a proper HTTP response, even though it received an HTTP request; it omits the HTTP headers! As a result, ISA Server sees a “broken” response and returns an error 500 to the requesting Java applet. The way to detect this problem is to look at the Java console (you’ll see it in the system tray) for errors involving HTTP status 500.
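If you want to confirm this behavior yourself, a rough way is to send a bare request to the VTC’s web port and check whether the reply starts with a proper HTTP status line. The sketch below is a hypothetical diagnostic, not a Polycom or ISA tool: the host and port are placeholders, and it does not reproduce the authenticated second request described above.

```python
# Hedged diagnostic sketch: send a bare request to the VTC's web port and check
# whether the reply begins with an HTTP status line. Host/port are placeholders.
import socket

host, port = "10.0.0.50", 80

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, port), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    reply = sock.recv(4096)

if reply.startswith(b"HTTP/"):
    print("Reply starts with a proper HTTP status line; headers are present.")
else:
    print("Reply is missing the HTTP status line/headers:", reply[:80])
```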

The way to fix this is quite easy. Go to the Admin Settings of the VTC, then “Security”, and change the port that the Web Access uses (at the bottom of the screen). Then create an access rule in ISA Server (or publish the VTC, depending upon where in your network it sits in relation to the clients accessing it) with a new protocol on the port you chose. The reason this workaround is effective is that ISA Server is no longer treating the communication as an HTTP conversation. As such, the traffic that the Polycom unit sends which does not properly adhere to the HTTP protocol will go through smoothly.

J.Ja

Fairness is the ultimate end-game in network management

Update 8/12/2008 – Robb Topolski has graciously apologized for the personal attacks.

I have to admit that I was surprised at the speed at which I’ve been attacked over my latest FCC filing “Guide to protocol agnostic network management schemes“.  Within hours of my filing, Robb Topolski, who now works for Public Knowledge, wrote an attack blog “George Ou: Protocol Agnostic doesn’t mean Protocol Agnostic” which, along with a series of ad hominem attacks, basically accuses me of being hypocritical.  I won’t respond to the senseless personal attacks, but I am going to rebut the core accusation that favoring protocol protection in the context of “Protocol Agnostic” network management schemes is somehow hypocritical.

Before we debate the meaning and context of the word “agnostic”, we must first ask ourselves what underlying problem we are trying to solve.  The answer is that we want a broadband network management system that distributes bandwidth more equitably between users in the same service/price tier, and a broadband network that ensures the best possible experience for every protocol and its associated applications.

Network Architect Richard Bennett put it best when he described network management as a two-phase process.  In his blog he recently wrote:

An ideal residential Internet access system needs to be managed in two different but equally important phases:

1) Allocate bandwidth fairly among competing accounts; and then

2) Prioritize streams within each account according to application requirements.

Phase 1 keeps you from being swamped by your neighbor, and keeps you from swamping him, and Phase 2 prevents your VoIP session from being swamped by your BitTorrent session.

So to address the first problem, we need a way to measure bandwidth consumption between these “competing accounts”.  The common solution on the marketplace today computes bandwidth consumption implicitly and indirectly from the protocols passing through the network.  This method of looking at the protocol to determine bandwidth consumption was initially chosen because it is computationally simpler and cheaper than trying to track usage statistics for tens of thousands of simultaneous application flows in real time.  While this solution is mostly accurate, it isn’t completely accurate.

As the identification of protocols becomes increasingly difficult because of protocol obfuscation techniques, and as the cost of computational power declines, we’re starting to see a shift towards network management systems that don’t use protocol analysis to determine which users consume the most bandwidth.  These new systems compute bandwidth distribution by tracking the bandwidth consumption of every flow on the network without looking at what protocols are passing through the system, and these are the systems we refer to as “Protocol Agnostic”.  But being “agnostic” strictly applies to this first phase of network management.

The second phase of network management is to ensure that every protocol works well.  Because protocols and their applications have unique network requirements, this phase requires knowing which protocols are passing through the system so that each can be given the necessary protection from other protocols sharing the same network.  It would be foolish to apply the concept of agnosticism to this phase of management, because doing so would make the phase worthless.
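To make the two phases concrete, here is a toy sketch of the idea in Python. It is illustrative only, not any vendor’s implementation: the link capacity, account names, flows, and demand figures are all assumptions, and a real system would operate on live packet queues rather than a static table.

```python
# Toy sketch of the two-phase idea: split capacity evenly across accounts
# (protocol agnostic), then serve real-time flows before bulk flows within
# each account (protocol aware). All numbers and names are assumptions.
LINK_CAPACITY_KBPS = 6000

# account -> list of (flow, is_realtime, demand_kbps)
accounts = {
    "alice": [("voip", True, 87), ("bittorrent", False, 5000)],
    "bob":   [("gaming", True, 64), ("http", False, 1500)],
}

per_account = LINK_CAPACITY_KBPS / len(accounts)      # phase 1: fair share

for name, flows in accounts.items():
    remaining = per_account
    allocation = {}
    # phase 2: real-time flows first, bulk flows get whatever is left
    for flow, realtime, demand in sorted(flows, key=lambda f: not f[1]):
        allocation[flow] = min(demand, remaining)
        remaining -= allocation[flow]
    print(name, allocation)
```

Phase 1 never looks at the protocol, only at per-account totals; phase 2 is deliberately protocol aware, which is exactly the distinction drawn above.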

The reality is that a network can’t simply be a dumb pipe if we want every protocol/application to work well, and no one has ever claimed that “protocol agnostic” should mean a dumb pipe.  P2P applications not only grab nearly every bit of available bandwidth regardless of how much capacity there is; they can also cause significant problems for real-time applications like VoIP or online gaming even when the P2P application uses only a small fraction of the network.  I have published experimental data showing that even low-throughput, mild usage of P2P applications can cause significant spikes in network latency and jitter (latency and jitter explained here), which can severely degrade the quality of real-time applications.  Even without any knowledge of how networks operate, anyone who has ever tried to play an online FPS (First Person Shooter) game or use a VoIP phone while a P2P application runs on the same broadband connection understands how miserable the experience is.  In fact, this is a common source of contention between roommates and family members.

This is precisely why Comcast needs to protect the VoIP traffic of service providers like Vonage.  Even though Comcast has nothing to gain, because its own telephony service runs outside of the broadband network on a different frequency, these additional efforts to ensure proper VoIP operation are actually helping a competing telephony service.  In theory, Comcast could just leave this issue alone and let Vonage and other competing VoIP services suffer while its own telephony service remains immune.  In practice, many of Comcast’s customers will blame Comcast if they have any problems once the new system is in place, and Comcast needs to do everything it can to make its customers happy.

Robb Topolski claims on my behalf that these problems are somehow caused by Comcast when he says that “George has shown that Comcast’s proposed Protocol Agnostic scheme has unacceptable side effects”.  Aside from the fact that I’ve said or shown nothing of the sort, it’s ridiculous to paint these noble and reasonable efforts into something sinister.  The Free Press has used similar fear-mongering tactics to discredit these new network management efforts by Comcast, questioning why Vonage needs these special deals with Comcast in order for its traffic to go through and insinuating that this is some sort of protection racket.  The fact of the matter is that Comcast not only doesn’t block Vonage, it is trying to prevent Vonage from being blocked by self-inflicted damage whenever the user runs aggressive protocols like P2P simultaneously with VoIP.

But these advocates of strict Net Neutrality and the “dumb” Internet continue to shout that providing protocol protection for VoIP or any other real-time application is tantamount to “discrimination” against P2P protocols.  They’ll often ask “why can’t you give P2P the same protection?”, but the question is silly when you look at the reality of the situation.  P2P applications don’t need this kind of protection; they’re already thriving at roughly 30 times the throughput of the VoIP application on a 3 Mbps broadband connection.

If we take an objective look at the situation, the P2P application is already consuming over 96% of the network capacity, so would it be wise to mandate a dumb network that allowed the P2P application to consume 98% of the network and completely break the VoIP application?  If the network could only fairly allocate 2 Mbps of bandwidth to its broadband customers during peak usage hours, would it be so wrong to expect the P2P application to drop down to 95% of the broadband link while the VoIP application remains at the same minuscule 87 Kbps bitrate?  I guess according to Robb Topolski, this is somehow unfair because the P2P application is forced to slow down while the VoIP application doesn’t.

If this is the kind of petty game we’re playing, we could simply have a network management scheme that allocates “equal” amounts of bandwidth to P2P and VoIP whenever both protocols are active, which means both protocols would operate at 87 Kbps, and we could call that “equality”.  Obviously no one would want that kind of system, and there’s no need to force this level of equality, so the P2P application is allowed to consume whatever bandwidth is left over from the VoIP application.  But taking what’s left over is beneficial to the P2P application, because the leftovers are at least an order of magnitude more than what the VoIP application gets.
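The arithmetic behind those percentages is worth spelling out. The short sketch below simply recomputes the shares from the figures in the text (an 87 Kbps VoIP call on 3 Mbps and 2 Mbps links); it is illustrative only:

```python
# Worked arithmetic for the shares discussed above, using the figures from the
# text (an 87 Kbps VoIP call on 3 Mbps and 2 Mbps links).
VOIP_KBPS = 87

for link_kbps in (3000, 2000):
    p2p_kbps = link_kbps - VOIP_KBPS          # P2P takes whatever VoIP leaves over
    print(f"{link_kbps / 1000:.0f} Mbps link: VoIP {VOIP_KBPS} Kbps "
          f"({VOIP_KBPS / link_kbps:.1%}), P2P {p2p_kbps} Kbps "
          f"({p2p_kbps / link_kbps:.1%}), or ~{p2p_kbps / VOIP_KBPS:.0f}x the VoIP rate")
```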

Robb Topolski then pulls out the DPI (Deep Packet Inspection) bogeyman and claims that any sort of protocol-aware system would violate user privacy by snooping beyond the packet headers and looking into the user data.  That’s nonsense on multiple levels, because a system wouldn’t necessarily need to look at user data to determine the protocol being used, and even if it did, there’s nothing wrong with an automated system that parses through user data.  In fact, millions of emails a day are parsed by anti-spam systems to prevent your inbox from being inundated with spam, but we don’t consider that an invasion of privacy because of the context and scope of the system.

But a smart network management system could simply look at the protocol headers, without parsing any of the user data, to determine what protocol is being used, because it doesn’t have to fear protocol masquerading techniques.  This is because the protocol-agnostic part of the system already tracks usage statistics, which allows the network operator to implement a priority budgeting system where everyone gets a certain amount of priority delivery bandwidth.  So, for example, if everyone had a daily budget of 12 hours of priority VoIP service, or 12 hours of priority online gaming, or 3 hours of priority video conferencing, they would be free to squander that budget in 30 minutes with a burst of high-bandwidth P2P usage masquerading as VoIP traffic.  After the daily allowance is used up, all traffic would still go through but would be treated as normal-priority data.
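Here is a toy sketch of such a priority budget in Python. The budget size, class name, and flow sizes are assumptions for illustration; a real implementation would meter bytes on the wire rather than take sizes as arguments.

```python
# Toy sketch of a daily priority budget; the budget size and flow sizes are
# illustrative assumptions.
DAILY_PRIORITY_BUDGET_MB = 500

class Subscriber:
    def __init__(self, name):
        self.name = name
        self.priority_used_mb = 0.0

    def classify(self, flow_mb, wants_priority):
        """Pick a queue for the flow and draw down the priority budget."""
        if wants_priority and self.priority_used_mb < DAILY_PRIORITY_BUDGET_MB:
            self.priority_used_mb += flow_mb       # even a masquerading burst is charged
            return "priority"
        return "best-effort"

sub = Subscriber("account-42")
print(sub.classify(5, wants_priority=True))    # small VoIP-sized flow -> priority
print(sub.classify(600, wants_priority=True))  # burst masquerading as VoIP -> exhausts the budget
print(sub.classify(5, wants_priority=True))    # budget spent -> best-effort for the rest of the day
```

The point is that masquerading buys nothing in the long run: a P2P burst dressed up as VoIP simply burns through the subscriber’s own priority allowance.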

Chief BT researcher Bob Briscoe, who is leading an effort to reform TCP congestion control at the IETF, explained this concept of saving up for priority quite nicely, and network architect Richard Bennett has also spoken positively about this type of system.  Priority budget schemes would help solve the fundamental fairness problem where some users consume hundreds of times more volume than the majority of users even though everyone pays the same price.  The network would essentially give low-usage, real-time application customers a responsive delivery service while high-usage P2P customers get all the volume they want, so everyone gets what they want.

When we focus on the ultimate endgame of fairness, it isn’t hypocritical to advocate equitable distribution of bandwidth through protocol agnostic means while simultaneously advocating protocol-specific prioritization schemes that ensure the best possible experience for all people and all protocols.  These two goals are completely separate and completely honorable.

Slow Web throughput on ISA Server 2006: SOLVED

Ever since I put up an ISA Server 2006 deployment, we had noticed that Web access through it was incredibly slow. When we connected test machines directly to the Internet connection (the FiOS line that I’ve mentioned before), it was blazing fast. But outbound Web access through the ISA Server was slow, slow, slow. Well, I fixed it!

When I started troubleshooting it, I first looked to the Event Viewer for failures. Other than a few minor items about DNS, I didn’t see anything in there at all. We tried all of the usual diagnostics, and it was clear that network latency was not the problem. I also tried a good number of other tests, including DNS lookups, all of which looked good.

Performance monitoring showed that the local SQL Server installation (used for capturing the logs) was hoovering up RAM. So I got into SQL Server and capped its memory usage at 512 MB. Overall RAM consumption came down and the box now ran with plenty of free memory, but Web access was still slow. Outgoing access for all other protocols was super fast, and incoming traffic was super fast. What gives?

Next, I decided to take a look at the DNS problems in Event Viewer. You can never be too careful, and who wants errors anyway? I fixed up the DNS problems, and some sites seemed to improve, but overall access was still quite slow. The trend now seemed to be that frequently accessed sites (based on hostname) were speedy, but everything else was slow. This deepened my suspicion of DNS problems.

At this point, I had nowhere else to turn. Monitoring and performance logging told me what I already knew: everything was fine except the retrieval of non-cached Web pages. I was typing up an email about the problem, enumerating all of the potential points of slowness and why I could or could not rule them out. Everything could be ruled out easily, except for the DNS situation. Who knows what the internal DNS stack is really doing, especially when the ISA Server is its own DNS server?

So I decided to take yet another look at the DNS configuration. The configuration of the local DNS server was perfect: no problems found, no errors in Event Viewer. On a whim, I checked the DNS entries of all of the NICs. The only entry of note was that the LAN NIC had an alternate DNS server configured, pointing to a machine that was no longer in use. That entry was put in as a “fallback” when the server was first installed; it pointed to the previous firewall, in case the previous firewall had a DNS entry that we had not yet moved to the new nameservers.

But how much trouble could this cause, right? After all, the lookup should be occurring on the WAN NIC, not the LAN NIC, and this is the alternate DNS server, which should never be hit anyway! Well, I removed the entry just for correctness, and BAM. Problem solved.

Microsoft, sometimes you baffle me. Why in the world would the alternate DNS server on the LAN NIC affect performance on the WAN NIC, especially when the primary DNS server for all NICs is localhost, and localhost’s DNS server forwards to the WAN ISP’s DNS servers when needed? And why would this only affect Web access, and not FTP, SMTP, etc.? Regardless, if you are seeing insanely slow Web throughput on your ISA Server 2006 install, check your entire DNS subsystem thoroughly before giving up.
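One quick way to separate DNS delay from proxy or transfer delay is to time name resolution and the page fetch independently from a client behind the ISA server. The hostnames below are just examples, and the script is a rough diagnostic sketch, not part of ISA:

```python
# Rough diagnostic sketch: time name resolution separately from the page fetch
# to see whether slowness comes from DNS or from the proxy/transfer path.
import socket
import time
import urllib.request

for host in ("www.example.com", "www.iana.org"):
    t0 = time.time()
    socket.getaddrinfo(host, 80)
    dns_seconds = time.time() - t0

    t0 = time.time()
    urllib.request.urlopen(f"http://{host}/", timeout=30).read()
    fetch_seconds = time.time() - t0

    print(f"{host}: DNS {dns_seconds:.2f}s, fetch {fetch_seconds:.2f}s")
```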

J.Ja

Innovation 08 panel on Net Neutrality at Santa Clara University

Thursday morning I sat on a panel at the Innovation 08 Net Neutrality event at Santa Clara University.  This came right on the heels of my Brussels trip, where I gave a presentation on Net Neutrality to some members of the European Parliament and various industry folks.  The jet lag wasn’t so bad, but the bigger problem for me was missing my 6-year-old daughter’s first big singing solo at her school, which happened to be at the same time as my panel.  I spent a lot of time training her, so it was certainly a big disappointment for me.  The jet lag did have a lot to do with why this blog post wasn’t published earlier yesterday.


Richard Whitt, George Ou, Ron Yokubaitis, Richard Bennett, Jay Monahan
Photo credit: Cade Metz

The story hasn’t gotten much coverage (yet), but here we have some from Cade Metz.  I guess it’s a slight improvement, because this time Metz at least didn’t try to falsely insinuate that I was against transparency for Comcast.  His piece was gushing with love for Google, but at least he quoted me accurately and got my point across.

Ou is adamant that – whether it (Net Neutrality rules) forbids ISPs from prioritizing apps and services or it forbids them from selling prioritization – neutrality regulation would actually prevent things like video and voice from flourishing on our worldwide IP network. “If you forbid prioritization, you forbid converged networks,” he said. “And if you forbid converged networks, you get a bunch of tiny networks that are designed to do very specific things. Why not merge them into one fat pipe and let the consumer pick and choose what they want to run?”

This is such an important point, because latency and jitter are killers for real-time applications like VoIP, gaming, and IPTV.  As I showed in my research, even mild usage of BitTorrent on a single computer in a home can ruin the experience for everyone in that home.  If prioritization technology is banned in broadband, then we’ll simply end up with less functional broadband and a statically separated IPTV service.  With a converged IP broadband network that delivers IPTV and Internet access, the consumer gets a massive converged pipe and has the power of control at their fingertips: turn off the IPTV and all that bandwidth is freed up for the Internet service.  If the government prohibits intelligent networks that guarantee quality of service, ISPs will be forced to separate their TV and Internet pipes with a fixed boundary, and the consumer is left with a permanent slow lane rather than a slow lane plus a fast lane that they can dynamically allocate to their TV or their Internet.

Metz also couldn’t resist taking a personal jab at me and Richard Bennett:

“The panel also included George Ou and Richard Bennett, two networking-obsessed pals who have vehemently defended Comcast’s right to throttle peer-to-peer traffic, and Whitt received more than a few harsh words from Ou.”

The disparaging tone was uncalled for, and when you put it side by side with the treatment he gave Google, the bias is blatantly obvious and journalistically unprofessional.

Metz swooned over Google’s idealism:

“The question was raised by the top level management at Google: What do we think about network neutrality – about this notion that broadband companies have the power to pick winners and losers on the internet?” Whitt explained. “One position was that in the environment [proposed by Whitacre], Google would do quite well.”

“This side of the argument said: We were pretty well known on the internet. We were pretty popular. We had some funds available. We could essentially buy prioritization that would ensure we would be the search engine used by everybody. We would come out fine – a non-neutral world would be a good world for us.”

But then that Google idealism kicked in.

Idealism, huh?  Too bad Metz left out the part where Google’s Whitt admitted that they were against network intelligence and enhanced QoS, even though he refused to answer a simple yes/no question on whether he and Google support the actual Net Neutrality legislation.  Make no mistake, Google’s position is based on crippling its video competitors in the IPTV market, which is critical to adding competition to the cable and satellite TV market, a market that is far more expensive and relevant to everyday Americans.  It has nothing to do with Google idealism.

Ron Yokubaitis went off with the typical spiel about how DPI (Deep Packet Inspection) is all about violating user privacy, reading consumers’ email to inject ads, serving as a tool of the big bad RIAA/MPAA for figuring out what song or movie you’re downloading, and how this is similar to communist China.  Yet DPI has nothing to do with reading email, since that is a function of spam filters, and it has nothing to do with violating people’s privacy.  DPI is merely a mechanism that analyzes which protocol someone is using, and it really isn’t a method used by the MPAA and RIAA.

I also pointed out the irony that it is companies like Google that want to inject ads and data-mine your Gmail account, and that we bash the telecoms when it’s companies like Google that censor information from the Chinese people.  I’m also reminded that people were imprisoned in China for simply speaking out, because search engine providers like Yahoo turned them in to the government.  Perhaps this wasn’t a great tangent for me to go off on, but I get irritated by attacks wrongly focused on ISPs when they would often be more appropriate for search engine companies.

Richard Bennett also posted something about this event and wrote:

What really happened is this: Google has invested hundreds of millions of dollars in server farms to put its content, chiefly YouTube, in an Internet fast lane, and it fought for the first incarnation in order to protect its high-priority access to your ISP. Now that we’re in a second phase that’s all about empowering P2P, Google has been much less vocal, because it can only lose in this fight. Good P2P takes Google out of the video game, as there’s no way for them to insert adds in P2P streams. So this is why they want P2P to suck. The new tools will simply try to convince consumers to stick with Google and leave that raunchy old P2P to the pirates.

I’m not so sure Google really feels threatened by P2P, since P2P cannot deliver a good on-demand streaming experience beyond roughly 300 Kbps, or whatever the common broadband upstream speed is. That’s the problem with out-of-order delivery from a bunch of peers that may or may not be online: it can’t keep up unless the swarm has several times more seeders than downloaders. Since the normal ratio is several times more downloaders than seeders, you simply can’t do high-bandwidth, in-order delivery of video. This is why you’re not seeing YouTube take a dive in popularity, and why every instant-play site uses the client-server CDN delivery model.

The main reason P2P is so popular is that there is so much “free” (read: pirated) content available. The actual usability and quality are poor compared to commercial video-on-demand services. Not only do you get lower quality and lower bitrates in the 1 to 1.5 Mbps range, you have to wait hours for the video to finish downloading before you can start watching it, and it hogs your upstream bandwidth in the process. Legal for-pay services such as Netflix all use the client-server CDN (Content Distribution Network, i.e., caching) delivery model because it offers an instant-play experience and much higher video quality at around 4 Mbps. Other services like Microsoft’s Xbox Live Marketplace use client-server CDN delivery at roughly 6.9 Mbps.

While it may be possible to get 6.9 Mbps from a P2P client, it’s rare that a single torrent will be healthy enough to hit that speed, and the data certainly won’t arrive in order, making it impossible to watch while you download.
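A back-of-the-envelope calculation shows why: the sustainable per-viewer rate in a swarm is roughly the aggregate upload capacity divided by the number of viewers, so unless seeders greatly outnumber viewers, the swarm cannot feed everyone at CDN-class bitrates. The upstream figure and the ratios below are assumptions for illustration:

```python
# Back-of-the-envelope sketch: sustainable per-viewer rate in a swarm is roughly
# the aggregate upload capacity divided by the number of viewers.
UPSTREAM_KBPS = 300     # an assumed typical residential upstream speed of the era

for seeders, viewers in ((1, 4), (1, 1), (4, 1), (20, 1)):
    uploaders = seeders + viewers            # viewers also upload pieces they have
    per_viewer_kbps = uploaders * UPSTREAM_KBPS / viewers
    print(f"{seeders} seeder(s), {viewers} viewer(s): ~{per_viewer_kbps:.0f} Kbps per viewer")
```

Only when seeders heavily outnumber viewers does the per-viewer rate approach the 4 to 6.9 Mbps that CDN-based services deliver, which is the point made above.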