Archive for the ‘Networking’ Category

Beware of Intel NIC driver updates

June 10th, 2008 7 comments

I had a pretty harsh experience tonight. I have one of Intel’s server-grade NICs in my ISA server at work. It has a lot of IP addresses bound to the external adapter. We updated the NIC’s driver due to some odd behavior we were seeing (some of the ports were sometimes not being detected after a cold boot). Well, the installer decided to pick one of the IP addresses assigned to each port as the primary IP address and to drop all of the other IP addresses bound to the adapter. So instead of the 5 minutes of downtime we expected, I got to spend an hour re-typing the critical IP addresses, and another hour tomorrow typing in the non-critical IP addresses. Let this be a warning: if you plan on updating the drivers for your Intel NIC, plan on possibly needing to re-configure it afterwards!

J.Ja

Categories: Intel, Networking Tags:

How to optimize your DSL broadband performance with Jumbo Frame support

June 6th, 2008 8 comments

In my last article, “Why BitTorrent causes so much latency and how to fix it”, I talked a little about packet sizes, and some people asked me what the optimum packet size for maximum throughput is on a DSL broadband connection.  In this article I’m going to show you how to optimize your DSL broadband connection in Windows Vista using the netsh command.  I will also show you how to find the optimum compromise between a Jumbo Frame LAN and a DSL PPPoE broadband connection.

It turns out that the default 1500-BYTE packet isn’t optimum for DSL PPPoE-based broadband connections because of the additional 8-BYTE PPPoE overhead.  If you send out maximum-size 1500-BYTE packets, then each packet will fragment into two packets because it’s too big for the PPPoE DSL connection, so you end up with double the packets and double the overhead.

Note that Cable Broadband users don’t need to change this because they don’t need to deal with the PPPoE overhead.
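As a quick sanity check, the fragmentation arithmetic can be sketched in a few lines of Python.  The 8-BYTE PPPoE overhead figure comes from the discussion above; the ceiling-division helper is just for illustration and deliberately ignores the extra IP header repeated on each fragment.

```python
# Sketch: why a full-size 1500-byte packet fragments over PPPoE DSL.
# Values from the article: 1500-byte Ethernet MTU, 8-byte PPPoE overhead.
ETHERNET_MTU = 1500
PPPOE_OVERHEAD = 8
PPPOE_MTU = ETHERNET_MTU - PPPOE_OVERHEAD  # 1492 bytes fits the DSL link

def fragments_needed(packet_size, link_mtu):
    """Number of link-layer frames needed to carry one packet (simplified:
    ignores the per-fragment IP header)."""
    return -(-packet_size // link_mtu)  # ceiling division

print(fragments_needed(1500, PPPOE_MTU))  # a full-size packet splits in two
print(fragments_needed(1492, PPPOE_MTU))  # a 1492-byte packet fits in one
```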

The first thing you need to do is open up a command prompt elevated to administrator.  For Windows Vista, you do this by hitting the Start button and typing “cmd”, but don’t hit Enter yet.  Move your mouse up to the top of the menu, right-click on cmd.exe, and then left-click “Run as administrator”.

The next step is to type or copy-paste the command:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=1492 store=persistent

Note that this assumes you are using your typical default LAN connection which is usually labeled “Local Area Connection” but it could be labeled something else if you have more than one Ethernet port or you relabeled the connection name to something more specific.  If it is called something else, then you need to adjust the command above accordingly.

This command takes effect immediately and there is no need to reboot.  The “store=persistent” argument tells Windows Vista that this is a permanent setting.  You can confirm it after any reboot with the following command, which can be run in a normal (non-elevated) console.

netsh interface ipv4 show subinterfaces

Windows XP users can use a tool called DrTCP.

Optimizing for Jumbo Frame and DSL PPPoE Broadband

The 1492-BYTE MTU is optimum for DSL broadband but not optimum for high-speed gigabit LAN.  In fact, there are times when you want to use Jumbo Frames to avoid the problematic Vista MMCSS packet-per-second throttling behavior.  But a lot of Jumbo Frame implementations are limited to 4082-BYTE Ethernet Frames.  So the trick is to use an MTU size that is a multiple of the broadband-optimized payload.  The usable payload is always 28 BYTES smaller than the MTU, so the broadband-optimized payload is 1464 BYTES.  We multiply that by 2 to get 2928, then add back the 28 BYTES of overhead, which gives a proper MTU size of 2956 and is still smaller than 4082.  So for users who want a good compromise between Jumbo Frames and PPPoE DSL, an MTU of 2956 means the LAN won’t fragment the packet but the broadband connection will do a clean two-for-one split that results in optimum-size packets.

So the command to implement the 2956 MTU would be:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=2956 store=persistent

You can actually test larger numbers to see if your setup supports larger jumbo frame sizes like 6K or 7K Jumbo Frames. If it does, then you might be able to use an MTU of 5884, which is derived from 4 times 1464 plus 28.  Here’s an illustration of how this clean fragmentation works.
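The MTU arithmetic above can be sketched in a few lines of Python.  The 1492-BYTE DSL MTU and the 28 BYTES of header overhead are the figures used in this article; `compromise_mtu` is just an illustrative helper name.

```python
# Sketch: deriving jumbo-frame MTUs that fragment cleanly over PPPoE DSL.
DSL_MTU = 1492
IP_OVERHEAD = 28
PAYLOAD = DSL_MTU - IP_OVERHEAD  # 1464-byte broadband-optimized payload

def compromise_mtu(multiple):
    """An MTU that splits into `multiple` full-size DSL packets."""
    return PAYLOAD * multiple + IP_OVERHEAD

print(compromise_mtu(2))  # 2956, fits under a 4082-byte jumbo frame limit
print(compromise_mtu(4))  # 5884, needs 6K-7K jumbo frame support
```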

Verifying Jumbo Frame support

You should verify that you have a switch that supports Jumbo Frames and an Ethernet adapter in the computer with proper drivers.  You can enable Jumbo Frames by going to the Control Panel, Network and Sharing Center, View status, Properties button, Configure button, Advanced tab, Jumbo Frame option.

To verify the whole setup, run the following test between two computers on your LAN that are configured to support Jumbo Frames, which we’ll call Computer-A and Computer-B.

On Computer-A, run the following command to verify jumbo frame support.

ping Computer-B-IP-Address -l 4054 -f

Note that “-l” in the command above is a dash lower-case L.

If you get a proper reply without an error message, that means your computers and Ethernet switch support a 4082-BYTE MTU.
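For reference, the relationship between the ping payload size and the MTU being exercised can be sketched as follows, assuming the standard 20-BYTE IP header plus 8-BYTE ICMP header.

```python
# Sketch: how the 'ping -l' payload size relates to the MTU being tested.
PING_OVERHEAD = 28  # 20-byte IP header + 8-byte ICMP header

def ping_payload_for_mtu(mtu):
    """Largest 'ping -l' payload that fits in one frame at a given MTU."""
    return mtu - PING_OVERHEAD

print(ping_payload_for_mtu(4082))  # 4054, as in the command above
print(ping_payload_for_mtu(1500))  # 1472, the classic full-size ping payload
```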

Categories: Networking, P2P Tags:

Why BitTorrent causes so much jitter (high ping) and how to fix it

June 1st, 2008 151 comments

Any VoIP user or online gamer who has a roommate or a family member who uses BitTorrent (or any P2P application) knows what a nightmare it is when BitTorrent is in use. The ping (round-trip latency/jitter) goes through the roof and stays there, making VoIP packets drop out and game play impossible. I’ve personally experienced this and many of my friends have experienced it. When I had a roommate in 1999 and I had my first broadband connection, my roommate started using the hot P2P (peer-to-peer) application of its day called Napster. As soon as he started sharing, my online game would become slow as molasses and I’d experience a huge 500 ms spike in round-trip latency, and any gamer knows that pings over 100 ms are problematic.

When it’s just myself using BitTorrent, I can pause it while I make VoIP phone calls or when I want to game. But when it’s time to ask your family member or roommate to stop using their application, it becomes awkward: why should they stop using their application just so you can use yours? My roommate paid for half the broadband fees, so I couldn’t always tell him to stop, but he also couldn’t keep ruining my latency, making it a contentious situation for both of us. What makes it even more frustrating is that there’s plenty of bandwidth capacity to theoretically satisfy both our needs, but something about these P2P applications makes them very bad housemates.

I decided to do some research on this and ran a series of tests which resulted in some very interesting data. I fired up various applications at varying rates of bandwidth utilization in the upstream and downstream direction and I measured ping to my first router hop (ISP router) beyond my home router. I ran continuous pings to the ISP router to check the round-trip latency above the normal baseline latency of 12 ms and plotted out the results. This test methodology measures network jitter (the variation of delay) on the “last mile” which is what I’m primarily concerned about because that’s where most of the damage is done by local usage of P2P applications.

BitTorrent downloads cause huge amounts of latency

The first set of tests I did were download tests. First I downloaded a file using HTTP which ran at 260 KB/sec (2.08 Mbps) which is roughly 87% of my peak download performance. I then set BitTorrent to also download at 260 KB/sec to compare the effect on my ping times.  To my surprise, there was a significant difference in the amount of ping increase between HTTP downloading and BitTorrent downloading despite the fact that both applications were downloading at the same average rate.

When you look at the two graphs immediately above, you see that BitTorrent causes an average increase in ping of more than 117 ms (milliseconds). Note that those 6 ping spikes in the graph were actually bad enough to cause the ping to time out, which means the ping was higher than one second. HTTP, on the other hand, didn’t increase the ping nearly as much, but it still caused an average 46.7 ms increase in ping and only peaked at 76 ms. So how is this possible when both applications are using the same amount of bandwidth? I did some thinking, and this is the hypothesis I came up with.

Burst traffic fills up transmit queues

The difference in ping time is caused by the fact that HTTP is a single source of data whereas BitTorrent is about 20 sources of data. Incoming data from multiple sources via the fast core of the Internet can sometimes clump closely together when multiple sources happen to transmit data around the same time. So in the period of 20 ms (the interval between typical VoIP packets), up to 200 max-size 1472-BYTE packets can occasionally build up in the DSLAM transmit queue (blue square in illustration above labeled as “download queue”), causing my ping requests to time out above the 1 second mark. But on average, we get around 23 of these packets sitting in the DSLAM transmit queue, causing an average increase in ping of 117 ms. When there is a single transmitter, it might burst and clump packets close together but it won’t be at the level of 20 transmitters.

With HTTP causing 76 ms of downstream delay, that means 15 of these 1472-BYTE packets are sitting in the DSLAM transmit queue, causing a less extreme increase in ping. This is still problematic for VoIP communications and it can certainly ruin online gaming. So despite the fact that there is plenty of remaining bandwidth for my VoIP or online gaming traffic, it’s the non-uniformity of the incoming Internet traffic that causes my VoIP phone calls and games to perform badly. Unfortunately, since this is the downstream we’re talking about, the consumer can’t do much about it on their own end of the pipe because the delay is at the DSLAM, which belongs to the ISP.
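A rough Python sketch of this queueing arithmetic follows.  The ~299 KB/sec peak downstream rate is my inference from the earlier statement that 260 KB/sec was 87% of peak; it is an assumption, not a measured value.

```python
# Sketch: estimating added ping from packets queued ahead in the DSLAM.
PACKET_BYTES = 1472
DOWNSTREAM_BYTES_PER_SEC = 299_000  # assumed peak rate (260 KB/s / 0.87)

def queue_delay_ms(packets_queued):
    """Milliseconds a new packet waits behind `packets_queued` packets."""
    return packets_queued * PACKET_BYTES * 1000 / DOWNSTREAM_BYTES_PER_SEC

print(round(queue_delay_ms(23)))  # ~113 ms, near the measured 117 ms average
print(round(queue_delay_ms(15)))  # ~74 ms, near the 76 ms HTTP peak
```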

The only way to fix this problem is for the ISP to implement intelligent packet scheduling and application prioritization at the DSLAM to re-order those VoIP or gaming packets to the front of the transmit queue. With packet prioritization (generally referred to as QoS), your family member’s, your roommate’s, or your own video downloads won’t need to be stopped and they won’t interfere with your VoIP or gaming applications, which makes everyone happy. Unfortunately, these types of QoS services may never see the light of day if poorly conceived Net Neutrality legislation that bans the sale of packet prioritization gets passed.

BitTorrent uploads cause excessive jitter

BitTorrent or P2P uploads also cause a lot of upstream jitter. I compared various types of upload traffic patterns to see what kind of increase in the upstream ping times would result. First I tried running BitTorrent with a 47 KB/sec (376 Kbps) bandwidth cap which was about 90% of my upload capacity, then I ran BitTorrent with a 28 KB/sec (224 Kbps) bandwidth cap at 54% of my upload capacity, and then I ran BitTorrent with a 10 KB/sec cap at 19% of my upload capacity.

In both the 28 KB/sec and 10 KB/sec tests, I’m not being greedy by hogging all the available upstream bandwidth, and I’m leaving more than the 11 KB/sec of bandwidth needed for VoIP and online gaming. Yet I found that the additional ping caused by BitTorrent uploads (seeding) was unacceptable for gaming and problematic for VoIP applications. Even when I severely limited upload throughput to 10 KB/sec, it didn’t reduce the size of the ping-time spikes, although it did reduce their frequency. However, even fewer spikes in ping time can pose the same problems for VoIP applications because they have to adjust their buffers to account for the worst-case network conditions. This would seem to indicate that BitTorrent is bursting packets rather than releasing them in a uniform and evenly spaced stream.

Next I tried using FTP at full throttle and I managed to get an FTP session going at 47 KB/sec (90% of my peak upload rate), yet the jitter caused by FTP at this extreme rate of throughput was less than the jitter caused by operating BitTorrent at an average of 10 KB/sec. This would seem to indicate that FTP is outputting data in a more consistent manner than BitTorrent.

Lastly, I ran some ping tests during some VoIP phone calls using the Lingo service (a competitor of Vonage). I had set Lingo to use the uncompressed G.711 codec, which uses 11 KB/sec in both the upload and download directions, making it very comparable to BitTorrent uploading at an average of 10 KB/sec. But as soon as I ran the ping tests, I was shocked to see virtually no increase in ping times. I realized that this is because the VoIP ATA (Analog Telephony Adapter) device pulses small packets at exact 20-millisecond intervals, 50 times a second.

Smoothing out the packet spacing to reduce jitter

After running these tests, I am beginning to conclude that it isn’t so much the amount of data that causes excessive jitter; it’s the uniformity of data transmission. If the transmissions are spaced evenly, other packets from other applications can slip in between the packets rather than getting stuck behind multiple packets in the transmit queue. So would it be possible to engineer BitTorrent to transmit data uniformly and what would be the effect? I came up with the following chart to illustrate what this could mean for peaceful coexistence between VoIP/Gaming and BitTorrent/P2P.

I calculated that the max-size 1472-BYTE packet takes 28.3 milliseconds to transmit over my 52,000 BYTE/sec broadband uplink. If BitTorrent bursts 3 packets upstream over a 100 Mbps FastEthernet LAN connection, they will all sit in the upload queue of my DSL modem for 85 ms. Meanwhile, my VoIP or gaming packets get stuck behind those three packets for an additional 57 ms in the queue. This is shown in the first example in the illustration below, with large 1472-BYTE red packets representing BitTorrent and small 223-BYTE green packets representing VoIP.
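The serialization math above can be checked with a short Python sketch, using the figures from this article (1472-BYTE packets and a 52,000 BYTE/sec uplink).

```python
# Sketch: serialization delay on the upstream DSL link.
UPLINK_BYTES_PER_SEC = 52_000
PACKET_BYTES = 1472

def transmit_ms(packet_bytes):
    """Milliseconds to serialize one packet onto the uplink."""
    return packet_bytes * 1000 / UPLINK_BYTES_PER_SEC

print(round(transmit_ms(PACKET_BYTES), 1))  # ~28.3 ms per full-size packet
print(round(3 * transmit_ms(PACKET_BYTES)))  # ~85 ms for a 3-packet burst
```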

In the second example, I show what happens if BitTorrent simply spaced their transmissions evenly. It’s still less than ideal but at least it significantly reduces the jitter for VoIP or gaming packets.

In the third example, I show the ideal scenario where BitTorrent would reduce its packet size to 815 BYTES or less and pulse them out in 20 ms intervals at 50 packets per second. BitTorrent could essentially create a “VoIP friendly mode” that allows VoIP packets to fit cleanly between the BitTorrent packets and the increase in jitter would be no greater than 15.7 ms and would typically average around 8 ms increase. BitTorrent could also have a “Game friendly mode” that uses 679-BYTE packets at 60 packets per second.
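Here is a sketch of how those friendly-mode packet sizes work out, assuming the 52,000 BYTE/sec uplink described above; worst-case added jitter is modeled as one packet’s serialization time, which is a simplification.

```python
# Sketch: sizing "friendly mode" packets so worst-case added jitter is
# one packet's transmit time on the uplink.
UPLINK_BYTES_PER_SEC = 52_000

def worst_case_jitter_ms(packet_bytes):
    """A VoIP packet waits at most one packet's serialization time."""
    return packet_bytes * 1000 / UPLINK_BYTES_PER_SEC

print(round(worst_case_jitter_ms(815), 1))  # ~15.7 ms for VoIP-friendly mode
print(round(worst_case_jitter_ms(679), 1))  # ~13.1 ms for game-friendly mode
print(815 * 50, 679 * 60)  # both modes still move ~40,700 BYTES/sec
```

Note that both modes end up moving nearly the same number of bytes per second, so the smaller packets don’t meaningfully cost BitTorrent any seeding throughput.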

Now it is possible to solve this problem on the network level by prioritizing VoIP and gaming packets in the home DSL modem upload queue. Unfortunately, I don’t have administrative access to the modem and implementing VoIP or gaming prioritization on my home router seemed to have no effect.  Packets in the home router get forwarded as soon as they arrive with 100 Mbps Ethernet on both ends so there is nothing to reorder in the queue.  More advanced business-class routers like those from Cisco will allow you to configure the speed of the FastEthernet connection to match your DSL throughput so that the queue will migrate from the DSL modem to the router but this isn’t very practical for most people.  So it would make sense for application writers to try and make their application work as well as possible on the majority of home networks and broadband networks without QoS.

While modifying BitTorrent or the P2P application may not significantly fix the downstream problem, it would definitely fix the upstream jitter problem which means that people will be more willing to seed and contribute to the health of BitTorrent. The download jitter may improve if the dozens of peers sending information to me did it in a more uniform manner, but it is still possible that those peers will transmit at the same time which still causes packet clumping and bursting.

So why would BitTorrent or any P2P application vendor want to do this? Why couldn’t they just say “it’s not my problem”? Because it would fix their reputation as a bad housemate and more people would be willing to seed if it didn’t interfere with their other applications like VoIP or gaming. Corporate IT departments may also ease their ban on the technology if it doesn’t trash their Internet connection for the entire office. I am certainly a huge proponent of network-based solutions for dealing with jitter problems, but improvements in the application can act to further reduce network jitter even in the presence of existing network-based solutions. Furthermore, the vast majority of consumers do not have access to network-based solutions and it only makes sense for a P2P application vendor to cater to this market.

I will try to reach out to BitTorrent corporation to see if there is any interest in this and perhaps someone would be interested in modifying open source Azureus. It would be a very good feature to have or it would be a very interesting academic experiment at the very least.

Categories: Networking, P2P, Policy Tags:

Revived Net Neutrality bill cripples Internet for real-time applications

May 9th, 2008 43 comments

Congressman John Conyers and Congresswoman Zoe Lofgren have reintroduced a Net Neutrality bill that prohibits charges for “prioritization or enhanced quality of service” in the name of stopping discrimination.  Unfortunately, it stops a lot more than discrimination; it flat-out bans tiered pricing for different levels of QoS (Quality of Service), which cripples the Internet under the justification of banning “discrimination”.  Here’s the text from the first time this bill was introduced, which Richard Bennett dug up.

“If a broadband network provider prioritizes or offers enhanced quality of service to data of a particular type, it must prioritize or offer enhanced quality of service to all data of that type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or enhanced quality of service.”

Why force the bundling of QoS or elimination of QoS?

This revived bill means that QoS must either be bundled into the price of broadband even if some consumers don’t want it, or that QoS doesn’t get offered at all even if some people do want it.  In either case, we have a travesty in the making with this revived Net Neutrality bill.

The reality is that some people only surf the Internet, download movies, and send some emails; they don’t need enhanced QoS because QoS is all about providing low latency and consistency, not about increasing data throughput.  Other people who use VoIP services like Skype, Vonage, or Lingo (I am a customer), online First Person Shooter gaming, or video conferencing don’t care much about increased data throughput, but they want lower latency and data consistency.  If QoS is given to everyone because of government mandate, why should the first group of people who don’t care about low latency subsidize the second group?  If QoS is not offered to anyone because of government mandate, why should the second group be denied the ability to buy enhanced QoS?

The meaning of “discrimination”

It seems strange that I have to define what the word “discrimination” means, but I’m left with little choice given how the word is being misused in this Net Neutrality context.  Discrimination in the context of Internet services would be if an ISP created a pricing scheme that said Whites can have an enhanced QoS service for $10/month but every other race must pay $20/month.  If Congress wants to ban this sort of discriminatory pricing, I would be all for it.  But what the Conyers-Lofgren Net Neutrality bill says is that no one can pay ANYTHING in addition to the base price of broadband services.  That’s an outright ban on the sale of QoS, which effectively eliminates QoS and cripples real-time applications on the Internet.  This is not a ban on discrimination; it’s effectively a ban on progress on the Internet.

You can’t grow your way out of congestion

There are many in the Net Neutrality camp who claim that prioritization and network management wouldn’t be necessary if only we had a dumb fat pipe and that the need for QoS is a lie.  The problem with the “dumb fat pipe” theory is that it has no basis in the world of science and engineering.  In fact, the opposite has been proven, because even the fastest broadband networks in the world like Japan’s are being slammed by peer-to-peer induced congestion.

Japan’s broadband infrastructure is around 10 times faster than US wired Internet infrastructure and probably has 10,000 times more capacity than US wireless Internet infrastructure.  Yet even Japan can’t grow its way out of congestion, while Net Neutrality proponents argue that a mere quadrupling of capacity would eliminate the need for network management or prioritization entirely.  This is a pipe dream, and no amount of bandwidth will ever be enough because an infinite number of packets can be generated.  Even a $100 up-converting DVD player can generate around 1.5 gigabits per second (1500 Mbps), and I could fill a gigabit broadband connection tomorrow if you gave it to me.

What is QoS and why is it critical?

Given that we can never grow our way out of congestion, we must have the ability to prioritize packets to enhance QoS.  There is nothing insidious or evil about packet prioritization/QoS, and it has nothing to do with increasing data throughput and the speed of file downloads, as the Net Neutrality proponents want us to believe.  Prioritization/QoS is merely a way of changing the order of delivery, which is critical for real-time applications like VoIP, video conferencing, and some types of online gaming that rely on fast human reaction time.  Improving the latency of a low-bandwidth application has virtually no effect on download services that favor higher throughput.  Take the following example of a router transmission stream where “A” represents VoIP packets and “b” represents music download service packets.

The first critical lesson here is that the same number of “A” and “b” packets get transmitted in the same duration of time with or without QoS.  That means the music download service represented by the “b” packets isn’t affected by QoS one way or another.  But even if the “b” packets are slowed down a little, it doesn’t make much difference if a song is downloaded in 100 seconds or if it’s downloaded in 105 seconds.
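This reordering behavior can be illustrated with a toy Python simulation.  The 12-packet stream here is made up for illustration, and a real router uses priority queues rather than a sort, but the point it demonstrates is the same: prioritization changes only the order, not the count.

```python
# Toy sketch: prioritization reorders packets without changing throughput.
# "A" = VoIP packets, "b" = music-download packets, as in the example above.
stream = list("bbbAbbbbAbbb")

# Without QoS, packets go out in arrival order; with QoS, the "A" packets
# are moved to the front of the queue (stable sort keeps relative order).
with_qos = sorted(stream, key=lambda p: p != "A")

print("".join(with_qos))             # AAbbbbbbbbbb -- VoIP goes first
print(len(with_qos) == len(stream))  # True: same packet count, same duration
```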

The second lesson here is that prioritization doesn’t hurt or help download services because prioritization alters latency, not data throughput.  Many Net Neutrality advocates misunderstand prioritization as a way of gaining an unfair advantage in video distribution, but they’re fundamentally mistaken about the technology.  The only thing that speeds up file and video distribution is content distribution servers that spread the workload through a technology called “caching”.  Nobody distributes content on a large global scale without the use of caching technologies because it’s insanely expensive and practically impossible.  Nobody uses prioritization technology to deliver files and videos because it costs a ton of money with zero results.

The third critical lesson is that QoS makes all the difference in the world to the VoIP packets represented by the “A” symbol.  In the first example without QoS, you see large durations of time where “A” packets aren’t arriving at a reliable time interval, shown by the black line markers.  Those two VoIP “A” packets that don’t show up on time are simply discarded and the sound quality degrades, making the person on the other end difficult or impossible to hear and understand.  In the second example where real-time applications are prioritized, all the VoIP packets show up on time and the call quality is perfect.

Data comes out of a VoIP device in a smooth predictable manner but it can easily become garbled as it passes through an unmanaged and congested network where a few bandwidth hogs (typically peer-to-peer applications) can easily overwhelm the real-time applications.  Without prioritization, bandwidth hogging applications can squeeze out real-time latency-sensitive applications.  But with prioritization, we can have a state of harmony where bandwidth hogging applications aren’t affected but real-time applications are protected.

Unintended privacy consequences of Net Neutrality legislation

Some of these Net Neutrality bills make exceptions for emergency services but why should only emergency calls be reliable?  Why shouldn’t the Internet provide circuit-switched reliability like the traditional telephone network?  Why shouldn’t consumers get responsive gaming when they’re willing to pay for it?

The bigger problem is the issue of privacy.  How do these legislators think ISPs are going to identify an emergency 911 call?  Are these bills suggesting that ISPs inspect beyond just the TCP headers that identify application type and snoop into actual user data to figure out which telephone number consumers are dialing?  What happens if VoIP implementations encrypt the user data and the network can’t identify the destination; does that mean no prioritization for that emergency call?  What if the user doesn’t want their ISP to know when they’re dialing 911?  These are some of the privacy and technical issues that have yet to be addressed.

Misrepresenting the architecture of the Internet and the end-to-end principle

There’s a huge misunderstanding being propagated by Net Neutrality proponents like Larry Lessig and Susan Crawford that the Internet has always been dumb and it was built on the “end-to-end principle” (AKA “Net Neutrality” or “Network Neutrality”).  They claim all intelligence must reside on the computer end-points and that the network itself must be dumb and devoid of intelligence.  Lessig at the Stanford FCC hearing on broadband management in April 2008 stated that anyone proposing to change or deviate from this end-to-end principle must carry the burden of proof to justify changing the Internet.

The problem is that Lessig and Crawford do not understand what the actual end-to-end principle means, and they are misusing it to further their own cause.  One of the principal authors of the end-to-end principle (originally titled “end-to-end arguments”) is David D. Clark.  Clark, along with Marjory S. Blumenthal, wrote “Rethinking the design of the Internet: The end to end arguments vs. the brave new world” in 2000, where they explain that:

“What is needed is a set of principles that interoperate with each other—some built on the end to end model, and some on a new model of network-centered function. In evolving that set of principles, it is important to remember that, from the beginning, the end to end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish the requirement, then an end to end argument isn’t appropriate in the first place.”

The above quotation clearly debunks everything that Lessig and Crawford would have us believe about end-to-end and the architecture of the Internet.  The Internet never strictly adhered to the end-to-end model.  While the Internet may have heavily favored the end-to-end model in the beginning, it has steadily moved away from end-to-end towards a mixture of network-centric and end-to-end approaches based on the solution that best fit the requirements.

Larry Lessig is also wrong when he tells us that introducing prioritization or QoS to the Internet would be a change.  That’s because enhanced QoS service contracts are already in use by business Internet customers and have been for some time.  These legal and legitimate contracts would be outlawed by this latest Net Neutrality bill, and those business customers would be prevented from buying a service that they need.  The irony here is that Lessig is the one who’s actually pushing for a government-mandated change on the Internet to ban existing contracts and relationships, yet Lessig was the one who argued that those who seek change must bear the burden of proof.

Misrepresenting the Net Neutrality debate as a first amendment issue

The dirty little secret of the “Net Neutrality” camp is that they never want to talk about what they’re actually trying to legislate, which is a ban on perfectly legitimate tiered pricing models for enhanced QoS.  They would rather misrepresent Net Neutrality as a civil rights movement that fights against unfair price discrimination and the blocking of websites.  But I’m all for a ban against discriminatory pricing where different people or companies are charged different rates.  I’m all for a ban against service blocking on the wired Internet and I’m all for a ban against the blocking of legal content websites.  This is what a reasonable Net Neutrality bill would do, but why ban tiered pricing on enhanced QoS?  This is the question that Net Neutrality proponents can’t answer, which is why they dodge it and misrepresent Net Neutrality as this fight for the “first amendment of the Internet”.

Even Tim Wu won’t stand up for Net Neutrality legislation

Tim Wu, a self-proclaimed father of Net Neutrality, couldn’t run away fast enough from the issue of banning prioritization services at the Net Neutrality summit at the University of San Francisco, on a panel with me.  Wu kept saying over and over again that he doesn’t care about the legislation and that it wasn’t important.  Only when I pointed out the hypocrisy that he was on YouTube saying how important the legislation was did Wu admit he was for these anti-QoS bills.  But Wu immediately stated that the legislation wasn’t important and that what was really important was the “spirit of Net Neutrality” and “how we feel about Net Neutrality”.  So if the “father of Net Neutrality” says the legislation isn’t important and isn’t willing to defend it, will Mr. Conyers and Ms. Lofgren drop their revived bill?

Does the inventor of the web really support Net Neutrality?

Tim Berners-Lee (inventor of HTTP and the World Wide Web) enthusiastically supports Net Neutrality, but his own words contradict him.  Here are two quotes from Tim Berners-Lee:

http://dig.csail.mit.edu/breadcrumbs/node/144
“Net Neutrality is NOT saying that one shouldn’t pay more money for high quality of service. We always have, and we always will.”

http://dig.csail.mit.edu/breadcrumbs/node/132
“We pay for connection to the Net as though it were a cloud which magically delivers our packets. We may pay for a higher or a lower quality of service. We may pay for a service which has the characteristics of being good for video, or quality audio.”

It’s clear that Tim Berners-Lee is wrong when he says “Net Neutrality is NOT saying that one shouldn’t pay more money for high quality of service”, because this is exactly what the Markey House 2006 amendment, the Snowe-Dorgan Senate 2006 amendment, and now the latest Conyers-Lofgren Net Neutrality bill seek to do.  While Tim Berners-Lee has never specifically commented on any Net Neutrality bill, it is clear from his words above that he would have to be against the latest Conyers-Lofgren bill.

What do the fathers of the Internet think?

Bob Kahn called “Net Neutrality” a slogan during an interview at the Computer History Museum.  He also stated that he was “totally opposed to mandating that nothing interesting can happen inside the net. But that interesting stuff should be by consenting networks, not something that prohibits others.”  This is more proof that Larry Lessig and Susan Crawford’s idea of what the Internet should be is completely out of touch with the founders of the Internet.

Vint Cerf says he's for Net Neutrality and even praises Congressman Markey as someone who is knowledgeable on Network Neutrality.  But Vint Cerf has never publicly stated whether he would actually back Congressman Markey's 2006 bill, which bans tiered pricing on enhanced QoS.  I would challenge Vint Cerf to say whether he would support a ban on the sale of QoS, and I challenge anyone to produce a single documented quote from Vint Cerf saying he would support a ban on tiered pricing for QoS.

Conclusion

Net Neutrality, as it exists in the Conyers-Lofgren bill, has little to do with protection against discrimination.  It is an unnecessary set of regulations that bans the financial incentives to build networks hospitable to real-time applications, and this will have severe consequences for the future of our Internet infrastructure.  The future of the Internet deserves better than bumper-sticker slogans and misplaced fear-mongering over non-disputed civil rights issues.  Let's at least have a debate on the real issues of network architecture and prioritization.  I would welcome any debate where network engineers argue the merits.  Let's see the Net Neutrality proponents produce a single argument, or a single Internet architect, in favor of banning tiered pricing on prioritization/QoS.

Categories: Networking, Policy Tags:

How Comcast customers can seed 100 times faster and bypass TCP resets

May 1st, 2008 20 comments

As many of you reading this blog probably already know, Comcast has been disconnecting a certain percentage of TCP streams emanating from BitTorrent and other P2P (peer-to-peer) seeders.  This effectively delays and degrades the ability of Comcast customers to seed files using P2P applications.  For normal, healthy Torrents that are distributed across multiple users and multiple ISPs, losing a few seeders intermittently isn't too noticeable.  But for rare Torrents, or Torrents that have to originate from a Comcast broadband customer, this can pose some challenges.  Rare Torrents become even less reliable than they already are, while popular Torrents originating from Comcast's broadband network take much longer to become healthy.

While Comcast has stated they will try to move off of their Sandvine system that uses TCP resets by the end of this year, there's no guarantee that they will complete the transition on schedule, and there's no relief in the meantime for customers who are having a tough time seeding their files for distribution.  Even without the challenges posed by TCP resets, seeding a torrent file is still problematic and burdensome.  Not only does the seeder have to turn his/her computer into a server, they must also allocate significant portions of their upstream bandwidth – as well as their neighbors' bandwidth, which they share – to seeding while contributing relatively minimal capacity to the Torrent.

The opposite challenges of client-server and peer-to-peer

One way around this is to use the gigabyte of web space that Comcast provides you, but there are downsides to a pure HTTP client-server model.  While an HTTP client-server model works great because it’s persistent and the web server offers much higher capacity than any individual seeder, there’s a fixed amount of web server capacity which quickly divides as the number of clients increases.  That means client-server works great when there are fewer clients but it degrades in proportion to the number of clients who want to use the server.

The P2P model has the opposite problem: it works better and better as more clients/peers are added, because each client becomes an additional server, but it's horrible when there are few clients and quickly spirals to death as it loses popularity.  The less popular a Torrent is, the less likely it is to attract peers, because people tend to avoid less popular Torrents fearing slower speeds or being stuck with a Torrent that can never complete.
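The contrast between the two models can be captured in a few lines of arithmetic. This is a toy sketch with made-up capacity numbers, not a measurement of any real server or swarm:

```python
# Toy model of the two scaling behaviors (illustrative numbers only)

def client_server_per_client_rate(server_upload: float, clients: int) -> float:
    """A fixed pool of server capacity divides among all clients."""
    return server_upload / clients

def p2p_aggregate_capacity(peers: int, per_peer_upload: float) -> float:
    """Every peer contributes upload, so total swarm capacity grows with popularity."""
    return peers * per_peer_upload

# A 100-unit server is great for a small audience but divides thin for a big one,
# while the swarm's total capacity grows with every peer that joins:
print(client_server_per_client_rate(100.0, 2))    # 50.0 per client with 2 clients
print(client_server_per_client_rate(100.0, 200))  # 0.5 per client with 200 clients
print(p2p_aggregate_capacity(200, 1.0))           # 200.0 total with 200 peers
```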

Combining the two file distribution models

Given that client-server is weak where peer-to-peer is strong, and client-server is strong where peer-to-peer is weak, I thought: what if we combined the two models into a single client or a single protocol that gave you the best of both worlds?  All we need to do is have a BitTorrent client pull from an HTTP source that remains persistent, so there will always be a fast seed to fall back upon even when there are no other peers in the Torrent.

This not only guarantees a survivable Torrent, which attracts more peers, it also provides a rocket boost in performance, because web servers tend to be 50 to 100 times faster than any broadband consumer seed.  The original seeder would be able to seed and forget, avoiding saturating his own upstream connection as well as his neighbors'.  Given that a typical Cable ISP running the DOCSIS 1.1 protocol allocates 1.25 megabytes/sec of upstream capacity for 200-400 users in a neighborhood, a hybrid web seed would benefit not only consumers but the ISP as well, because it alleviates the biggest bottleneck on a Cable ISP's network.
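To put that shared upstream in perspective, here is the arithmetic using the figures above (an even split among users is a simplifying assumption; real DOCSIS scheduling is burstier):

```python
# Per-user share of a DOCSIS 1.1 upstream channel, using the figures from the text
upstream_bytes_per_sec = 1_250_000  # 1.25 megabytes/sec (~10 Mbit/s) shared upstream

for users in (200, 400):
    share = upstream_bytes_per_sec / users
    print(f"{users} users sharing the node -> about {share / 1000} KB/s each")
# roughly 6.25 KB/s each at 200 users, and about half that at 400 users
```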

But before I could celebrate my "new" idea and invention, Ludvig Strigeus (creator of the best and most popular BitTorrent client, uTorrent), with tongue placed firmly in cheek, explained that I was a few years late in my discovery and that the concept of web seeding had already been proposed by John Hoffman and "DeHackEd".  Furthermore, the feature is already implemented: uTorrent, the official BitTorrent client called "BitTorrent", and Azureus all support web seeding.  Those three clients command the lion's share of BitTorrent usage, and there is a full list of compatible clients here.

Note: It’s not your imagination that BitTorrent looks just like uTorrent with a few extra features; BitTorrent Corp acquired uTorrent from Ludvig Strigeus at the end of 2006 because it was technically superior.

Web seeding in practice

One of the problematic Torrents that has been hotly contested in the Comcast TCP reset fiasco is the King James Bible.  This is a classic example of something that isn't popular on BitTorrent because it's relatively small and readily available on the World Wide Web over HTTP, but it still serves as a good example of what happens to rare Torrents under the pure peer-to-peer model.  I created a web seeded Torrent for the King James Bible using Richard Bennett's gigabyte of web space, provided at no extra charge by his ISP Comcast, to serve as a persistent and lightning fast seed.  I also took this traditional P2P Torrent of Leonardo da Vinci's complete Notebooks and converted it to this web seeded torrent.

The difference in speed for this rare torrent was an astounding 100-fold increase in performance.  The reliability of this Torrent also shot up, because it no longer depends on an individual seeder in a residential Cable broadband network who may shut off his computer or P2P application, or get TCP reset.  The web seed sits outside of the Cable broadband network in some ultrafast server farm that is spared from TCP resets.  The ultimate irony here is that Comcast, who is known to slow down everyone's BitTorrent seeds, is now giving you a free 100-fold boost in performance!  Not only that, but they're alleviating their last-mile congestion as well as giving you more reliability.

How to create your own web seed

By now I’m sure you want to know how to create your very own web seed Torrent.  Simply fire up a copy of uTorrent or BitTorrent and press Control-N or go to the File menu and hit “Create new Torrent”.

You will see the screen on the right and all you need to do is choose a source file from your hard drive, enter in a list of trackers you plan on using, then hit “Create and save as”.

Once you save the .Torrent file, the next step is to copy the source file (the book, video, or song) you want to distribute up to a publicly accessible web server such as the free web space Comcast provides their customers.  Many other ISPs also provide free web space if you look around or ask.

Update 5/1/2008: Robb Topolski posted a comment for this article saying that the example I'm using is actually the GetRight web seeding specification.  Thanks for the clarification, Robb!

At this point in time there is no automated support for creating web seeds so you’re going to have to use a program called BEncode Editor to modify the .Torrent file you just created.  Once you open up the .Torrent file with BEncode Editor, you will see the following window.

All you need to do is hit the + button and add a "url-list" to the root.  You will see the following window, where you will add a list of URLs that have a copy of the file you want to distribute.  Note that if you want to distribute multiple files, you point to the folder containing all those files rather than an individual file.  Also note that the Apache web server didn't work properly for web seeding with filenames containing spaces, while the IIS 7.0 server I tested did work even when the filename had spaces in it.
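If you'd rather script the edit than click through a GUI, the same "url-list" key can be added with a few lines of Python. This is a minimal sketch with a hand-rolled bencode encoder/decoder (no third-party library assumed), and the file names and URLs are hypothetical examples, not the ones from this article:

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at offset i; return (value, next_offset)."""
    c = data[i:i + 1]
    if c == b'i':                               # integer: i<digits>e
        j = data.index(b'e', i)
        return int(data[i + 1:j]), j + 1
    if c == b'l':                               # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b'e':
            v, i = bdecode(data, i)
            items.append(v)
        return items, i + 1
    if c == b'd':                               # dictionary: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b'e':
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            d[k] = v
        return d, i + 1
    j = data.index(b':', i)                     # byte string: <length>:<bytes>
    n = int(data[i:j])
    return data[j + 1:j + 1 + n], j + 1 + n

def bencode(v) -> bytes:
    if isinstance(v, int):
        return b'i' + str(v).encode() + b'e'
    if isinstance(v, bytes):
        return str(len(v)).encode() + b':' + v
    if isinstance(v, list):
        return b'l' + b''.join(bencode(x) for x in v) + b'e'
    if isinstance(v, dict):                     # keys must be sorted per the spec
        return b'd' + b''.join(bencode(k) + bencode(v[k]) for k in sorted(v)) + b'e'
    raise TypeError(f"cannot bencode {type(v)}")

def add_web_seeds(torrent: bytes, urls: list) -> bytes:
    """Return a copy of the .torrent bytes with a root-level 'url-list' of web seeds."""
    meta, _ = bdecode(torrent)
    meta[b'url-list'] = [u.encode() for u in urls]
    return bencode(meta)
```

Usage would be along the lines of reading the .Torrent file in binary mode, passing it through `add_web_seeds` with your HTTP mirror URLs, and writing the result back out. Because bencoded dictionaries are written with sorted keys, re-encoding a spec-compliant file reproduces the info dictionary byte for byte, so the info-hash (the Torrent's identity on the tracker) should be unchanged.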

The final step is to publish the .Torrent file on your preferred tracker.  For the purpose of this discussion, I used ThePirateBay.org as the tracker and uploaded the files there after creating an account on their site.  After I finished uploading the .Torrent file, I got this page which gives you a public link for the .Torrent that you can distribute on a website or via email.  You can also email people the .Torrent file directly or you can direct link to the .Torrent file from the tracker.  At this point you don’t even need to have your computer participate as a seeder and the seed is extremely fast and reliable.  You can even publish the link to the HTTP site if people don’t want to or can’t (office policy) use the Torrent version and those people can download from their web browser directly.

The final outcome is that Comcast doesn’t have to worry about you hogging all the upstream bandwidth in your neighborhood, you free up your computer and bandwidth, the seed runs up to 100 times faster, and now everyone’s happy because of a technical solution.

Categories: Comcast, Networking, P2P, Policy Tags:

Dear Cisco CEO:

April 30th, 2008 31 comments

I would like to see you try to configure one of your routers. Go ahead, I dare you. I triple dog dare you. See, your routers lack any kind of sensible tools as far as I can tell. See, I am not a full time Cisco admin. I do more with my time than play with your equipment. In fact, I would prefer it if I never have to mess with your routers again. I would be happy using a $50 Linksys doodad for my network (I know, you own them too), but I can’t, because I have a gazillion public IP addresses and a T1 that we use for redundancy. But still, I fantasize about it.

Why?

Because your command lines are miserable and provide zero real debugging or troubleshooting help. If it is there, I cannot readily find it. As far as I can tell, “debug all” floods me with so much junk that I need to reboot the router to stop the madness. Anything less seems useless. I don’t feel like adding an access list just to bind to a debug command. I would much rather have something actually useful, like ISA’s real time monitor that lets me see precisely what is happening with my traffic and the reason for it. It does not even need to be graphical, it could be done with the standard ASCII character chart.

The point is, you guys make a product that is pretty darned good, and most of us more or less have to use it regardless of how we personally feel about it. It's like Windows on the desktop. But Microsoft doesn't make their captive audience resent the golden box nearly as much as you do. Microsoft makes strides to continuously improve their product, even if they get it partially wrong all of the time (or 100% right only some of the time…). They act like they have some competition, when really they could legitimately ignore most perceived threats. Google? Hah, Microsoft never made money with MSN or "Live" anyways. Apple? They had one quarter of expanding sales, big deal; even the NY Knicks win a game occasionally. But Cisco? Wow, you guys truly do like to act like a monopoly!

See, right now, you guys are so blinded by the quality of your product that you fail to see how any improvements need to be made to it. That's because you live in a bubble, sparsely populated by people with CCNAs, CCIEs, and whatever else they may be. The other 95% of the folks working with your products are people like me, who touch a router for a week to configure it (which should have taken only 30 minutes), and then ignore it for 3 more years unless it fails.

I know that Juniper is barely a blip on your radar, and all of your other “competitors” have mostly caved in too. But could you please act like a company that has to earn their customers? Please?

J.Ja

Categories: Cisco, Networking Tags:

Implementing VLAN trunking

June 2nd, 2003 24 comments

Contents

  • Introduction
  • Cisco switch configurations
  • Cisco router configurations
  • Windows configuration with Intel Pro Series adapters

Introduction
In my last article, "Introduction to VLAN trunking", I whetted your appetite for a hot new technology that is revolutionizing the way network topologies are designed and interconnected. In this piece, I will show you how to actually implement this technology on the three most common types of equipment you will come across: Cisco switches, Cisco routers, and servers or workstations running the Windows operating system. The only prerequisite for this article is a basic working knowledge of switches, routers, and PCs running Windows for their respective sections. Click here for a network diagram of the lab environment created in this article. Note that the examples I use are based on the 802.1q standard.

Cisco switch configurations
Cisco switches primarily come in two flavors: CatOS (Catalyst OS) and IOS (Internetworking OS). Although Cisco is trying to migrate almost all of their equipment to the IOS type operating system, there is still a large installed base of CatOS switches. Cisco's flagship 6500 series switch can actually run CatOS or IOS, but most people I know run CatOS on the 6500s. Smaller switches like the 2950 and the 3550 all run IOS. Then there is the oddball 2948G-L3, which is really more of a router than a switch (the 2948 without the "L3" is a normal IOS switch), and you should refer to the next section on routers for its configuration.

Note: In many ways, I personally prefer CatOS over IOS for its UI's (user interface's) superior method of entering system configuration. For example, if you ever need to apply a common configuration to 48 Ethernet ports on module 4, you simply apply a command to "4/1-48". On the IOS UI, you would need to enter each interface for all 48 ports and apply 96 individual commands vs. one command on CatOS! Viewing the configuration on IOS is equally bloated.

Here is a breakdown of trunking support for the various Cisco switches:

IOS switches:

  • 2900 Series (on some IOS versions)
  • 2948 (non-L3)
  • 2950 Series
  • 3548
  • 3550 Series
  • 6500 running IOS

CatOS switches:

  • 2980 (same IOS image as the 4000)
  • 4000 Series
  • 5000 and 5500 Series
  • 6000 and 6500 Series

On both CatOS and IOS Cisco switches, the port that needs to be trunked must be configured for the right kind of VLAN trunking. (Note that not every module and interface on a switch supports trunking; the switch will give you an error message if you try to set an unsupported port for trunking, so you may need to look up the port capabilities of each port.)

Here is the configuration guide for both IOS and CatOS.

Configuring and locking down IOS switches:

IOS Command Description
Enable Switch to enable mode
Configure Terminal Enter global configuration mode
Interface FastEthernet0/1 Entering interface configuration for port 0/1. This is where you pick the port you want to trunk.
Switchport mode trunk Set port to trunking mode.
Switchport trunk encapsulation dot1q Set trunk type to 802.1q. If your switch only supports either ISL or 802.1q, this command does not exist because there is nothing to specify. This command only works when you can choose between the two.
Switchport trunk allowed vlan 10-15,20 Allow only VLANs 10 through 15 and VLAN 20. It is important that you restrict the VLANs to only the ones you need, as a security best practice.
Exit Exit interface
Exit Exit global configuration
Write memory Commit changes to NVRAM
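Put together, the whole IOS sequence from the table above is short enough to paste in one go. FastEthernet0/1 and the VLAN list are just the example values used in this article; substitute your own port and VLANs:

```
enable
configure terminal
interface FastEthernet0/1
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 10-15,20
exit
exit
write memory
```

Remember that the `encapsulation dot1q` line only exists on switches that support both ISL and 802.1q; omit it on switches that support only one trunk type.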

Locking down CatOS for security:

CatOS Command Description
Enable Switch to enable mode
Clear trunk 1/1-2 1-1005
Clear trunk 2/1-2 1-1005
Clear trunk 3/1-24 1-1005
…fill in the pieces…
Clear trunk 12/1-24 1-1005
Set trunk 1/1-2 off
Set trunk 2/1-2 off
Set trunk 3/1-24 off
Set trunk 4/1-24 off
…fill in the pieces…
Set trunk 9/1-24 off
This is an example of how to lock down a Cisco 6500 switch. First it clears VLANs from all ports on a 6500 switch, and then it explicitly disables trunking from every single port. Whether you intend to use trunking on your CatOS switch or not, you would be very wise to implement this lock down on all of your CatOS switches. Otherwise, a hacker can bypass all Layer 3 (firewall) security by simply hopping VLANs. I included this section before the “Configuring CatOS” section because the lockdown needs to be done before any custom configuration is entered.

Although this section is not really mandatory for trunking to function, I felt it would be irresponsible not to include this Layer 2 security lockdown procedure. Although the CatOS switch has a far more streamlined UI compared to the IOS switches, it is notoriously promiscuous in its default VLAN trunking settings. The trunking auto-negotiation is equally alarming on both the IOS and CatOS switches; if left at the defaults, switches will automatically connect to each other fully enabled and wide open. You would be shocked to see the sloppy Layer 2 security on most networks. If left unchecked, you are not only open to malicious hacks, but someone could accidentally plug in a Cisco switch with a VTP engine and nuke your network by changing your VLAN configuration.

Configuring CatOS switches:

CatOS Command Description
Enable Switch to enable mode
Set trunk 1/1 on dot1q 10-15,20 The “on” switch enables trunking on this port. “Dot1q” sets the port to 802.1q mode. “10-15,20” enables VLAN 10-15 and 20 to be supported on this trunking interface.

You may find it funny that it was so much work to lock down your switch while it only took one command to enable trunking. If you didn't bother to follow the lockdown procedure shown above, specifying the "10-15,20" VLAN IDs is useless, because it simply adds them to the existing 1-1005 pool, which remains wide open. This default behavior of CatOS is very annoying and insecure. IOS switches, on the other hand, only permit the VLANs you enter last, which has its own user-friendliness downside: on an IOS switch, if you enter "10-15,20" with your "allowed vlan" statement, it nullifies any other allowed VLANs outside of 10-15 and 20. The big plus to this is default security.

Cisco router configurations
Cisco router configuration for trunking is fundamentally different from Cisco switch configuration. A router encapsulates traffic to be carried on the switch infrastructure and behaves as a multi-homed node on the network, just like a server, workstation, or firewall. The switch serves as the infrastructure that carries traffic for the allowed VLANs, acting as the Layer 2 traffic director, whereas the router performs the higher-layer function of a network gateway that routes Layer 3 traffic. You can configure a router with any number of virtual interfaces (AKA sub-interfaces) on a single physical interface and designate the VLAN you want each of those interfaces switched to. The switch determines where the traffic from each of the router's virtual interfaces winds up based on the VLAN ID portion of the 802.1q tag that the router inserted into the Ethernet frame header.

Configuring Cisco Routers:

IOS Command Description
Enable Switch to enable mode
Configure terminal Switch to global configuration mode
Interface FastEthernet0/0.1 Creates first sub-interface for FastEthernet0/0
Encapsulation dot1q 10 Injects 802.1q tag with VLAN ID 10 into every frame coming from first sub-interface.
IP address 10.1.1.1 255.255.255.0 Defines IP/mask for this first sub-interface
Exit Exits first sub-interface
Interface FastEthernet0/0.2 Creates second sub-interface for FastEthernet0/0
Encapsulation dot1q 11 Injects 802.1q tag with VLAN ID 11 into every frame coming from second sub-interface.
IP address 10.1.2.1 255.255.255.0 Defines IP/mask for this second sub-interface
Exit Exits second sub-interface
Exit Exit global config
Write memory Commits changes to NVRAM

You can continue to add any number of sub-interfaces you need. Once FastEthernet0/0 is connected to a switch port configured for 802.1q trunking as shown in the switch examples above, all the sub-interfaces of FastEthernet0/0 become routable nodes (i.e., they can be default gateways) on the subnets that correspond to their VLANs.
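For reference, the complete router-side sequence from the table above looks like this when entered in one session. The interface, VLAN IDs, and addresses are the example values from the table, not a recommendation:

```
enable
configure terminal
interface FastEthernet0/0.1
encapsulation dot1q 10
ip address 10.1.1.1 255.255.255.0
exit
interface FastEthernet0/0.2
encapsulation dot1q 11
ip address 10.1.2.1 255.255.255.0
exit
exit
write memory
```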

Windows configuration with Intel Pro Series adapters
Conceptually, trunking a Windows workstation or server to a switch is the same as trunking a router to a switch. The only difference is the procedure, and a much easier one, I might add. The ubiquitous Intel Pro Series adapters provide a simple graphical tool called PROSet that anyone can learn within a minute, even someone who is just winging it. Note that the same Intel adapters with the ANS drivers can provide similar capabilities on Linux. You can get more information on Linux here from Intel.

To get started, simply invoke the Intel PROSet or PROSet II utility (assuming PROSet is installed). This can be done by simply double clicking the PROSet icon in the system tray on the lower right hand corner of the desktop. The following utility should come up.

Next we must add a VLAN interface. Simply right click on the Intel adapter with the PCI card icon and click "Add VLAN". Note that in the following screen capture, the virtual interface for VLAN 100 is already there and we are adding an additional one.

The “Add New VLAN” window comes up. Enter the VLAN ID you want this interface to trunk in to in the ID field, then give it a name that describes the VLAN function. In this case, we will be adding VLAN 69 labeled the Wireless LAB.

Once this is completed and you click "OK", simply click "Apply" and "OK" on the PROSet window to commit the changes and exit the PROSet utility. The next step is to configure the virtual interfaces. Simply open the "Network Connections" window and configure each virtual interface as you would any physical interface. Note that the interface names already correspond to the names of the VLAN interfaces you added. However, auto-naming only works in Windows XP. Windows 2000 just gives them generic names, so you must add one interface at a time and rename it under "Network Connections" before you add another VLAN interface. If you don't do that, it is impossible to tell which interface goes to which VLAN without some tedious trial and error. One other very important thing to note: the physical interface itself, "Local Area Connection", is not bound to anything except the "Intel Advanced Network Services Protocol". It is not used for anything else and only serves as a host for all of the virtual interfaces; it does not have its own IP address or VLAN.

Just remember that only your primary interface is registered with internal Dynamic DNS and WINS, and it is the only interface that should have a default gateway. This is the same as when you have multiple physical network interfaces. In both cases, whether there are multiple physical or virtual interfaces, you must set manual routes to take advantage of the other, non-primary interfaces. This is why, in the TCP/IP configuration window above, I deliberately left the default gateway and DNS settings blank; those settings went on the VLAN 100 interface. If you put a default gateway on the VLAN 69 interface, it will take over and the default gateway for the VLAN 100 interface will disappear. All the default gateway means is that the route for the 0.0.0.0 network with mask 0.0.0.0 (which really just means any IP destination) points to the default gateway. You can easily verify this with the "route print" command.
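As an example, suppose a hypothetical 10.2.0.0/16 network is reachable through a router at 10.1.69.1 on the VLAN 69 interface (both addresses are made up for illustration). The manual route would look like this from a Windows command prompt:

```
route add -p 10.2.0.0 mask 255.255.0.0 10.1.69.1
route print
```

The `-p` flag makes the route persistent across reboots; `route print` then lets you confirm that 10.2.0.0/16 traffic leaves via the VLAN 69 interface while everything else still follows the default gateway on VLAN 100.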

From this point on, you may add as many VLANs as you need using the example above. The only other thing you should be aware of when dealing with these VLAN interfaces is that you should not "Disable or Enable" them from the "Network Connections" folder; instead, manage them from the PROSet tool. Disabling or enabling them from "Network Connections" will cause some strange behaviors.

An introduction to VLAN Trunking

May 5th, 2003 25 comments

Contents

  • Introduction
  • Applications of VLAN Trunking
  • VLAN encapsulation types
  • Trunking requirements

Introduction:
There are many network devices in the data center that require multi-homing (multiple network adapters) to tie in to multiple network segments.  As the number of those systems increases, it becomes more and more difficult to provide the network infrastructure (due to the sheer number of Ethernet connections that need to be provided) from the perspective of cost, space, and wire management.  A technology called VLAN trunking (a VLAN being a virtual LAN broadcast domain logically segmented on an Ethernet switch), once primarily the domain of network switches, has now trickled down to the rest of the data center to address these issues.  Now it is possible for these devices to be multi-homed in function without the need for multiple physical network adapters and the additional infrastructure associated with them.  VLAN trunking allows a single network adapter to behave as "n" virtual network adapters, where "n" has a theoretical upper limit of 4096 but is typically limited to 1000 VLAN network segments.  In the case where a single gigabit Ethernet adapter is trunked in place of multiple FastEthernet adapters, higher performance at lower cost with increased flexibility can be achieved.  This really is the best of all worlds.  In this article, I will give you an overview of VLAN trunking, how it works, and what it is used for.

Applications of VLAN Trunking:
Here are some common examples of Network Devices that benefit from VLAN trunking:

  • Routers
  • Firewalls (software or hardware)
  • Transparent proxy servers
  • VMWare hosts
  • Wireless Access Points

Routers can become infinitely more useful once they are trunked into the enterprise switch infrastructure.  Once trunked, they become omnipresent and can provide routing services to any subnet in any corner of the enterprise network.  This is in essence what a routing module in a high-end core or distribution L3 (Layer 3) switch provides.  This technique can be a poor man's substitute for a high-end routing module on a switch, or it can complement the high-end L3 switch by providing additional isolated routed zones for test labs, guest networks, and any other network segment that requires isolation.

Firewalls are another device that can greatly benefit from VLAN trunking now that all the big players like Cisco, Nokia/CheckPoint, and NetScreen support it.  In today’s high stakes environment where security concerns are ever increasing, the more firewall zones (subnets connected by separate virtual or physical network adapters) a firewall provides the better.  With the exception of NetScreen firewalls, firewalls can only block potentially hazardous traffic between zones and not traffic within the same zone.  Therefore, the more you separate devices like routers and servers by logical function and security level, the better off you are since you can limit unnecessary traffic and mitigate many security threats.  Since VLAN trunking provides a nearly unlimited number of virtual network connections at a lower cost and higher performance, it is the perfect addition to firewalls.  You can read more on this in:

Understand how to design a secure firewall policy

Increase firewall protection with a better network topology

Transparent proxy servers such as a Windows server running Microsoft ISA or a Linux server running Squid can now be built with a single gigabit Ethernet adapter costing as little as $40.  A traditional proxy server can be built with a single network connection, but a transparent proxy server usually cannot.  Since transparent proxy servers can be implemented with zero client deployment or SOCKS compliance; they are an extremely attractive new technology.  Trunking just makes it that much simpler and cheaper to implement.

VMware hosts are servers that host multiple virtual servers for the purpose of server virtualization, or system modeling for laboratory testing and research.  Although VMware already provides the ability to have multiple VLANs within the VMware host, its ability to connect those VLANs to physical VLANs is limited by the number of network adapters on the VMware host.  A VMware host can provide up to 3 network connections to each virtual machine.  Since applications cannot tell the difference between a virtual adapter and a physical one, a VMware host armed with a trunked interface is significantly more flexible and simpler to manage.

One of the hottest new applications of VLAN trunking is wireless networking.  The new Cisco AP 1200, for example, can behave as 16 virtual wireless LAN infrastructures.  Some VLANs can be used for low-security guest Internet access, others for minimum-security enterprise users, and administrators can be put on a high-security VLAN with enhanced firewall permissions.  All this can be achieved using a single Wi-Fi infrastructure to emulate up to 16 Wi-Fi infrastructures.  The Cisco AP 1200 does this by assigning each of the 16 VLANs its own Wi-Fi SSID, so when you look at it from NetStumbler (a free wireless sniffer), you will think you are looking at up to 16 different wireless networks.  Those 16 VLANs are then trunked over the AP 1200's FastEthernet port.  This offers nirvana in wireless LAN capabilities.

VLAN encapsulation types:

There are several types of VLAN encapsulation.  The two most common are Cisco's proprietary ISL (Inter-Switch Link) and the IEEE 802.1q specification.  ISL is an older standard that Cisco was using to connect its switches and routers, but now that 802.1q is ratified, all of the newer Cisco gear supports either both ISL and 802.1q or only 802.1q.  Older Cisco equipment may only support ISL trunking, so you must look up the individual specifications of your gear before attempting to connect them.

The 802.1q standard works by injecting a 32-bit VLAN tag into the Ethernet frame of all network traffic, in which 12 of those bits define the VLAN ID.  The VLAN ID simply declares which VLAN the Ethernet frame belongs to, and the switch uses that ID to sort the frames into their proper VLANs.  Once a frame reaches the end of the line or hits a non-trunked port, the VLAN tag is stripped from the frame because it is no longer needed.  This also means that if you attempt to trunk a host to a non-trunked port, it obviously will not work, because that non-trunked port will strip the VLAN tags upon entry.  Note that there are very serious security implications of using VLAN technology; I will elaborate on that in a future article on VLAN Layer 2 security.
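The tag layout is easy to see in code. Here is a minimal sketch that extracts the VLAN ID and priority bits from a tagged Ethernet frame; the frame bytes at the bottom are constructed by hand purely for illustration:

```python
import struct

def parse_8021q(frame: bytes):
    """Return (vlan_id, priority, inner_ethertype), or None if the frame is untagged."""
    # Bytes 12-13 of an Ethernet frame hold the EtherType; 0x8100 marks an 802.1q tag.
    (ethertype,) = struct.unpack_from('!H', frame, 12)
    if ethertype != 0x8100:
        return None
    # The next 16 bits are the Tag Control Information: a 3-bit priority,
    # a 1-bit CFI/DEI flag, and the 12-bit VLAN ID mentioned in the text.
    (tci,) = struct.unpack_from('!H', frame, 14)
    priority = tci >> 13
    vlan_id = tci & 0x0FFF
    (inner,) = struct.unpack_from('!H', frame, 16)
    return vlan_id, priority, inner

# Hand-built example frame: dst MAC, src MAC, 802.1q tag for VLAN 20 at priority 5,
# followed by the inner EtherType for IPv4.
frame = (bytes.fromhex('ffffffffffff') + bytes.fromhex('005056000001')
         + struct.pack('!H', 0x8100)
         + struct.pack('!H', (5 << 13) | 20)   # TCI: priority 5, VLAN 20
         + struct.pack('!H', 0x0800))          # inner EtherType: IPv4
print(parse_8021q(frame))  # (20, 5, 2048)
```

The two 16-bit fields together (the 0x8100 marker plus the TCI) are the 32-bit tag the article describes, with the VLAN ID in the low 12 bits of the TCI.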

Given that a VLAN tag must be inserted into each and every Ethernet frame, there is a little overhead in terms of slightly increased frame sizes and some CPU overhead required to inject the tags.  Because of this, separate physical network adapters will always perform better than virtual network adapters sharing a single adapter of the same speed.  But remember, this performance deficiency is quickly reversed if a single gigabit Ethernet adapter is used in place of multiple FastEthernet adapters.  Given all the rewards of VLAN trunking, the small overhead is more than justified.

Trunking requirements:
VLAN trunking requires that the network switch, the network adapter, and the drivers for the operating system all support VLAN tagging in order for them to trunk.  Almost any enterprise-grade switch made by Cisco, Extreme, Foundry, and others supports 802.1q.  A few examples on the smaller scale are Cisco's 2950 series and Netgear's FSM726.  Most high-end client adapters support VLAN trunking, but the most common ones you will find are the Intel Pro/100 and Pro/1000 adapters because they are included on almost every server manufacturer's motherboard.  For those without an integrated Intel adapter, a separate Pro/1000 PCI card can be bought for as little as $40.  Driver support on the Intel adapters is excellent and covers almost everything from BSD to Linux to Windows client and server operating systems.  My follow-up article on how to actually implement VLAN trunking will focus on Cisco and Intel equipment.

Categories: Cisco, Microsoft, Networking, Security, Servers Tags:

Building a Redundant and Manageable DHCP infrastructure

April 17th, 2003 2 comments

Table of contents

  • Introduction
  • Designing your TCP/IP network
    • Performance considerations and the proper sizing of Subnets
    • Designing a clean “Binary nice” subnet site
    • Routing of Subnets with Layer 3 switches
  • DHCP Relay
    • Using an NT or Win2k DHCP relay server in each subnet
    • Using a Layer 3 switch to relay DHCP for all VLANs
  • DHCP Redundancy and Configuration
    • Setting up two non-overlapping DHCP servers for maximum availability
    • Building DHCP scopes and setting the scope and server options
  • Using MAC address “security” on DHCP
  • Advanced Switches with 802.1x Port Based Access Control and EAP
  • Conclusion

Introduction

This document is meant as an introduction and overview on how to build a redundant and easy-to-manage DHCP infrastructure with modern technology.  DHCP is a critical service that needs to be thoroughly integrated into a good network design for a practical and functional network infrastructure.  Because it is impossible to talk about DHCP without talking about network infrastructure, I will start off by covering some basic TCP/IP network design.  Although a basic understanding of TCP/IP networking concepts and Cisco layer 3 switch configuration is a prerequisite to fully comprehend all of the material, you can still read it at a high level to get a good basic understanding of this technology.  Doing so will help you work better with professional networking consultants.

Designing your TCP/IP network

Performance and Sizing of Subnets:

In order to design a high-performance and low-congestion network, we must understand what the enemies of network performance are.  The biggest enemy of network performance in the past was data collisions with the use of Ethernet repeaters (AKA hubs).  Any time data is transmitted from one computer to another, it is repeated to every single port of the hub, which causes congestion for everyone.  In today's networks, this is a thing of the past because Ethernet switches have reached such a high economy of scale and are so affordable that it would almost be silly to continue to purchase Ethernet repeater technology.  Data collisions have all but become a non-issue on modern Ethernet networks.  Ethernet switches isolate traffic between two computers while keeping all other ports clear and open for all the other computers on the switch.  Because of this, the new king of congestion is the broadcast storm.

Computers (especially the ones running NetBEUI) have a nasty habit of calling out or announcing to the entire TCP/IP subnet, forcing the Ethernet switch that normally likes to keep traffic isolated to send that data stream to every port on the same subnet.  Even worse, sometimes every computer on that subnet has to respond to the sender, causing the original broadcast to be amplified a thousand times.  Unfortunately, this sometimes puts us back into the same predicament that Ethernet hubs had to constantly deal with.  The only way to combat this is to keep the number of hosts in a single broadcast domain to a minimum.  That means probably no more than 128 computers on a single TCP/IP subnet.  I have seen sites with thousands of computers on a single subnet, and I can tell you it is not pretty when monitoring the broadcast storms.  In fact, it was bad enough to kick people out of their terminal server sessions a dozen times a day because of network instability!

Designing a clean “Binary nice” subnet site:

We will start with the premise that we have a single LAN site on a single campus.  While it is possible to run DHCP over wide area networks, it is not considered best practice, so we will stick to a single LAN in this paper.  The site will have up to 1000 users with 1000 computers broken down into 256-host sized VLANs (Virtual Local Area Networks created by logically segmenting a network with a managed layer 2 or layer 3 switch) with no more than 100 users per VLAN, leaving room to spare.  This means we will require a minimum of 10 VLANs on this site.  Additionally, because we want to be able to summarize this site into a single supernet when routing, we will round up to the next "nice" binary number, 16.  We will use the private class A scheme of 10.x.x.x for our company, so this site will run entirely under the network ID of 10.0.0.0/20.  For those of you new to this terminology, this is the abbreviated notation for the network ID 10.0.0.0 with subnet mask 255.255.240.0, which defines all IP addresses ranging from 10.0.0.0 to 10.0.15.255.  By using "binary nice" numbers like 2, 4, 8, 16, 32, and so on, I am able to define the entire site by the single network ID of 10.0.0.0/20.  The reason for this is not solely aesthetic; it greatly simplifies routing and security rules because I can define the entire network with a single statement.  This not only simplifies management, but also improves performance and reduces the chance of mistakes.  Some of you at this point may be balking at the idea of running 10 separate subnets for "only" 1000 users, but bear with me, it is not that difficult to handle if you use the right technology.  Also keep in mind that there are 65,536 256-host sized subnets in the 10.0.0.0/8 class A private network.  This means that you can have 4096 of these sites with 16 subnets each.  Obviously, the next campus LANs of similar size will be defined as 10.0.16.0/20, 10.0.32.0/20, 10.0.48.0/20, and so on.
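You can check this "binary nice" arithmetic yourself with Python's standard ipaddress module; this short sketch confirms the /20 site size and the per-site /24 subnets described above:

```python
import ipaddress

site = ipaddress.ip_network("10.0.0.0/20")
vlans = list(site.subnets(new_prefix=24))  # carve the site into /24 subnets

print(site.num_addresses)    # 4096 addresses: 10.0.0.0 through 10.0.15.255
print(len(vlans))            # 16 subnets per site, the "nice" binary round-up
print(vlans[0], vlans[-1])   # 10.0.0.0/24 10.0.15.0/24

# The next same-sized campus sites line up cleanly on /20 boundaries
next_site = ipaddress.ip_network("10.0.16.0/20")
print(next_site.network_address)  # 10.0.16.0
```

Because every site falls on a /20 boundary, each one can be expressed as a single route or firewall rule, which is exactly the management win the text describes.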

Routing of Subnets with Layer 3 switches

Now that we have the basic network laid out, we must build it.  The best way to handle this is with a managed Ethernet layer 3 switch such as a Cisco Catalyst 6500 series with MSFC, though a Cisco 3550-12G can be used instead for smaller networks or tighter budgets.  (Note that Cisco is not the only company that can fill this role, but for the purposes of this paper, I will use the Cisco example.  Additionally, the 3550-12G makes for a great poor man's core/distribution layer switch at 1/10th the cost.)  Both of these switches can act as the core, or core and distribution layers, of the network.  Then we can proceed to connect access layer switches such as the Cisco 2980 (you can use cheaper unmanaged switches for this too, but understand that you can't break them up into additional VLANs or have trunking support) to the 6500 via gigabit Ethernet uplinks.  Then distribute these access layer switches around the campus so that the actual Cat5e or Cat6 copper runs to the clients are kept to a minimum length, vastly reducing cabling cost in material and labor while increasing signal reliability.  Once this 2- or 3-tier design is in place, we can proceed to configuring the switches.  The Cisco 2980 access layer switch has VLAN or bridge group capabilities, but has no routing capabilities of its own.  For that, it can connect or trunk into the core switch using 802.1q trunking over the gigabit uplink via Cat6 copper or full duplex fiber.  The core switch, using the 6500 MSFC or the 3550-12G, can act as a massive VLAN router to handle all routing requests and act as the default gateway for every VLAN on all tiers by configuring a single static routing table and/or a protocol such as EIGRP, RIP, or OSPF.  Additionally, it can also act as the DHCP relay agent for all the VLANs, which is definitely easier and cheaper than setting up at least 10 separate Windows or Linux boxes to act as DHCP relay agents.

(Diagram: six VLANs using six 2980 L2 switches and a 3550-12G as the core/distribution layer switch.)

DHCP relay:
A DHCP relay agent sits in place of an actual DHCP server in a TCP/IP subnet.  It basically extends the reach of the DHCP server without the need for multiple DHCP servers on each subnet by acting as the DHCP server's helper agent in a remote subnet.  DHCP relay does not manage IP addresses itself; it relays the DHCP request to the DHCP server on behalf of the client, obtains the IP address, and then hands the IP address out to the asking client on behalf of the DHCP server.  The only other option is a single DHCP server with multiple Ethernet ports sitting on each VLAN, but that has some serious scalability limitations.  On Cisco layer 3 switches, DHCP relay can easily be achieved with the single command ip helper-address 10.0.14.255 entered into each VLAN interface as shown below.  10.0.14.255 will be the broadcast address of the VLAN that will home my DHCP servers.  You can use a specific IP address here instead of a broadcast address, but that would mean having only one active DHCP server, or you must cluster two or more DHCP servers on a single IP address.  For our example, the following are configuration examples with VLAN definitions (AKA bridge groups), default gateways, and DHCP relay configurations for Cisco or IEEE standard configurations.
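Conceptually, the relay's job is simple: take the client's broadcast, stamp in its own subnet-facing address (the giaddr field in DHCP), and forward it toward the helper address.  This Python sketch (names and the dict representation are mine, purely illustrative) models that step:

```python
def relay_discover(client_packet: dict, relay_interface_ip: str,
                   helper_address: str) -> dict:
    """Forward a client's broadcast DHCP request toward the helper address."""
    forwarded = dict(client_packet)
    # Stamp the relay's own interface address into the giaddr field so the
    # DHCP server can tell which subnet the request originated from.
    if forwarded.get("giaddr", "0.0.0.0") == "0.0.0.0":
        forwarded["giaddr"] = relay_interface_ip
    # Re-address the broadcast toward the configured helper address.
    forwarded["destination"] = helper_address
    return forwarded

pkt = relay_discover({"op": "DISCOVER", "giaddr": "0.0.0.0"},
                     "10.0.3.1", "10.0.14.255")
print(pkt["giaddr"], pkt["destination"])  # 10.0.3.1 10.0.14.255
```

The giaddr stamp is what later lets the DHCP server pick the right scope for the client's subnet, as covered in the scope-matching discussion further down.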

IEEE standard configuration on a Cisco 2948-L3 switch used as a Core/Distribution layer switch:

Bridge group declarations

bridge 1 protocol ieee     ! Declares VLAN 1
bridge 1 route ip          ! Enables routing in VLAN 1
bridge 2 protocol ieee     ! Declares VLAN 2
bridge 2 route ip          ! Enables routing in VLAN 2
! ... declare VLANs 3 - 14 yourself ...
bridge 15 protocol ieee    ! Declares VLAN 15
bridge 15 route ip         ! Enables routing in VLAN 15

Interface configurations

interface BVI1                       ! Defines VLAN 1
 ip address 10.0.1.1 255.255.255.0   ! Default gateway listener for VLAN 1
 ip helper-address 10.0.14.255       ! DHCP relay forwards to 10.0.14.255
 no ip directed-broadcast
interface BVI2                       ! Defines VLAN 2
 ip address 10.0.2.1 255.255.255.0   ! Default gateway listener for VLAN 2
 ip helper-address 10.0.14.255       ! DHCP relay forwards to 10.0.14.255
 no ip directed-broadcast
! ... fill in VLANs 3 - 14 yourself ...
interface BVI15                      ! Defines VLAN 15
 ip address 10.0.15.1 255.255.255.0  ! Default gateway listener for VLAN 15
 ip helper-address 10.0.14.255       ! DHCP relay forwards to 10.0.14.255
 no ip directed-broadcast

Port configurations

interface FastEthernet1      ! Defines Fast Ethernet interface 1
 no ip address
 no ip directed-broadcast
 bridge-group 1              ! Binds interface 1 to VLAN 1
interface FastEthernet2      ! Defines Fast Ethernet interface 2
 no ip address
 no ip directed-broadcast
 bridge-group 2              ! Binds interface 2 to VLAN 2
! ... fill in interfaces 3 - 14 yourself ...
interface FastEthernet15     ! Defines Fast Ethernet interface 15
 no ip address
 no ip directed-broadcast
 bridge-group 15             ! Binds interface 15 to VLAN 15

(Note that VLAN 14 is the only bridge group interface that does not need the helper-address because it contains the DHCP servers themselves.  Also note that VLANs 11 through 15 will be used for spares, DMZs, or server farms.  I didn't have a 3550-12G with gigabit Ethernet handy, so I used a 2948-L3 with Fast Ethernet instead as the core switch for this example, but the idea is the same.)

Additionally, some of the Cisco L3 switches use a different type of command line interface.  The following is an example with a Cisco 6509 MSFC L3 module:

interface Vlan1

description Subnet 1

ip address 10.0.1.1 255.255.255.0

ip helper-address 10.0.14.255

no ip redirects

no ip directed-broadcast

This looks quite a bit different than the 2948-L3, but still uses the same DHCP relay command.  The VLAN command accomplishes the same thing as the BVI command, but it is a little easier with the 6509 type CLI (command line interface) because you don’t need to declare the IEEE bridge protocol.  The Cisco 2948-L3 CLI must manage the routing as well as the switching and port configurations.  The 6509 MSFC module is more of a dedicated routing and management module with the physical switch ports handled by a separate CLI.  You can consult your switch manual or Cisco’s web site for more information on your particular hardware.

While it is possible to use a Windows or Linux server as a DHCP relay agent, it would seem to be overkill to dedicate 15 or more separate machines to do the job of a single command on your layer 3 switch.  Note that without this technique of using the L3 switch, it would be extremely impractical to implement this degree of TCP/IP segmentation on a LAN: you would need 15 separate servers for DHCP relay agents and 15 traditional routers to join the 15 VLANs, all of which would be absurd.  The point is, take the easy route and use a layer 3 switch at the heart of your network.  It opens up all sorts of possibilities.

DHCP redundancy and configuration:

Setting up two non-overlapping DHCP servers for maximum availability
As I mentioned earlier, the DHCP servers will reside in the subnet 10.0.14.0/24 along with many of your other servers.  Since DHCP is an extremely low-activity service, my recommendation is that you host the DHCP servers on your Windows NT or Windows 2000 domain controllers or file servers along with other services like DNS and WINS.  You only need to find two servers to act as homes.  Once you do, simply install the DHCP service and proceed to configure each server to serve only half the subnet with non-overlapping scopes (it is also good to cluster your DHCP servers, but that requires Windows Advanced Server, which may not be an option for everyone).  The first DHCP server will be configured with a scope of host numbers 10-109, and the second DHCP server will host 110-219.  This leaves hosts 1-9 reserved and 220-254 for static IP addresses for things like printers.  This is what is called a 50/50 configuration; you may also hear recommendations for an 80/20 configuration where IP addresses are a bit scarcer.  In this architecture, however, we are leaving so much breathing room that only 50% of the total subnet is more than enough for all DHCP clients.  I also recommend not using DHCP reservations because they make the management of DHCP servers extremely messy by fragmenting the scopes.  I would much rather assign people addresses in the 220-254 range (make this range as large as you need) than let other system administrators or users pick their favorite number.  Because the DHCP relay agent is forwarding to the broadcast address where these two DHCP servers reside, it is basically a first-response, first-serve environment.  But it doesn't matter, since all of our users can fit in a single DHCP server with tons of room to spare.  Statistically, if the two computers have equal load and are equal in speed, users will end up half and half on each DHCP server.
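The 50/50 split described above is mechanical enough to tabulate.  This small Python sketch (the helper name is mine, not a Microsoft tool) prints the non-overlapping ranges each server would own in every VLAN:

```python
def scope_ranges(subnet_prefix: str):
    """Return the 50/50 split for one /24 subnet, e.g. subnet_prefix='10.0.1'."""
    server1 = (f"{subnet_prefix}.10", f"{subnet_prefix}.109")   # DHCP server 1
    server2 = (f"{subnet_prefix}.110", f"{subnet_prefix}.219")  # DHCP server 2
    static = (f"{subnet_prefix}.220", f"{subnet_prefix}.254")   # printers, etc.
    return server1, server2, static

for vlan in range(1, 11):  # VLANs 1 through 10
    s1, s2, static = scope_ranges(f"10.0.{vlan}")
    print(f"VLAN{vlan}: server1 {s1}, server2 {s2}, static {static}")
```

Because the two servers' ranges never overlap, either one can answer a given client's request and there is no risk of duplicate address assignment even without clustering.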

Building DHCP scopes and setting the scope and server options

For our example, we will use Windows 2000.  On these DHCP servers, you will need to create 10 new scopes using the create scope wizard.  During the creation of these scopes, simply name them VLAN1 through VLAN10 and enter the corresponding IP ranges.  Be sure to enter only the default gateway for each scope and don't enter any other DHCP options.  This comes down to the distinction between scope options and global (server) options, and I often see people confuse the two.  It is possible to put any type of DHCP attribute, like default gateway, DNS server, or WINS, under either scope or server options, but there is only one proper way to do it.  The default gateway should always be put under scope options, as you have already done during the creation of the scopes; all other standard attributes like WINS and DNS should be placed under server options (formerly known as global options under NT4).  Then the server options will automatically be inherited into all of the scopes, saving you a lot of manual entry and the possibility for errors.  To configure the server options, simply right-click on server options and hit "configure options" to get the following window:

Set 006 for your DNS servers

Set 015 for your default domain suffix

Set 044 for your WINS servers

Set 046 for 0x8 for your WINS/NBT Node type

Now imagine doing this 10 times, once for each scope; it would be silly.  Putting these additional settings under server options is the best way to go.  Then repeat this procedure on the second DHCP server, doing everything the same.  The only difference is that the host range will be 110-219 instead of the 10-109 on the first DHCP server.  Some of you astute readers at this point may be wondering how to actually bind all the different scopes to their respective network IDs.  The answer is surprisingly simple: nothing!  When you created the scopes, you had to define the separate IP ranges of the corresponding subnets they should operate in.  That alone is enough configuration to match the scopes up with the subnets they will serve.  When the DHCP server receives the forwarded request from the DHCP relay agent (or IP helper), it simply examines the source IP of the DHCP relay agent that forwarded the request, matches it up to the scope that serves the subnet of that relay agent, and grants an IP configuration set back to the relay agent.  That IP configuration set is then passed on by the DHCP relay agent to the original client that made the DHCP request in the first place.
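That matching step is just a longest-prefix style lookup of the relay's address against the scope table.  Here is a Python sketch of the idea (the scope table and function are hypothetical illustrations, not the Windows 2000 implementation):

```python
import ipaddress

# Hypothetical scope table: one /24 scope per VLAN, named as in the wizard
scopes = {ipaddress.ip_network(f"10.0.{v}.0/24"): f"VLAN{v}"
          for v in range(1, 11)}

def match_scope(relay_source_ip: str) -> str:
    """Pick the scope whose subnet contains the relay agent's source address."""
    addr = ipaddress.ip_address(relay_source_ip)
    for network, name in scopes.items():
        if addr in network:
            return name
    raise LookupError("no scope serves this relay agent")

# A request relayed by VLAN 3's gateway (10.0.3.1) lands in the VLAN3 scope
print(match_scope("10.0.3.1"))  # VLAN3
```

This is why defining the scope ranges is the only "binding" needed: the relay's address alone identifies the subnet, and therefore the scope, the client belongs to.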

Then finally, after all that, be sure you "activate" your DHCP servers and authorize them by right-clicking on the DHCP server and choosing "authorize".  Once authorized and activated, you have just set up two DHCP servers to serve 10 separate subnets with the aid of a single layer 3 switch.  Note that this type of infrastructure is extremely scalable and could just as easily serve 1000 scopes if needed.  A DHCP server only has to do one transaction per user per week, so even 1000 scopes is not a lot of work for even a slow 486 computer.

Be aware that this type of architecture absolutely mandates a good DNS and WINS infrastructure.  You cannot rely on the old broadcast discovery techniques like you could under a flat subnet where everyone lived.  But that is a great performance advantage, and it puts less reliance on luck compared to broadcasting and praying for a response.  Rest assured that having a disciplined TCP/IP name resolution infrastructure will pay great dividends as all the inconsistencies and mysteries of legacy-style Windows networking disappear.

Using MAC address “security” on DHCP:

You can set up casual "security" for your network by only issuing IP addresses to pre-reserved MAC addresses.  The reason I say that sarcastically is because it is security through obscurity.  It can only be used for casual security because it is based on the honor system.  MAC addresses can be changed on any network adapter within seconds; your MAC address is whatever you declare it to be.  This is the same reason why MAC addresses can't really secure wireless networks: they are so easy to forge.  The other problem with this "security" scheme is that even if you don't assign someone an IP address, that doesn't mean they can't simply type in an IP address manually and still participate on your network.  Additionally, maintaining a 12-digit hex number per user gets quite cumbersome for a thousand users.  This technique keeps the non-technical person out, but it has no security capabilities beyond that.  Real security needs to be handled at the switch level with 802.1x and EAP.

Advanced Switches with 802.1x Port Based Access Control and EAP:

Some advanced switches like the Cisco Catalyst 6500 support 802.1x port-based access control and the Extensible Authentication Protocol (EAP).  Basically, this means no authentication, no access.  The Ethernet port remains closed until you have authenticated successfully over EAP.  Unlike the previous method using MAC reservations on the DHCP server, you can't just forge the MAC address or manually enter an IP address; either hack is useless when 802.1x/EAP is employed on the access switch.  When 802.1x is employed, a client connecting to a port on the switch must support the 802.1x protocol.  Currently, Windows XP is the only operating system that natively supports 802.1x, but Microsoft is promising 802.1x support for legacy operating systems like 98, NT, and 2000 by the end of 2002.  Basically, when the 802.1x-capable client connects to the switch, it must send EAP credentials to the switch.  The switch then forwards the EAP message to the RADIUS (Remote Authentication Dial-In User Service) server.  If the RADIUS server accepts the credentials, it will respond with an EAP success message to the switch.  Only then will the switch transition the port to an open state and permit DHCP requests and full network participation.  Additionally, this same RADIUS infrastructure can be used to provide enterprise-grade wireless security.
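The port's gatekeeping logic can be sketched as a toy model in Python (the function, credential store, and messages below are all hypothetical stand-ins for the real switch/RADIUS exchange):

```python
def authenticate_port(credentials: dict, radius_accepts) -> str:
    """Toy model of an 802.1x port: closed until RADIUS returns EAP success."""
    if radius_accepts(credentials):
        # EAP success: the switch opens the port for normal traffic
        return "port open: DHCP and full network access permitted"
    # No success message: only EAPOL authentication frames get through
    return "port closed: only EAPOL traffic allowed"

# Hypothetical credential store standing in for a real RADIUS server
valid = {("alice", "s3cret")}
radius = lambda c: (c["user"], c["password"]) in valid

print(authenticate_port({"user": "alice", "password": "s3cret"}, radius))
print(authenticate_port({"user": "eve", "password": "guess"}, radius))
```

The key point the model captures is ordering: the DHCP request discussed throughout this article never even reaches the relay agent until the RADIUS exchange succeeds.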

For more information on 802.1x and Cisco Switches, see this Cisco configuration guide for port based authentication.

Conclusion:

All the old concepts and ideas on hubs, switches, routers, and DHCP servers have been revolutionized by this new approach.  Not only are we able to create a more manageable and robust DHCP and Network infrastructure, we are able to do it with less money, equipment, and time.  It is simply a matter of taking advantage of what new technology has to offer.

Categories: Cisco, Microsoft, Networking Tags: