Welcome Justin James to ForMortals.com!

I’d like to personally welcome Justin James and his new Critical Thinking blog to this site.  I’ve known Justin professionally and personally for more than two years now, and I can attest that his blog is aptly named.  Justin is very knowledgeable about real-world IT and cuts through the typical baloney you read in the mainstream IT media.  Even if you or I don’t agree with him, you can appreciate his honesty and thoughtfulness.  Justin is also more than happy to debate you.  So please join me in welcoming Justin James to ForMortals.com.

Nice to be here!

Hello to all! By way of introduction, my name is Justin James, and I have been writing over at TechRepublic for some time now. Over there, they like it if I keep my posts strictly related to programming and development, but the bulk of my days are spent with things that are distinctly not that… things like systems administration, network engineering, and so on. I was looking at my options for expanding my writing a bit, the way it was originally at TechRepublic (when it was my personal blog, instead of me being one writer under “Programming and Development”), when George started this up. I jumped at the chance to expand my topics a bit, and I hope you enjoy the posts!


Comments on inaccurate testimony at the FCC Stanford hearing

Update: How Comcast customers can seed 100 times faster and bypass TCP resets

The FCC hearing at Stanford University on April 17th, 2008 was filled with inaccurate testimony from various witnesses.  Since that testimony seems to be carrying significant weight both on Capitol Hill and in the media, I feel compelled to set the record straight.  I have filed a copy of this letter on FCC docket 07-52.

Problems with Jon Peha’s testimony
Jon Peha testified that BitTorrent was like a telephone call and implied that if Comcast uses a TCP reset to stop a TCP stream, that constitutes a blockage of BitTorrent.  Furthermore, Professor Peha implied through his telephone analogy that if BitTorrent is blocked, the user must manually redial to reestablish the connection.  These assertions are highly inaccurate, and here’s why.

The first problem is that Jon Peha did not understand the multi-stream nature of BitTorrent and P2P.  Peha seemed very surprised immediately before our panel at Stanford when I told him that a P2P download typically uses 10 to 30 TCP streams at the same time.  His surprised reply to me was “all active?” and I replied yes.  The reality is that if a certain percentage of BitTorrent TCP streams are reset and temporarily blocked by an ISP, say 15% for example [1], then the “Torrent” (the file that’s being exchanged among multiple peers over the BitTorrent protocol) is essentially slowed down by an average of 15%.  In other words, the “Torrent” would suffer a 15% partial blockage, which is accurately described as a “delay” since the file transfer didn’t actually stop.  This would be like filling a bathtub with 20 faucets and closing 3 of them: the rate of water flowing into the tub would slow but not stop.
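The bathtub arithmetic above can be sketched numerically.  The stream count and per-stream rate below are illustrative assumptions, not measured values:

```python
# Illustrative sketch of partial blockage: a torrent spread across many
# parallel TCP streams slows roughly in proportion to the share of
# streams that are reset. All numbers are assumptions for the example.
streams = 20              # assumed active TCP streams (the 20 faucets)
rate_per_stream = 50.0    # assumed KB/s per stream
reset_fraction = 0.15     # 15% of streams reset, per the example above

full_rate = streams * rate_per_stream
remaining_rate = full_rate * (1 - reset_fraction)
slowdown = 1 - remaining_rate / full_rate

print(f"Full rate: {full_rate:.0f} KB/s")           # 1000 KB/s
print(f"After resets: {remaining_rate:.0f} KB/s")   # 850 KB/s
print(f"Slowdown: {slowdown:.0%}")                  # 15%
```

The transfer continues at a reduced rate, which is the sense in which a reset is a delay rather than a blockage.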

The second problem with Jon Peha’s testimony is his implication that the user must take some sort of action to resume the BitTorrent connection or else it won’t resume.  Peha’s assertion can easily be proven false by a simple experiment with BitTorrent.  One can easily confirm that BitTorrent will always resume a lost connection within a matter of seconds without any user intervention just by physically disconnecting a network cable on the test machine and reconnecting it.  Not only does BitTorrent automatically resume, it picks up where it left off and does not need to start all over again.  So even if all TCP streams in a Torrent were simultaneously blocked for a short period of time, the transfer will quickly resume by itself and eventually finish.  Therefore this is by definition a “delay” and not a “blockage”.
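That resume behavior can be sketched as a simple retry loop.  This is a hypothetical illustration of the principle, not BitTorrent’s actual code, and the piece counts and delays are made up:

```python
import time

def download_with_retry(pieces_total, fetch_piece, retry_delay=2.0):
    """Fetch pieces in order; on a connection reset, pause briefly and
    resume from the last completed piece instead of starting over."""
    completed = 0
    while completed < pieces_total:
        try:
            fetch_piece(completed)     # may raise ConnectionResetError
            completed += 1             # progress survives resets
        except ConnectionResetError:
            time.sleep(retry_delay)    # automatic pause, no user action
    return completed

# Simulate a reset while fetching piece 3; the transfer still finishes.
failures = {3}
def flaky_fetch(index):
    if index in failures:
        failures.discard(index)
        raise ConnectionResetError("simulated reset")

print(download_with_retry(10, flaky_fetch, retry_delay=0.0))  # prints 10
```

The reset only inserts a pause; no user ever has to "redial."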

This is not to say that Comcast’s existing form of network management is without problems, because it is clear that the system has flaws and unintended consequences like the accidental blockage of IBM Lotus Notes.  The use of TCP resets also has a more drastic effect on rare Torrents, which are BitTorrent files that are not popular and have few seeders or other peers with parts of the file to download from.  These rare Torrents aren’t healthy to begin with, and a TCP reset can in some cases trigger a complete temporary blockage.  The rare Torrent will still get there eventually, but it will suffer significantly more than a normal Torrent that is resilient to partial blockage.

It should be noted that BitTorrent in general is not an appropriate file transfer protocol for rare Torrents.  BitTorrent tracker sites tend to rank and sort Torrents based on “health,” which is a function of the number of available seeders and pre-seed peers.  Users generally tend to avoid the “unhealthy” Torrents at the bottom of the list.  Since Comcast offers a vastly superior alternative in the form of 1 gigabyte of web storage space, Comcast customers can use that service to distribute files 10 to 20 times faster than any single Comcast BitTorrent seeder ever could.  To further illustrate this point, Richard Bennett

Problems with Robert Topolski’s testimony
Robert Topolski also had problems in his testimony.  Topolski, a software tester who does not work in the networking field, insists that the TCP reset mechanism isn’t common in network devices and declared that I was wrong in my testimony.  In my experience as a network engineer who designed and built networks for Fortune 100 companies, the TCP reset mechanism is common in routers and firewalls.  For many years, Internet service providers, including LARIAT (owned and operated by Brett Glass, who has filed comments in this docket), have used TCP RST packets to protect the privacy of dialup Internet users.  When a dialup user’s call is over, RST packets are sent to terminate any remaining connections.  This prevents a subsequent caller from receiving information, some of which might be confidential, that was intended for the caller who disconnected.  Thus, the transmission of RST packets by a device other than the one(s) which established a connection, for the purpose of informing the endpoints that the connection has been terminated, is not only commonplace but salutary.  Network architect Richard Bennett, who works for a router maker, explained to me that TCP resets are the standard mechanism used by consumer routers to deal with NAT table overflow, which itself is typically caused by excessive P2P connections.  Are we to believe a single software tester or three networking experts?

The second key problem with Topolski’s testimony is that to my knowledge, he has never provided any forensic data from Comcast in the form of packet captures that can be independently analyzed.  Even if Topolski did produce packet captures and we assumed that those packet captures were authentic, one man’s packet captures wouldn’t be a large enough sample to draw any conclusions by any legal or scientific standard.  The Vuze data may constitute a large enough sample, but it isn’t very granular because it doesn’t tell us what percentage of reset TCP sessions are due to an ISP versus other possible sources.

Furthermore, even if a TCP reset was used by Comcast at 1:45 AM, we cannot assume that there was no spike in congestion at 1:45 AM.  As I have indicated in the past, just 26 full-time BitTorrent seeders in a neighborhood of 200 to 400 users can consume all of the available upstream capacity in a DOCSIS 1.1 cable broadband network.  That means less than 10% of the population seeding all day and night can cause congestion at any time of the day.  Based on what little evidence Robb Topolski presented, no conclusions can be drawn regarding the question of whether Comcast uses TCP resets for purposes other than congestion management.
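As a back-of-the-envelope check of that 26-seeder figure (the capacity and per-user upload numbers below are my assumptions for illustration, not figures from the filing):

```python
# Rough check of the "26 seeders" congestion claim. Assumed figures:
# a DOCSIS 1.1 upstream channel provides roughly 10 Mbps of usable
# shared capacity, and each subscriber tier allows ~384 kbps of upload.
upstream_capacity_kbps = 10_000   # assumed shared upstream capacity
per_seeder_kbps = 384             # assumed per-user upload rate
neighborhood = 300                # midpoint of the 200-400 user range

seeders_to_saturate = upstream_capacity_kbps / per_seeder_kbps
share = seeders_to_saturate / neighborhood
print(f"Seeders needed to saturate the upstream: {seeders_to_saturate:.0f}")
print(f"Share of the neighborhood: {share:.1%}")
```

Under these assumptions, about 26 seeders (well under 10% of the neighborhood) saturate the shared upstream, which is consistent with the claim above.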

[1] The reason I use 15% as the example is that the Vuze data, gathered from thousands of users’ computers, indicated that a Comcast broadband network typically suffered reset rates of 14% to 23% across all TCP streams during 10-minute sampling periods.  That 23% figure is not restricted to just BitTorrent or P2P traffic, and even those TCP resets pertaining to BitTorrent aren’t necessarily from Comcast.  It’s quite possible that a reset actually came from the client on the other end, or from a customer-premises router on either end-point trying to conserve NAT (Network Address Translation) resources.  It is undeniable that a certain percentage of the TCP resets have nothing to do with Comcast’s network management practices.  We really can’t know what percentage of those TCP resets were due to Comcast, and figuring out the exact percentage isn’t trivial because there are so many factors to consider.

Note: Any assertions on behalf of Brett Glass and Richard Bennett that I have made in this document have been approved by Brett Glass and Richard Bennett.

Just moved to a new hosting service – MUCH faster now!

I just moved to a brand new hosting service from PowerDNN, and everything looks a LOT faster and the support is much better.  Unfortunately, that meant I had to make a clean start, which means I’ve lost all my data and user accounts.  If you were one of the first 23 people who signed up for an account, please accept my apologies and recreate your account.  I have manually re-entered all the blogs and user comments.

My old service provider, iHostASP.net, was absolutely atrocious; my site was either excruciatingly slow or completely non-functional.  The way iHostASP.net metered my bandwidth was also dubious, since they claimed I used 1 GB in 4 days with only 25 users on my site as I was getting started.  I have no idea how they counted, but they would have pushed me over my 10 GB cap.  PowerDNN costs $20 a month, which is more than double, but bandwidth is unlimited (within reason) and there are no limits on parent portals, so I think I’ll end up paying about the same but with much better service.

Service provider and CMS problems remain on this site

On Tuesday and Wednesday, from the afternoon until midnight, the service provider was having severe problems.  Hopefully the problem has been PERMANENTLY resolved, or I’m finding a new provider.  I hate to go to that trouble, since I’d have to move all my user accounts and all the other data in the SQL server.  This service provider appears to be very problematic from 4 PM to 1 AM Pacific Standard Time, so I’m looking for a new provider.

Aside from limitations like the lack of multi-category support and the lack of friendly URLs, there appear to be RSS issues with DotNetNuke.  Whenever I add a blog entry in the Technology and Policy subcategory, only those entries show up in RSS.

Intel’s Atom CPU flexes multithreading muscle

Intel’s new sub-2.5-watt Atom CPU (codenamed Silverthorne) is showing that it can run two process threads at the same time better than competing processors can.  The Atom’s hyperthreading feature (in-order SMT) allows the single-core processor to be seen as two logical processors by the operating system.  When both logical processors are used, which happens whenever more than one task is being performed on the computer, substantial performance gains can be made.

According to page 17 of these slides presented at IDF, the Atom shows a whopping 39% performance gain on SPECint_rate2000 when running two process threads, at a mere 17% increase in power consumption.  Hexus.net found that the Atom’s hyperthreading feature allowed it to improve CINEBENCH 9.5 performance by more than 53%.  It should be noted that SPECint_rate2000 doesn’t really stress the memory subsystem, so one would expect the SPECint_rate2006 gains from Atom hyperthreading to be lower.  However, it’s not really practical to expect a small mobile or embedded device to have 4 gigabytes of RAM with high memory bandwidth.
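Those IDF figures also imply a performance-per-watt gain, which is easy to check using only the 39% and 17% numbers quoted above:

```python
# Performance-per-watt implied by a 39% SPECint_rate2000 gain
# at a 17% power increase when the second thread is enabled.
perf_gain = 1.39       # 39% more throughput with SMT on
power_increase = 1.17  # 17% more power with SMT on

perf_per_watt = perf_gain / power_increase
print(f"Perf/watt improvement: {perf_per_watt - 1:.1%}")  # ~18.8%
```

In other words, enabling the second thread buys throughput at a better-than-linear power cost.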

Some, like Linus Torvalds, criticized these hyperthreading results because he feels that good hyperthreading performance can be viewed, from a glass-half-empty perspective, as poor single-threaded performance.  I disagree with him because having a second logical processor is very beneficial to a computer, especially when a single process locks up.  Modern compilers also have auto-parallelization features that try to take advantage of the second processor for single-threaded applications.

The closest competitor to the Intel Atom is Via’s Isaiah processor.  Recent benchmarks from Eeepcnew.de seem to indicate that Via’s Isaiah processor performs approximately 39% better than the Atom on raw integer performance and approximately 3.5% better on raw floating point performance.  While the Isaiah processor performs well and has reasonable power consumption for a desktop and some larger notebooks, it consumes nearly 10 times more power than the Intel Atom.  Furthermore, the results from Eeepcnew.de are for single threading, so once you factor in SMT performance gains, the Atom may actually perform better than the Isaiah.

Cool open source HDR (High Dynamic Range) Imaging software

Qtpfsgui has got to be one of the coolest applications I’ve seen in a while.  It’s a free, open source HDR (High Dynamic Range) imaging application that anyone can download and use.  It’s similar in features to Photomatix (very nice image samples here), which is a commercial HDR imaging application.

I played with Qtpfsgui by using my RAW images to fake the input sources, manipulating the F-stop on the RAW images.  Normally you need 3 photos with low, normal, and high exposure, but I generated those from a RAW image taken back in 2006 by outputting -1.5, 0, and +1.5 F-stops.  It’s obviously not as nice as having 3 actual photos, but it’s not easy to shoot the same scene 3 times without the subject moving, and most of the time you only have one RAW image.  Obviously it’s also not possible to go back in time and retake the photo.
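The F-stop trick above amounts to scaling the RAW file’s linear values by 2 raised to the EV offset.  Here’s a minimal sketch of that idea using a single made-up pixel value; a real RAW converter also applies tone curves and white balance, so this is only the core principle, not a faithful pipeline:

```python
# Synthesize an exposure bracket from one linear RAW-like value by
# scaling with 2**EV and clipping highlights at 1.0.
def expose(linear_value, ev):
    """Apply an exposure offset in stops to a linear sensor value."""
    return min(linear_value * 2 ** ev, 1.0)

pixel = 0.30  # assumed linear sensor value in the range [0, 1]
for ev in (-1.5, 0.0, +1.5):
    print(f"{ev:+.1f} EV -> {expose(pixel, ev):.3f}")
```

The three scaled values play the role of the underexposed, normal, and overexposed frames that HDR software would normally merge.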

So the bottom line is that you can bring out the shadows and the highlights of images at the same time and the results can look pretty stunning.

FCC hearings at Stanford

Here are the slides I presented at the FCC Network Management hearing at Stanford, and here’s a letter version I sent to the FCC which gives a much fuller explanation.  While the letter is a long read (3000+ words), I really hope you take the time to read it, since it gives you the full picture of what I was trying to present.  Vontv.net has the event on video, though they unfortunately edited out quite a bit.

The hearing at Stanford on Thursday drew probably one of the most hostile and uncivilized crowds you could expect, and it was sad to see such a circus act.  I didn’t feel too good about my presentation and missed a lot of key points because I was somewhat rattled and veered off the script in my hands.  Despite knowing what I might be in for, given Richard Bennett’s experience, I still got rattled and panicked a bit when the Commissioner skipped over me and then spent valuable time apologizing while my measly 5-minute clock was ticking.  Before I could finish introducing myself, the “Raging Grannies” started to shout me down, screaming “WHO PAID YOUR WAY, GEORGE!” when I’ve never taken money for any political activities from anyone in my entire life.  Maybe I need to grow thicker skin, but I was bothered that the Raging Grannies didn’t even know who I was and shouldn’t have had any beef with me, other than the fact that someone told them to shout me down.

I was disappointed by the audience because I thought surely that law students and other Stanford students would be there, since this is such an elite institution of higher learning.  I thought surely these kids would be interested in an FCC hearing, and even if they were going to come in with a bias, at least they would try to listen to an honest debate.  But other than one Stanford grad student I saw and spoke to, what you had was a bunch of people who basically had their cue to cheer and their cue to boo, and they already knew to shout down George Ou right off the bat.  By the end of the day, during public comments, half of the people were from Poor Magazine, and they all went on to vent their rage at anything corporate or American.  One guy indicated that he wanted to injure the economist on the second panel, while another pleaded with the FCC to stop the Defense Department from implementing a secret Internet2 project designed to give the military control of the Internet.  Maybe my expectations were a little unrealistic and overly romanticized, but this was downright ridiculous.

Even more ridiculous was Larry Lessig’s excruciating 50-minute circus act of an “opening speech,” which single-handedly eliminated nearly all of our break session.  Lessig pulled out all the stops, including one-word-at-a-time slides running many hundreds of slides long and the usual misleading and inflammatory statement that “Verizon blocked text messages” when no text messages were ever blocked.  Verizon merely had a one-day bureaucratic snafu in approving a 5-digit short code phone number for the abortion rights group NARAL, and they quickly apologized, but people like Larry Lessig and Tim Wu continue to mislead the public into thinking text messages were blocked.  Lessig (Stanford professor), Harold Feld from the Media Access Project, Rob Topolski (software quality assurance engineer), and Barbara van Schewick (Stanford professor) went on forever during the panel while Brett Glass and I got cut off.

Because technology isn't just for geeks