Microsoft: A Falling Empire or a Steady Competitor?

The following is a guest post from Erin Laing from CXTEC

Although Microsoft’s net profit for April to June was $4.3bn, 41% higher than a year earlier, when Microsoft spent $1bn fixing faulty Xbox consoles, analysts expect the next three months to be much harder on Microsoft’s earnings. In the current quarter, predicted earnings of 47 to 48 cents per share fall short of the expected 49 cents, and shares tumbled 5.89% to close at $25.86.

According to an interview with Jean-Philippe Courtois, head of all Microsoft business outside of North America, Microsoft has four pillars of growth: a software sector producing operating systems and tools such as Office, a hardware sector creating entertainment devices such as Xbox consoles and peripherals, a web sector building online ecosystems such as Live Mesh, and a server sector focused on producing Microsoft’s Internet Information Server (IIS).

Microsoft describes itself as a software and services firm. It is currently responsible for an ecosystem encompassing hundreds of millions of PCs that connect online through network cables or wireless routers, and it makes money by charging for a license each time a new version of Windows is installed. Microsoft said its online division, which makes most of its money from online advertising, lost over $488 million last quarter. Chief financial officer Chris Liddell believes this area is feeling the direct impact of the economic slowdown. The loss may also reflect the cost of the time spent trying to buy Yahoo; Microsoft made its latest attempt via a joint offer with investor Carl Icahn last weekend. The back and forth between the two giants is very distracting for investors, employees, and even advertisers, according to analyst Sid Parakh.

The firm has also been dogged by repeated accusations that Vista is a failure. Mr. Courtois admits that mistakes were made at the launch but also points out that over 180 million licenses have been sold, ahead of sales of XP at the same point in that product’s life cycle.

Although Microsoft has slumped in areas such as search and advertising, it plans to look into specialized and niche search in the future as it tries to compete with Google in the core search market. Health and education are two other very powerful markets being investigated by Microsoft, as they are areas of increasing priority for most governments around the world.

S3 sleep state mystery

A while back, I put my PC on a KVM. The moment I did that, its ability to go into the S3 sleep state (I use the hybrid mode, which does the hibernate trick too) stopped working. Searching around the Web, it looked like a common problem with this KVM. No worries; I accepted it for what it was. Today, I replaced a bad hard drive in the system and did not touch anything else. Mysteriously, the S3 sleep state works again. Odd, eh?

J.Ja

Misery: Team Foundation Server Dual-Server Install

Last week I wrapped up the installation of our new Team Foundation Server 2008 setup. I had previously installed it in a single-server architecture, but we decided to go to the dual-server configuration. Why? Because I am trying to consolidate everything by purpose into different VMs on our new app server. This means that I have one VM for SQL Server, another one for SharePoint and other collaborative applications, and so on. Installing TFS into this setup was an incredible headache.

For one thing, TFS requires 32-bit Windows. Since this is TFS 2008, and since so many of the 2007 and 2008 Microsoft server products (Exchange, for example) require 64-bit Windows, I think this is going to be a real problem, especially for small shops. So now I have a 32-bit Windows 2008 VM just for TFS.

The installation took forever. I went through all of the checklists, but there was always something wrong. Oddly enough, the #1 offender was SQL Server Reporting Services. The TFS installer is supposed to configure the unconfigured SSRS install, but the default install is broken. I had to delete the encryption keys (for whatever reason) to make it work, which could bite me in the rear down the road.

I then had further problems installing the SharePoint extensions on my 64-bit SharePoint install. Apparently, the TFS disc doesn’t ship with a 64-bit version of the extensions, and the installer doesn’t tell you that; it just throws a dumb, useless error message. Luckily, Microsoft released a 64-bit version a few months ago.

Overall, I must say that this was one of the most miserable installations of a Microsoft product that I have ever done. I know that TFS is supposed to be “enterprise class”, but many of Microsoft’s other “enterprise class” products are smooth sailing. I can’t see what makes TFS so special that it can’t be an easier installation. If Microsoft wants more shops to use it, they need to make it easy to deploy. No one can evaluate it in its current state; it requires too much work.

J.Ja

Inspirational and heartwarming last lecture

Like most people, I didn’t know Professor Randy Pausch and had never heard of the man, but I saw his last lecture (via Slashdot) on YouTube, and I wish I had known him. If you haven’t seen it yet, I highly recommend it because it’s heartwarming and inspirational.

Professor Pausch passed away today from pancreatic cancer. While his life was cut short, it’s clear that he lived a very happy and fulfilling life. I’m sure all of his friends and family will miss him greatly, and my heart goes out to his family.

Seeing this lecture makes me appreciate my own life and inspires me to strive, and I’m sure I’m not alone.

My next sub-notebook will likely be a Dell E Slim+

Dell will be launching a new family of Intel Atom “Netbooks” called the Dell “E”. It’s another entry into the cheap tiny laptop market, joining companies like Asus, MSI, and Acer. But Dell’s latest entry actually goes outside the traditional Netbook market with a 12.1″ display, whereas the category has traditionally (as if the market is that old) used 10″ or smaller displays.

I realize that some of you might think I made a typo in my headline by calling it a sub-notebook, but it was deliberate. For me, once the display size gets into the 10+ inch arena, it’s a sub-notebook replacement. I realize this probably scares the hell out of Intel and other computer makers, because it cannibalizes the high-margin sub-notebook market, but I’m just being honest about it. I think a lot of people who don’t have money burning a hole through their pockets probably feel the same way I do.

For a simple office productivity computer, it’s just not going to kill me to have a 1.6 GHz Atom processor versus a high-end sub-notebook with a dual-core 1.3 GHz Core 2 Duo processor. I realize the latter processor is probably 50 to 80 percent faster in processing power, but it’s not enough of a difference to make me want to spend over $2000 on a sub-notebook when these new Netbooks cost a quarter as much.

This had Intel so concerned that CEO Paul Otellini said that the Atom isn’t something most of us will use. Sony executives were equally concerned when they called this a “race to the bottom” (of margins), and I think they were right to be concerned. There will always be people willing to pay for premium products because they have the money or their business has the money to spend, but I suspect a lot more people will be looking at these Netbooks as an alternative to pricey sub-notebooks.

Here are some specs on the Dell E Slim and a more detailed comparison of all the Dell E models, including the 8.9″ LCD versions. Note that Dell has only stated that the entry-level model will be priced at $299, so it’s anyone’s guess how much more the other models will be. I’m hoping Dell will be aggressive and sell the highest-end model in the $500 range. Barring horrific reviews revealing some serious flaw in the product, this will likely be my new “sub-notebook”.

Here are the specifications for the highest end 12.1″ Dell E Slim+ model.

  • 12.1″ WXGA (probably means 1366×768 resolution)
  • 1.6 GHz Intel Silverthorne Atom processor
  • 2 GB DDR2 RAM
  • 60 GB 1.8″ Hard Drive
  • 802.11g and Bluetooth
  • Camera
  • Linux (trimmed down fast booting)
  • BYO (Bring Your Own) Windows XP
  • Not sure if Vista Drivers are available

I’m not sure whether the 12.1″ model has an SDHC flash card reader yet, but it’s not a showstopper for me.

Windows Search 4.0 for XP, Vista, and Server 2008

Betanews has a good article about Windows Search 4.0, which is now listed as “important” in Windows Update for Vista. You can manually download a copy here for Windows Vista and Server 2008, and you can download Search 4.0 for Windows XP. Couple this with Microsoft’s free enterprise search, and you have a complete search solution for your whole business or organization at little or no additional cost.

Fairness is the ultimate end-game in network management

Update 8/12/2008 – Robb Topolski has graciously apologized for the personal attacks.

I have to admit that I was surprised at how quickly I was attacked over my latest FCC filing, “Guide to protocol agnostic network management schemes”. Within hours of my filing, Robb Topolski, who now works for Public Knowledge, wrote an attack blog, “George Ou: Protocol Agnostic doesn’t mean Protocol Agnostic”, which, along with a series of ad hominem attacks, basically accuses me of being hypocritical. I won’t respond to the senseless personal attacks, but I am going to rebut the core accusation that favoring protocol protection in the context of “Protocol Agnostic” network management schemes is somehow hypocritical.

Before we debate the meaning and context of the word “agnostic”, we must first ask ourselves what underlying problem we are trying to solve. The answer is that we want a broadband network management system that distributes bandwidth more equitably between users in the same service/price tier, and we want a broadband network that ensures the best possible experience for every protocol and its associated applications.

Network architect Richard Bennett put it best when he described network management as a two-phase process. In a recent blog post, he wrote:

An ideal residential Internet access system needs to be managed in two different but equally important phases:

1) Allocate bandwidth fairly among competing accounts; and then

2) Prioritize streams within each account according to application requirements.

Phase 1 keeps you from being swamped by your neighbor, and keeps you from swamping him, and Phase 2 prevents your VoIP session from being swamped by your BitTorrent session.
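
To make those two phases concrete, here is a rough, hypothetical Python sketch of a scheduler that applies them in order: it first picks the account that has been served the least (fairness among accounts), then sends that account’s highest-priority packet first (application requirements within the account). The account names, traffic classes, and numbers are illustrative assumptions, not a description of any real ISP’s gear.

    # Hypothetical two-phase scheduler sketch: fair among accounts first,
    # priority-aware within each account second. Illustrative only.
    from collections import defaultdict

    PRIORITY = {"voip": 0, "gaming": 1, "web": 2, "p2p": 3}  # lower number = sent sooner

    class TwoPhaseScheduler:
        def __init__(self):
            self.queues = defaultdict(list)       # account -> [(traffic_class, packet_bytes)]
            self.bytes_served = defaultdict(int)  # account -> bytes sent so far

        def enqueue(self, account, traffic_class, packet_bytes):
            self.queues[account].append((traffic_class, packet_bytes))

        def next_packet(self):
            # Phase 1: pick the backlogged account that has been served the least.
            backlogged = [a for a, q in self.queues.items() if q]
            if not backlogged:
                return None
            account = min(backlogged, key=lambda a: self.bytes_served[a])
            # Phase 2: within that account, send the highest-priority packet first.
            queue = self.queues[account]
            index = min(range(len(queue)), key=lambda i: PRIORITY[queue[i][0]])
            traffic_class, size = queue.pop(index)
            self.bytes_served[account] += size
            return account, traffic_class, size

    sched = TwoPhaseScheduler()
    sched.enqueue("alice", "p2p", 1500)
    sched.enqueue("alice", "voip", 200)
    sched.enqueue("bob", "web", 1200)
    print(sched.next_packet())  # -> ('alice', 'voip', 200): VoIP beats P2P within the account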

So to address the first problem, we need a way to measure bandwidth consumption between these “competing accounts”. The common solution on the marketplace today computes bandwidth consumption implicitly and indirectly from the protocols passing through the network. This method of looking at the protocol to determine bandwidth consumption was initially chosen because it is computationally simpler and cheaper than trying to track usage statistics for tens of thousands of simultaneous application flows in real time. While this solution is mostly accurate, it isn’t completely accurate.

As the identification of protocols becomes increasingly difficult because of protocol obfuscation techniques, and as the cost of computational power declines, we’re starting to see a shift toward network management systems that don’t use protocol analysis to determine which users consume the most bandwidth. These new systems compute bandwidth distribution by tracking the bandwidth consumption of every flow on the network without looking at what protocols are passing through the system, and these are the systems we refer to as “Protocol Agnostic”. But being “agnostic” strictly applies to this first phase of network management.
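
As a rough illustration of this first, agnostic phase, the Python sketch below tallies bytes per account from flow samples alone, never looking at what protocol those bytes belong to; the account names and byte counts are made up for the example.

    # Protocol-agnostic accounting sketch: tally consumption per account from
    # (account, bytes) flow samples only; no protocol inspection whatsoever.
    from collections import Counter

    usage = Counter()

    def record_flow_sample(account, byte_count):
        """Add a flow sample's byte count to its account's running total."""
        usage[account] += byte_count

    # Made-up samples: the system never knows (or cares) whether these bytes
    # were P2P, VoIP, or web traffic.
    samples = [("acct-101", 50_000_000), ("acct-102", 600_000), ("acct-101", 75_000_000)]
    for account, nbytes in samples:
        record_flow_sample(account, nbytes)

    # During congestion, the heaviest consumers in the tier are deprioritized first.
    for account, total in usage.most_common():
        print(account, total, "bytes this interval")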

The second phase of network management is to ensure that every protocol works well. This requires knowing which protocols are passing through the system so that each can be protected from the other protocols sharing the same network. Since protocols and their applications have unique network requirements, we must identify the protocols in use in order to give them the necessary protections. It would be foolish to apply the concept of agnosticism to this phase of management, because doing so would make the phase worthless.
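
For the second phase, a simplistic and purely hypothetical classifier might look only at packet headers, for example well-known ports and packet sizes, to decide which treatment class a flow belongs to. Real equipment identifies protocols far more robustly; the sketch just shows the idea.

    # Simplistic phase-two classifier sketch: guess a treatment class from
    # packet headers only. The port list and size heuristic are assumptions
    # for illustration; real gear identifies protocols far more robustly.
    WELL_KNOWN_PORTS = {
        5060: "voip",   # SIP signaling
        3478: "voip",   # STUN, commonly used by VoIP/real-time apps
        6881: "bulk",   # a port traditionally associated with BitTorrent
    }

    def classify(dst_port, payload_size):
        traffic_class = WELL_KNOWN_PORTS.get(dst_port, "default")
        # Small, steady packets are typical of real-time traffic; this is a
        # header-level hint, not a scan of the user's data.
        if traffic_class == "default" and payload_size <= 300:
            traffic_class = "interactive"
        return traffic_class

    print(classify(5060, 172))   # -> voip: gets latency protection
    print(classify(6881, 1460))  # -> bulk: yields to real-time flows under load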

The reality is that a network can’t simply be a dumb pipe if we want every protocol and application to work well, and no one has ever claimed that “protocol agnostic” should mean a dumb pipe. P2P applications not only grab nearly every bit of available bandwidth regardless of how much capacity there is; they can also cause significant problems for real-time applications like VoIP or online gaming even when P2P uses only a small fraction of the network. I have published experimental data showing that even low-throughput, mild usage of P2P applications can cause significant spikes in network latency and jitter (latency and jitter explained here), which can severely degrade the quality of real-time applications. Even without any knowledge of how networks operate, anyone who has ever tried to play an online FPS (First Person Shooter) game or use a VoIP phone while a P2P application runs on the same broadband connection understands how miserable the experience is. In fact, this is a common source of contention between roommates and family members.

This is precisely why Comcast needs to protect VoIP traffic from service providers like Vonage. Even though Comcast has nothing to gain, because its own telephony service runs outside of the broadband network on a different frequency, these additional efforts to ensure proper VoIP operation are actually helping a competing telephony service. In theory, Comcast could just leave this issue alone and let Vonage and other competing VoIP services suffer while its own telephony service remains immune. In practice, many of Comcast’s customers will blame Comcast if they have any problems once the new system is in place, and Comcast needs to do everything it can to make its customers happy.

Robb Topolski claims on my behalf that these problems are somehow caused by Comcast when he says that “George has shown that Comcast’s proposed Protocol Agnostic scheme has unacceptable side effects”. Aside from the fact that I’ve said and shown nothing of the sort, it’s ridiculous to paint these noble and reasonable efforts as something sinister. The Free Press has used similar fear-mongering tactics to discredit these new network management efforts by Comcast, questioning why Vonage needs these special deals with Comcast in order for its traffic to go through and insinuating that this is some sort of protection racket. The fact of the matter is that Comcast not only doesn’t block Vonage; it is trying to prevent Vonage from being knocked out by self-inflicted damage whenever the user runs aggressive protocols like P2P simultaneously with VoIP.

But these advocates of strict Net Neutrality and the “dumb” Internet continue to shout that providing protocol protection for VoIP or any other real-time application is tantamount to “discrimination” against P2P protocols. They’ll often ask, “why can’t you give P2P the same protection?”, but this question is silly when you look at the reality of the situation. P2P applications don’t need this kind of protection, and they’re already thriving, moving data roughly 30 times faster than the VoIP application on a 3 Mbps broadband connection.

If we take an objective look at the situation, the P2P application is already consuming over 96% of the network capacity, so would it be wise to mandate a dumb network that allowed the P2P application to consume 98% of the network and completely break the VoIP application? If the network could only fairly allocate 2 Mbps of bandwidth to its broadband customers during peak usage hours, would it be so wrong to expect the P2P application to drop down to 95% of the broadband link while the VoIP application remains at the same minuscule 87 Kbps bitrate? I guess according to Robb Topolski, this is somehow unfair, because the P2P application is forced to slow down while the VoIP application doesn’t slow down.
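
For anyone who wants to check the figures cited in the last two paragraphs, here is the rough arithmetic, assuming 3 Mbps ≈ 3,000 Kbps and the 87 Kbps VoIP bitrate mentioned above:

    # Rough arithmetic behind the percentages above (approximate figures).
    link_kbps = 3000   # a 3 Mbps broadband connection
    voip_kbps = 87     # the VoIP bitrate cited above
    fair_kbps = 2000   # a fair 2 Mbps per-user allocation during peak hours

    p2p_share_full = (link_kbps - voip_kbps) / link_kbps
    p2p_share_fair = (fair_kbps - voip_kbps) / fair_kbps
    speed_ratio = (link_kbps - voip_kbps) / voip_kbps

    print(f"P2P share of the full 3 Mbps link: {p2p_share_full:.1%}")  # ~97.1%, i.e. "over 96%"
    print(f"P2P share of a 2 Mbps allocation:  {p2p_share_fair:.1%}")  # ~95.7%, the "95%" figure
    print(f"P2P throughput vs. VoIP:           {speed_ratio:.0f}x")    # ~33x, i.e. about 30 times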

If this is the kind of petty game we’re playing, we could simply have a network management scheme that allocates “equal” amounts of bandwidth to P2P and VoIP whenever both protocols are active, which means both protocols would operate at 87 Kbps, and we could call that “equality”. Obviously no one would want that kind of system, and there’s no need to force this level of equality, so the P2P application is allowed to consume whatever bandwidth is left over from the VoIP application. But taking what’s left over is beneficial to the P2P application, because the leftovers are at least an order of magnitude more than what the VoIP application gets.

Robb Topolski then pulls out the DPI (Deep Packet Inspection) bogeyman and claims that any sort of protocol-aware system would violate user privacy by snooping beyond the packet headers and looking into the user data. That’s nonsense on multiple levels, because a system wouldn’t necessarily need to look at user data to determine the protocol being used, and even if it did, there’s nothing wrong with an automated system that parses through user data. In fact, millions of emails a day are parsed by anti-spam systems to prevent your inbox from being inundated with spam, but we don’t consider that an invasion of privacy because of the context and scope of the system.

But a smart network management system could simply look at the protocol headers, without parsing any of the user data, to determine what protocol is being used, because it doesn’t have to fear protocol masquerading techniques. This is because the protocol-agnostic part of the system already tracks usage statistics, which allows the network operator to implement a priority budgeting system in which everyone gets a certain amount of priority delivery bandwidth. So, for example, if everyone had a daily budget of 12 hours of priority VoIP service, or 12 hours of priority online gaming service, or 3 hours of priority video conferencing, they would be free to squander that budget in 30 minutes with a burst of high-bandwidth P2P usage masquerading as VoIP traffic. After the daily allowance is used up, all traffic would still go through but would be treated as normal-priority data.
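
Here is a minimal sketch of how such a priority budget might be enforced, using the 87 Kbps VoIP bitrate and 12-hour allowance from the example above; the numbers and class names are illustrative assumptions, not a proposal for any particular ISP.

    # Priority-budget sketch: each subscriber gets a daily allowance of
    # priority-delivered bytes; once it is spent, traffic still flows but at
    # normal priority. All numbers are illustrative assumptions.
    VOIP_KBPS = 87
    DAILY_PRIORITY_BUDGET = VOIP_KBPS * 1000 // 8 * 3600 * 12  # ~12 hours of VoIP, in bytes

    class PriorityBudget:
        def __init__(self, daily_bytes=DAILY_PRIORITY_BUDGET):
            self.remaining = daily_bytes

        def classify(self, packet_bytes, wants_priority):
            """Return 'priority' while the allowance lasts, otherwise 'normal'."""
            if wants_priority and self.remaining >= packet_bytes:
                self.remaining -= packet_bytes
                return "priority"
            return "normal"  # still delivered, just not ahead of the queue

    budget = PriorityBudget()
    print(budget.classify(200, wants_priority=True))  # 'priority'
    # A burst of high-bandwidth traffic masquerading as VoIP simply burns the
    # allowance early; after that, everything is treated as normal priority.
    budget.remaining = 0
    print(budget.classify(200, wants_priority=True))  # 'normal'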

Chief BT researcher Bob Briscoe, who is leading an effort to reform TCP congestion control at the IETF, has explained this concept of saving up for priority quite nicely, and network architect Richard Bennett has also spoken positively about this type of system. Priority budget schemes would actually help solve the fundamental fairness problem in which some users consume hundreds of times more volume than the majority of users even though everyone pays the same price. The network would essentially give the low-usage, real-time application customers a responsive delivery service while the high-usage P2P customers would get all the volume they want, and everyone would get what they want.

When we focus on the ultimate endgame of fairness, it isn’t hypocritical to advocate equitable distribution of bandwidth through protocol agnostic means while simultaneously advocating protocol-specific prioritization schemes that ensure the best possible experience for all people and all protocols.  These two goals are completely separate and completely honorable.

Slow Web throughput on ISA Server 2006: SOLVED

Ever since I put up our ISA Server 2006 deployment, we had noticed that Web access through it was incredibly slow. When we connected test machines directly to the Internet connection (the FiOS line that I’ve mentioned before), it was blazing fast. But outbound Web access through the ISA Server was slow, slow, slow. Well, I fixed it!

When I started troubleshooting it, I first looked to the Event Viewer for failures. Other than a few minor items about DNS, I didn’t see anything in there at all. We tried all of the usual diagnostics, and it was clear that network latency was not the problem. I also tried a good number of other tests, including DNS lookups, all of which looked good.

Performance monitoring showed that the local SQL Server installation (for catching the logs) was hoovering up RAM. So I got into SQL Server and capped its memory usage at 512 MB of RAM. We saw that the overall RAM consumption was down; the box now ran with plenty of free memory, but Web access was still slow. Outgoing access for all other protocols was super fast, and incoming traffic was super fast. What gives?

Next, I decided to take a look at the DNS problems in Event Viewer. You can never be too careful, and who wants to have errors anyway? I fixed up the DNS problems, and some sites seemed to improve, but overall access was still quite slow. The trend now seemed to be that frequently accessed sites (based on hostname) were speedy, but everything else was slow. This deepened my suspicions of DNS problems.

At this point, I had nowhere else to turn. Monitoring and performance logging told me what I already knew: everything was fine except the retrieval of non-cached Web pages. I was typing up an email about the problem, enumerating all of the potential points of slowness and why I could or could not rule them out. Everything could be ruled out easily, except for the DNS situation. Who knows what the internal DNS stack is really doing, especially when the ISA Server is its own DNS server?

So I decided to take yet another look at the DNS configuration. The configuration of the local DNS server was perfect: no problems found, no errors in Event Viewer. On a whim, I checked the DNS entries on all of the NICs. The only entry of note was that the LAN NIC had an alternate DNS server configured, pointing to a machine that was no longer in use. That entry was put in as a “fallback” when the server was first installed, and it pointed to the previous firewall, in case the previous firewall had a DNS entry that we had not yet moved to the new DNS nameservers.

But how much trouble could this cause, right? After all, the lookup should be occurring on the WAN NIC, not the LAN NIC, and this is the alternate DNS server, which should never be hit anyway! Well, I removed the entry just for correctness, and BAM. Problem solved.

Microsoft, sometimes you baffle me. Why in the world would the alternate DNS server for the LAN NIC affect performance on the WAN NIC, especially when the primary DNS server for all NICs is localhost, and localhost’s DNS server forwards to the WAN ISP’s DNS servers when needed? And why would this only affect Web access, and not FTP, SMTP, and so on? Regardless, if you are seeing insanely slow Web throughput on your ISA Server 2006 install, check your DNS subsystem thoroughly, as a whole, before giving up.
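
If you want a quick way to rule the DNS layer in or out, here is a rough Python sketch that fires a bare-bones DNS query at each server configured on any of your NICs and times the response; a dead “fallback” entry like mine shows up immediately as a timeout. The server addresses below are placeholders, not my actual configuration.

    # Rough diagnostic sketch: time a minimal DNS query against each configured
    # server. A decommissioned "fallback" entry shows up as a timeout.
    import socket
    import struct
    import time

    def build_query(hostname, query_id=0x1234):
        # Header: ID, flags (standard query, recursion desired), 1 question.
        header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
        # Question: length-prefixed labels, then QTYPE=A (1) and QCLASS=IN (1).
        qname = b"".join(bytes([len(part)]) + part.encode() for part in hostname.split("."))
        return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

    def time_dns_server(server_ip, hostname="www.example.com", timeout=2.0):
        """Return lookup latency in seconds, or None if the server never answers."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        start = time.time()
        try:
            sock.sendto(build_query(hostname), (server_ip, 53))
            sock.recvfrom(512)
            return time.time() - start
        except socket.timeout:
            return None
        finally:
            sock.close()

    # Placeholder addresses: list every DNS server configured on every NIC,
    # including any "fallback" entries that may point at retired machines.
    for server in ["127.0.0.1", "192.168.1.250"]:
        latency = time_dns_server(server)
        print(server, "no response" if latency is None else "%.0f ms" % (latency * 1000))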

J.Ja

Google Video page getting hijacked?

This link to Google Video (WARNING – NSFW Not Safe For Work images) seems like it’s been hijacked in some way by a foreign-language site. I accidentally came across it when I was looking at some LCD monitor review videos on Google Video. I somehow doubt that Google approves of the content and the avatar image, and I sent in a complaint to Google, but the page and profile haven’t been removed yet. I’m not sure what to make of it or what, if any, exploits are involved.