Category Archives: IT

Information Technology

Google Android 6-stage update process

So I bought a new HTC Nexus One (brown with US warranty) last week, and it came with a custom Vodafone UK ROM with firmware version 2.16.405.1 CL223106 release-keys.  Unfortunately, this particular firmware prohibits OTA updates and even manual updates, and it was a nightmare trying to track down the problem.  Luckily I stumbled upon this user comment on Amazon’s website, which led me to this page explaining the upgrade process: a five-stage process to get to Android version 2.3.3, which then allows you to run the 2.3.4 update.

So to summarize, the upgrade process goes something like this, with each stage taking about 5-30 minutes (depending on download time):

  • Downgrade to 2.2 build FRG33 using the passimg.zip method
  • Upgrade to 2.2.1 build FRG83
  • Upgrade to 2.2.1 build FRG83D
  • Upgrade to 2.2.2 build FRG83G
  • Upgrade to 2.3.3 build GRI40
  • Upgrade to 2.3.4 (Google announcement here)

With an upgrade procedure this onerous, it’s no wonder so few devices are running newer versions of the Android operating system.  The result is an immense level of Android fragmentation, leaving 99% of devices vulnerable to a serious security flaw in the ClientLogin API.  ClientLogin was apparently designed without any encryption, so AuthTokens are transmitted in the clear.

The market share for non-vulnerable versions of the Android OS might be a little better than 1% now, but not much better, according to Google’s statistics.

Image credit: Google

MacBook Air 2011 Model Launch Imminent

Normally I wouldn’t go out of my way to put forth a baseless prediction, but while browsing the prices I did notice that the refurbished models of the MacBook Air have all dropped by about $20; previously, the entry-level MacBook Air model was listed at $849 with a 15% discount. I have watched the refurbished store in the past and noticed that shortly before a product launch, the prices for a particular product would drop, with its successor released shortly thereafter. This leads me to believe that Apple will probably launch a new MacBook Air right around the time of WWDC. The only other explanation I have for Apple reducing the prices of its refurbished MacBook Airs would be that the units simply aren’t moving from the refurbished store, which is something I have yet to witness from Apple.

The only reason I am even bringing this up on this site is that, after a quick Google search, I haven’t seen anyone else make this observation and thought I would try to be the first to call this prediction.  A number of other sites have predicted the next MacBook Air will be released around June/July with a Sandy Bridge processor and a Thunderbolt interface.  Most likely this will include the integrated Intel HD 3000 graphics chipset, which should bring a significant boost in CPU performance while remaining inferior in 3D graphics.  Then again, who buys a MacBook Air for gaming or graphics editing?

UPDATE: Well, apparently I was wrong about the exact release date. Hopefully Apple will refresh the model sooner rather than later.

So a SQL Server Transaction Log ate Your Free Space.

This weekend I came across an unusual circumstance that I thought I would share with the many part-time SQL Server admins out there. I currently maintain more than a couple of SQL Servers. Because SQL Server has a good built-in maintenance plan feature, I don’t spend money on third-party backup software. Instead, I set up a maintenance plan to create a backup every 6 hours and then push the file to a network share. For one reason or another, the network share became detached and the backups filled up the local data volume, which effectively shut down the server. I cleared up the space, restored the mapping, and didn’t think much more about the problem. I noticed that I was getting a backup file for each database but failed to pay attention to the transaction logs.
This is where the new problem that consumed my weekend started. Friday night at 7pm I got another phone call about the SQL Server being out of disk space. Again I had no space on the volume, but this time the space wasn’t consumed by the backups. Instead, the transaction log, which is normally a couple of gigabytes, had ballooned to 100 GB. I attached an external USB drive to hold a backup of the transaction log and tried to shrink the log from SQL Server Management Studio. This only gave me about 3 GB of storage back, which was quickly consumed as soon as the end users started using their application again. I then kicked off a backup of the database and then of the transaction log. I now had 99% of the space in the transaction log file free, but I still could not shrink it. I fought and fought with the database trying to get that free space back.

Finally, at about 2am and running out of ideas, I deleted the transaction log file and started the database up again, which effectively locked a lot of people out of the database. Having migrated the database before, and knowing that a simple restore could easily fix the problem, I took the most recent backup (one actually taken after the end users were cut off from the server) and restored the database. After the restore, I again had the same problem of a database with a 100 GB transaction log file. This time, however, I threw caution to the wind and performed yet another shrink on the transaction log file. Finally, I freed up 75% of the space on the volume, which allowed everything to return to normal.
Why I had to back up and restore the database before I could perform an effective shrink, I do not know. If this has happened to other people, I would like to know the reason behind it.
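My best guess, after the fact, is that SQL Server will only shrink a log file back to the last active virtual log file, so until the active portion of the log wrapped back toward the front of the file (which the extra log backup and the restore seem to have accomplished), the shrink had nothing to reclaim. For anyone who would rather script the backup-then-shrink sequence than click through Management Studio, here is a rough sketch in Python with pyodbc; the database name, logical log file name, backup path, and target size are placeholders for your environment, not what I actually ran.

```python
# Rough sketch: back up the transaction log, then shrink the log file.
# All names, paths, and sizes below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP and DBCC cannot run inside an implicit transaction
)
cur = conn.cursor()

# Back up the tail of the log so the inactive portion becomes reusable.
cur.execute("BACKUP LOG [MyAppDb] TO DISK = N'E:\\Backups\\MyAppDb_log.trn'")
while cur.nextset():  # drain the informational result sets so the backup finishes
    pass

# Then try to shrink the log file down to a sane target size (in MB).
cur.execute("USE [MyAppDb]; DBCC SHRINKFILE (N'MyAppDb_log', 2048);")
while cur.nextset():
    pass

conn.close()
```

If the shrink gives back almost nothing, running the backup/shrink pair again after some activity is usually what it takes, which matches what I saw this weekend.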
My corrective actions include scripting a compression step for the backups to reduce their size.  I also plan on creating an alert to notify me by email when disk space is low; 20% free is one of my favorite guidelines as far as that is concerned, and a rough sketch of what I have in mind is below. I am also considering a script that remaps the network volume before the backup files are moved over, so that a detached share can’t quietly leave the backups piling up on the local volume again.  I don’t love compression, because having to decompress a file before restoring it adds to the already lengthy process of getting the database back to working order.  Then again, having a few extra copies of the database around is also handy.
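Here is the kind of check I have in mind, sketched in Python; the volume, share path, threshold, and mail settings are placeholders and not something I have in production yet:

```python
# Sketch of a low-disk-space / missing-share check that emails a warning.
# Volume, share path, threshold, and SMTP details are placeholders.
import os
import shutil
import smtplib
from email.message import EmailMessage

DATA_VOLUME = "D:\\"                          # volume holding the data and backups
BACKUP_SHARE = r"\\backupserver\sqlbackups"   # network share the backups get pushed to
FREE_THRESHOLD = 0.20                         # warn when less than 20% free

problems = []

usage = shutil.disk_usage(DATA_VOLUME)
if usage.free / usage.total < FREE_THRESHOLD:
    problems.append(f"{DATA_VOLUME} is below {FREE_THRESHOLD:.0%} free space")

if not os.path.isdir(BACKUP_SHARE):
    problems.append(f"Backup share {BACKUP_SHARE} is not reachable")

if problems:
    msg = EmailMessage()
    msg["Subject"] = "SQL backup volume warning"
    msg["From"] = "sqlalerts@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content("\n".join(problems))
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)
```

Scheduled with Task Scheduler every few minutes, a detached share or a runaway log should get noticed long before the volume actually fills.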

I am open to other input. I thought I would share my wonderful late-night experience with others in the hope of getting some improvements, or perhaps helping out other admins who might run into the same problem.

Solution for an empty “Network Connections” in Windows

Yesterday, I had to do some work on our Forefront Threat Management Gateway machine. When I brought up the TMG console, it gave me a strange error: “Refresh failed” with an error code of 0x80004005. It was inexplicable. A few days earlier, we noticed that the “Network Connections” in control panel showed no connections at all, but ipconfig showed them as expected. I ended up placing a call to Microsoft support. They suspected that the TMG console issue was caused by the inability to enumerate the network connections, and I was inclined to agree. Their specialist for these things said that there’s a registry key which sometimes gets corrupted, and you can delete it and reboot the server to fix the issue. After carefully reviewing to ensure that nothing else was the issue, that’s just what we did. After the reboot, the network connections showed, and the TMG console issues were solved as well. To do this fix yourself (the usual disclaimer: back up your registry before editing!), look up the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Network and delete the “Config” value.
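If you would rather script that change than click through regedit (again, back up the registry first, and run it elevated), a minimal sketch with Python’s built-in winreg module might look like this:

```python
# Minimal sketch: delete the "Config" value under the Network key, then reboot.
# Run from an elevated prompt on the affected server; back up the registry first.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Network"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    try:
        winreg.DeleteValue(key, "Config")
        print("Deleted the Config value; reboot so Windows rebuilds it.")
    except FileNotFoundError:
        print("No Config value found; nothing to delete.")
```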

J.Ja

Novell’s Patents and Why CPTN Holdings Wants Them.

The web was abuzz earlier today with news that Microsoft wasn’t the only company involved in CPTN Holdings, and some, including ZDNet and Computerworld blogger Steven J. Vaughan-Nichols, tried to speculate about just which patents each of the member companies of CPTN Holdings would want and why.
First, many thought that VMware would jump at the opportunity to get an OS to complete its stack, but as I found out talking to a few PR employees from VMware, they pretty much already have everything they need from Novell, or so they say. Of course, VMware is owned by EMC, which is a partner in CPTN Holdings, so the VMware reps didn’t exactly give me the full picture.  After a quick search I did happen to find patent number 7,793,101, “Verifiable virtualized storage port assignments for virtual machines.” I think I can see why VMware might want a crack at that patent portfolio now. I noticed that there are several more storage-based and virtualization-based patents for VMware and its parent company EMC to hand-pick through. Keep in mind that Microsoft competes in the virtualization space as well.
In the storage space there are a few other gems including:

  • 7,844,787 Techniques for data replication with snapshot capabilities
  • 7,844,580 Methods and systems for file replication utilizing differences between versions of files
  • 7,809,910 Backup archive management
  • 7,774,568 Clustered snapshots in networks

VMware is also dabbling in identity management, something that Microsoft has also been working on for some time. Oracle and Apple have identity management needs as well and would probably not hesitate to pick up a couple of patents for their own related products.
With identity management we have a whole slew of goodies to pick through, including:

  • 7,770,204 Techniques for securing electronic identities
  • 7,774,826 System and method for determining effective policy profiles in a client-server architecture
  • 7,793,340 Cryptographic binding of authentication schemes
  • 7,793,342 Single sign-on with basic authentication for a transparent proxy
  • 7,774,827 Techniques for providing role-based security with instance-level granularity

All four companies might be interested in improving their application deployment technologies with the following patents:

  • 7,730,480 System and method for creating a pattern installation by cloning software installed on another computer
  • 7,739,681 Delayed application installation
  • 7,730,179 System and method for policy-based registration of client devices

The point I am trying to make is that each of these four companies has much to gain from the capital they put together to get access to these patents.
Many of us know that Microsoft is all about its operating system, its Active Directory architecture, search, and its entry into cloud computing.
EMC is the storage giant, but it also owns VMware, RSA, Atmos, vBlock, Mozy, RecoverPoint, and Documentum, and it has just as much, if not more, to gain than Microsoft.
Oracle, while everyone knows it as a database company, has bought more companies than just about anyone else and can leverage patents ranging from identity management to virtualization. Don’t forget that it owns Sun Microsystems and happens to have VirtualBox.
Lastly we have Apple, which stands out as being worth more than anyone else in this venture while appearing to have the least to gain. However, when it comes to identity management, Apple would be quick to take advantage. Novell has quite a few data synchronization patents that could help out its MobileMe service, and single sign-on could be a big plus as well. Apple doesn’t really seem to have as much to gain from what I can tell, but then again, Apple doesn’t think like most companies. We could see it try to dive into the enterprise with some of these patents, or perhaps push further into the cloud.
All in all, we have four companies that are going to benefit greatly from the jewels of Novell: its patents. And while everyone was too busy worrying about the UNIX copyrights, the patents, which I consider much more important, were handed over while going pretty much unnoticed by the media.
Trying to figure out the direction that Attachmate will take Novell is very scary, especially after it handed out all of the patents like it did. As a Novell customer myself, I am concerned. Then again, who really knows the long-term direction of the tech industry?

The aftermath of this transaction is most interesting. Novell was a real hot potato that no one company wanted entirely. Novell’s market share has been slipping since the ’90s, and its name recognition is even worse. When I was talking to a salesman for a backup software company, he failed to even recognize the name and recommended that I speak to a tech. Yet, much to the dismay of many, when Novell’s patents were up for grabs, these four companies were first in line.  Microsoft, Apple, EMC, and Oracle are bitter enemies on several fronts, and yet they put aside their differences to pick apart this former powerhouse.

Fix for 0x80072f0c error (502.3 – Bad Gateway) for reverse proxy to SSL with IIS

I’ve spent most of a week struggling with this error. I set up IIS to reverse proxy to a backend server using the URL Rewrite module and the Application Request Routing (ARR) module. The first problem I encountered was that when using the “Reverse Proxy” wizard/template under URL Rewrite, it kept blowing up, giving me an error 500. The solution was to first go to “Server Variables” and add “HTTP_ACCEPT_ENCODING” as an allowed server variable. Next, I had to go into the configuration and set HTTP_ACCEPT_ENCODING to be passed to the destination server with an EMPTY value. You can’t do this directly from the configuration screen, because it demands a value, but you can do it in web.config (or anywhere in the configuration chain). I did it by going to the “configuration editor” in IIS Manager to edit the value raw, with no validation.
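For what it’s worth, here is roughly what the end result looks like in the configuration files; the rule name and backend URL are placeholders, and the allowed-variables list is the piece the “Server Variables” UI writes at the server level (applicationHost.config) rather than in the site’s web.config:

```xml
<!-- applicationHost.config (server level): what the "Server Variables" UI adds -->
<system.webServer>
  <rewrite>
    <allowedServerVariables>
      <add name="HTTP_ACCEPT_ENCODING" />
    </allowedServerVariables>
  </rewrite>
</system.webServer>

<!-- Site web.config: the reverse proxy rule, with the variable forced to an empty value -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="ReverseProxyInboundRule1" stopProcessing="true">
        <match url="(.*)" />
        <serverVariables>
          <set name="HTTP_ACCEPT_ENCODING" value="" />
        </serverVariables>
        <action type="Rewrite" url="https://backend.example.com/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```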

The next problem was much trickier. The reverse proxy template was able to handle carrying over SSL just fine to the backend server, but when I tried to access those links, it would blow up, giving me an error 502.3. Turning on detailed error reporting showed me an error code of 0x80072f0c and the text “HTTP Error 502.3 – Bad Gateway”. Full details showed more confusion under “possible causes”:

The CGI application did not return a valid set of HTTP errors.
A server acting as a proxy or gateway was unable to process the request due to an error in a parent gateway.

This made no sense to me at all. After hours of work on this issue, I finally found the problem. The virtual directory on the destination server (the one BEHIND the proxy) had been set to “Accept” client SSL certificates; this needs to be set to “Ignore”. While the site itself was set to “Ignore”, the virtual directory had been created with “Accept”, causing the problems.
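For reference, the “Ignore” versus “Accept” setting ends up as the sslFlags attribute in the backend server’s configuration; roughly, and with a placeholder path:

```xml
<!-- Backend server: the virtual directory sitting behind the proxy (path is a placeholder) -->
<location path="Default Web Site/MyApp">
  <system.webServer>
    <security>
      <!-- "Accept" client certificates corresponds to Ssl, SslNegotiateCert and tripped up ARR; -->
      <!-- "Ignore" is just Ssl, which is what the proxied requests need here. -->
      <access sslFlags="Ssl" />
    </security>
  </system.webServer>
</location>
```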

J.Ja

Bizarre DHCP server error solved

Here’s a solution for one of the oddest errors I’ve ever seen. We had a PC which had been wiped and the OS reinstalled, but it could not get an IP address from DHCP. The error message we got was:

“An error occurred while renewing interface Local Area Connection : The name specified in the network control block (NCB) is in use on a remote adapter.”

So we wiped the OS and tried again (with a different version of Windows this time); same error. We did a bit of digging and found a reservation in DHCP (this is a DHCP server on Windows Server 2003 R2) for the machine’s MAC address but with the PC’s original name. We deleted that reservation and tried again. Same error. We did everything we could to make it “take”… restarted the DHCP service, rebooted the client, etc. Packet captures showed nothing unusual either.

Upon further investigation, we found a second reservation in DHCP which looked odd; it had the same IP address as the first one that we deleted, but an entirely different MAC address and name. We deleted that reservation anyway, and BANG, it worked fine.

J.Ja

The SLA Myth

When I worked for (name of very large managed services vendor), we’d play all sorts of nasty games with SLAs. For example, when I was at the call center, if the SLA said that 85% of calls were answered within 90 seconds, and the call center was significantly exceeding 85%, people would be laid off or transferred to other departments. There were all sorts of calculations in the system that figured out when it was cheaper to miss SLA and pay a penalty than to work hard to meet SLA. For example, if a high-end router cost $30,000 to keep in stock, it made little sense to buy enough of them so that there was always one within a 4-hour trip (day or night) of the customer’s sites when the penalty for blowing the 4-hour SLA was, say, $1,000.

The problem is, the customer sees the SLA as a “guarantee” that the vendor will do everything in its power to meet, with the penalties just there for amusement. Prorated credits or even outright fees are no substitute for the downtime in many cases. The flip side is that the vendor sees the SLA as just one more component in its profit margin calculations. It’s this disparity that causes intense customer frustration and rage, and leaves the vendor’s employees feeling “set up for failure”, since they aren’t “in the know” and are busting their chops to do their best with deliberately limited resources.

SLAs can often contain tricky wording, and if you are not familiar both with contract law and with SLA-specific contract language, you can get burned badly. I was once on the vendor side of negotiating an SLA, and I made sure that it was almost impossible for us to miss it, even though the user experience could be miserable. For example, we wrote the “average time” calculation to include image files, JavaScript files, CSS files, and so on, so that while our backend page might take 15 seconds to be produced (obviously unacceptable), the SLA was still met because all of the static content brought the average down. Scheduled downtime is another killer that many customers do not notice in SLA language. Fewer and fewer companies can afford downtime, even if it is scheduled at 2 AM.
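To put some entirely made-up numbers on that trick, one 15-second dynamic page buried under a pile of fast static requests still produces a perfectly healthy-looking “average response time”:

```python
# Made-up numbers to show how static content can hide a slow backend page in an "average".
page_time = 15.0            # the backend page: 15 seconds to generate (unacceptable)
static_times = [0.05] * 30  # 30 images/CSS/JS requests at roughly 50 ms each

times = [page_time] + static_times
average = sum(times) / len(times)
print(f"Average response time: {average:.2f} s")  # about 0.53 s, comfortably "within SLA"
```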

I’ve seen shady games on both sides of the SLA fence. If you think an SLA is a guarantee, you are wrong. It is merely a contract that says, “if we do not meet these metrics, XYZ will happen.” And “XYZ” is usually not what you and your business need it to be. At the end of the day, you need to rely on picking a quality vendor with solid people, more than on an SLA, to protect yourself in these scenarios.

J.Ja

Terry Childs, network admin convicted.

Update – It seems the same juror, Jason Chilton, who commented on Slashdot is the real deal, and he gave a very compelling account of why Childs was convicted.  Childs had emailed passwords to the COO before, but when he found out that he was being reassigned, he stopped being cooperative.  The next day he even taunted his boss and the COO that they didn’t have access.

“So he knew nobody else could get in, and I think he had the assumption that they would say, ‘We need you back to maintain this network.’ And that obviously did not happen.”

So Childs was refusing to give access to the COO, to whom he had given access in the past when his job wasn’t threatened.  But because he didn’t want to be reassigned, he was now holding the network hostage and refused to grant access despite demands from human resources, the police, his boss, and the Chief Operating Officer.  This whole excuse that there was no formal policy in place is nonsense, because most tasks in the workplace aren’t explicitly spelled out.  If HR, the COO, and the police want you to relinquish custody, you do it unless you want to risk prison, and that should be common sense.

Now, Chilton said in the interview that Childs is a good, trustworthy person and that it was his managers’ fault for giving him so much free rein.  Well, I’m sorry to disagree with Mr. Chilton, but loose management is not an excuse to be a punk.  The fact that Childs had withdrawn $10,000 and left for Nevada the day before his arrest tells me that this man is scum and should never be trusted with any company’s equipment.


The jury has found Terry Childs, a former network engineer for the city of San Francisco, guilty.  Childs had refused to grant the city access to its own Wide Area Network (WAN) and served several months in jail for his refusal to cooperate.  Now, if you’re a network engineer, you might ask why the city didn’t simply perform a password reset/recovery on the equipment, and I’m wondering the same thing.  If I had to guess, they didn’t want to risk losing the configuration of the network, and the easiest way to avoid that was to get Terry Childs to give them the password.

Speaking as a former IT professional, nearly all of us, with the exception of Mr. Childs, have enough common sense to know that when the owner of the system or a boss issues a direct order to grant access to the company system, we do it.  For that matter, if the boss asks for something within their authority (short of something criminal), and especially if they have the blessing of their boss and up, we do it.  We might lodge a formal protest if the thing we’re asked to do would endanger the security of the company, or would cause the company to lose money because the boss is an idiot, but we do it after our formal protest is acknowledged.  The other option, voluntary or not, is that we leave the company.  But if we’re asked to leave the company, we have to hand over the keys.  If we try to hold an IT system hostage, that’s against the law.

Yet despite this common sense, it seems that many in the Slashdot community have rushed to Terry Childs’ defense as some kind of “stick it to the man” cult hero.  But one particular post from someone claiming to be a juror on the case, who actually looks the part judging by his comments, had some interesting things to say, quoted below.  (Note that since this is an informal blog, I didn’t track the man down and verify authenticity like I would have when I was formally a journalist.  Please excuse me for being lazy here.)

“Now that I am able to speak about this case, I can give you my take on the matter as having been a juror on it. Having not been able to read about the case during its duration, I can’t reply to everything that’s been said about it, but I will at least provide my perspective.

This case should have never come to be. Management in the city’s IT organization was terrible. There were no adopted security policies or procedures in place. This was a situation that management allowed to develop until it came to this unfortunate point. They did everything wrong that they possibly could have to create this situation. However, the city was not on trial, but Terry Childs was. And when we went into that jury room, we had very explicit instructions on what laws we were to apply and what definitions we were to follow in applying those laws.

This jury was not made up of incompetent people or idiots. Every single person on there was very educated and well-spoken. I myself am a network engineer with a CCIE and thirteen years experience in the field.

This was not a verdict that we came to lightly. There were very difficult points to overcome in reaching it. We were not allowed to let our emotions or biases determine the matter, because if they could there may have been a different outcome. Quite simply, we followed the law. I personally, and many of the other jurors, felt terrible coming to this verdict. Terry Childs turned his life around and educated himself in the networking field on very complex technologies. One different decision by him, or more effective management by the city could have completely avoided this entire scenario. But those are not factors we could consider as a jury. We applied the law as it was provided to us and our verdict was the unfortunate, but inevitable result.

I’m sure many people posting are of the mindset that he’s not guilty because he shouldn’t reveal the passwords, some policy says this or that, or whatever. You’re entitled to your opinion, but let me tell you that I sat through FIVE MONTHS of testimony, saw over 300 exhibits, and personally wrote over 200 pages of notes. I will guarantee you that no matter what you think of the matter, you do not have the full story, or even 10% of it. I am confident that we reached the correct verdict, whether I like it or not.”

Later he added

“One really important thing to note here is that it wasn’t a concern that he did not provide “his” passwords. The real problem is that he did not provide access — in any form, even in the form of creating new accounts for those requesting it.”

The gentleman also added that he actually agreed with the law and thought that Mr. Childs broke it.  But as far as I’m concerned, if it takes a visit to your jail cell from the mayor of the city, and nearly two weeks, for you to divulge the keys to the city’s equipment, you are legally and morally guilty of obstructing the city, unless we’re talking about the city of Berlin circa 1944.

The thing that many IT people forget is that they don’t own the system.  The owners are the users of the system, who report to their superiors, who ultimately report to the owner of the company.  The IT person is merely the shepherd of the system, and they ultimately have to allow the owner to make mistakes if the owner insists on it.  IT people also forget that without the business, there would be no system to protect in the first place.  Mr. Childs forgot this, and he took ownership of a system that he did not own, against the direct orders of everyone in the chain of command above him.

It would be as if a limo driver refused to hand over the keys of the car to a 19-year-old kid who is prone to fast driving.  Say the kid doesn’t like that driver, so he gets his father to fire the limo driver, but the driver refuses to hand the keys over to the father.  The father then hires a new limo driver, but the original driver refuses to hand the keys to the new limo driver as well.  At that point the original driver has effectively commandeered a car that does not belong to him, which makes him legally and morally wrong.

The driver might have a moral and legal case for refusing the 19-year-old, but he certainly can’t refuse the father, much less the police, even if the father is wrong for spoiling his child.  We simply cannot give employees that kind of power over their employer’s property.  Imagine what every IT person in danger of losing their job would do if Childs had been vindicated by the courts.  So it doesn’t matter if a few Internet geeks cheer him on as someone who “stuck it to the man”.  If they were in a similar situation and I were on the jury, they would be convicted.