Archive

Archive for November, 2008

A “lightweight” Web-based Office makes perfect sense

November 9th, 2008 5 comments

So Sam Diaz over at ZDNet takes Microsoft to task because they have said that their online offerings of Office will be “lightweight”. And then he says (in a nutshell), “What’s Microsoft’s problem? Google has gotten this right already!” The reality is, Google Apps are “lightweight” too. As Steve Ballmer pointed out, they didn’t even have footnotes (they got footnotes a few days after he mentioned it).

I can understand that Microsoft is lagging on getting a version of Office on the Web. But let’s examine reality. Google Apps’ usage rates show that there is not much demand for Web-based office suites at this time, Office still makes a ton of money, and offering a “lightweight” Web Office still puts Microsoft on par with Google. Most importantly, the people who are attracted to a Web-based office suite neither want nor need a “kitchen sink” application; if they did, they would just buy a copy of Microsoft Office or download OpenOffice.

The people who are not buying Office are the casual users: people who need to type up a shopping list every now and then, or maybe use a spreadsheet to balance their checkbooks. Yes, this is a huge percentage of users, probably the bulk of home users. These are the same people who do not need the tight collaboration of Outlook, and for whom Web mail is perfectly suitable. And so on and so on. I am sure that there is some money to be made selling ads on these online applications. Unlike most online apps, an office suite is insanely “sticky”, with users spending many hours in it.

What is lost in the whole thing is an understanding of why people pay for Office in the first place. Businesses buy Office because of Microsoft Exchange. Period. End of story. If you are not using Exchange, all of a sudden Outlook is merely an “adequate” email client, an “adequate” calendaring application, and an “inadequate” contact management system. Word is a great “kitchen sink” word processor, but in reality, the alternatives (Corel WordPerfect, OpenOffice) are perfectly acceptable for probably 90% (or more) of users. Ditto for Excel.

In fact, Office’s “killer app”, as far as I am concerned, is OneNote (if you have not used it, I highly recommend it!). Over the last year, I have been training myself to transition from paper to OneNote; for every activity that I do this with, I am happy, but it is taking me some time. As an example of just how good OneNote is: I used to buy a paper notebook (usually steno pads with perforated sheets) every few months. Now, I buy one every 6 – 9 months. Even more telling, when I print something from OneNote, once I am done with it, I save the paper. Either I flip it over to use as scratch paper (for those tasks that I am not using OneNote for yet), or I print more items from OneNote on the other side of it. Offhand, I would love to have a Windows Mobile device, provided that it had a copy of OneNote that would automatically sync with what I have on my desktop PC. That would blow my mind in terms of usefulness and functionality.

So other than OneNote, I really do not think that Office is particularly great. Overall, it suffers from having too many features and too wide a user base. It just cannot make everyone happy, so no one is particularly happy with it. For the casual user, it is far too complex, even with the Ribbon. For the “power user”, too many tasks require too many clicks or keystrokes (for the power user, WordPerfect 5.1 was the best application on the face of the planet, except for possibly Emacs). So yes, if Microsoft’s online Office is “lightweight”, not only is that not terribly bad, I think it is really good. Remember Microsoft Works? Where it was a disaster was that it used file formats that were not Office’s. But as a lightweight office suite aimed at casual users, it was good. If Office Online replicates that, there is nothing bad about it at all.

J.Ja

Categories: Microsoft software, Software Tags:

AMD submits suboptimal SPECpower benchmarks for Intel

November 9th, 2008 25 comments

Performance benchmarks are the computer industry’s equivalent of the Indianapolis 500, if not more important.  Top benchmark numbers are rarely attainable in the real world, but winning is a critical symbolic victory in the war of perception.  The winner of the benchmarks is perceived to have a technological lead across its entire product line, while the loser is perceived to be inferior across its entire product line.

Benchmarks are like races: the winner is determined by the best performance.  The only fair way to conduct them is to have each competitor put forth its best effort to achieve the highest score possible within a common set of rules and parameters.  By this measure, AMD’s recent submission of SPECpower energy efficiency benchmarks on behalf of Intel servers, results that portray Intel in a suboptimal light while ignoring superior Intel scores, is highly inappropriate.

AMD says that an AMD server gets a score of 731 while the Intel server that AMD submitted on behalf of Intel gets a score of 561.  AMD claims this is a legitimate comparison because the systems share many commonalities and they defend their behavior by saying:

“if we were trying to show worst case scenario we would have turned off Power Management on the Intel server”

But the fact that AMD could have turned in even worse performance numbers for the Intel system is totally irrelevant.  If anything, submitting plausible performance numbers on the Intel system is far more insidious because it is a more effective deceit.

The obvious problem with AMD’s explanation is that the Intel system AMD submitted is a poorly optimized hardware configuration in terms of energy efficiency.  The system uses Fully Buffered DIMMs (FBDIMMs), which are notoriously power-hungry, while the most efficient Intel systems use unbuffered DIMMs.  Intel launched the San Clemente 5100 series chipset exactly one year ago, and it uses the same unbuffered memory as AMD.  While it’s true that Intel initially went with Fully Buffered memory two years ago, that was obviously a mistake given the power consumption, and Intel fixed it last year.  With Nehalem-EP two-socket servers coming at the end of this year, Intel will use only ECC unregistered or registered DIMMs and will completely shun FBDIMMs.  The bottom line is that anyone looking to build an efficient Intel-based server today should use unbuffered memory, and AMD conveniently forgot to do that.

The other possible problem with AMD’s explanation is that they used the “same JVM command line options” for the Server Side Java (SSJ) benchmark.  While this sounds like a fair comparison, it is common for CPUs from different vendors to require different command line options to achieve optimal results.  This may explain why the AMD-submitted Intel server was 5% slower than comparable Intel systems submitted by other vendors.  Combined with the suboptimal Intel hardware and possibly suboptimal software configuration, we can see the likely reason why the AMD-submitted Intel system did so poorly.

To get an accurate picture of what’s really going on, we need to look at the best possible SPECpower scores for AMD and Intel to determine who the actual winner is.  The table below compares the official SPECpower_ssj2008 energy efficiency benchmarks of the top dual-processor servers from Intel and AMD.

System (SPECpower_ssj2008)    Vendor            DIMMs   Peak SSJ ops   Peak watts   Score
Intel L5430 2.66 GHz          PowerLeader       2       285,970        161          1,135
Intel L5420 2.5 GHz           SuperMicro        2       279,209        174          990
Intel L5420 2.5 GHz           HP                2       282,281        189          930
Intel L5430 2.66 GHz          Fujitsu Siemens   4       293,162        220          827
AMD 2384 2.7 GHz              AMD               4       338,577        264          860
AMD 2384 2.7 GHz              AMD               4       335,116        264          827

Because the SPECpower rules do not specify a minimum number of memory DIMMs a server must have, AMD should have submitted servers with 2 DIMMs like everyone else to get the best scores.  Instead, AMD submitted servers with 4 DIMMs, which gives them a handicap of roughly 3.58 watts at idle to 8.42 watts at peak.  However, AMD neutralized that handicap by using the newest Western Digital GreenPower 3.5″ hard drive, which saves 4 watts at idle to 6 watts under load compared to the hard drives used in the other systems.

For the sake of comparison, I estimated the scores of the three Intel systems that used only 2 DIMMs by adding 3.58 watts at idle, rising gradually to 8.42 watts at peak Server Side Java (SSJ) load, to simulate the power consumption of a 4-DIMM configuration.  When these servers get upgraded to 4 DIMMs, they also gain memory performance, which translates to higher overall performance; based on the peak SSJ scores in the table above, I estimated a 2.5% boost in peak SSJ throughput, which slightly counteracts the higher power consumption in terms of performance per unit energy.  Based on this estimate (which should NOT be taken as official SPECpower_ssj2008 scores), the top three systems in the table above would have achieved scores of roughly 1006, 889, and 836, which is still higher than the AMD servers.  But if those top three Intel servers had also used the same energy-efficient hard drives as the AMD servers, the scores revert back to something similar to the unadjusted numbers.
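
For anyone who wants to follow the arithmetic, here is a minimal sketch of the adjustment in Python. It assumes you have the per-load-level ssj_ops and power readings from the full SPECpower_ssj2008 reports; the function and the placeholder numbers below are mine for illustration, not published figures.

    # Rough sketch of the 2-DIMM -> 4-DIMM adjustment described above.
    # Assumptions: the extra DIMM power scales linearly from 3.58 W at idle
    # to 8.42 W at 100% load, and the added memory lifts throughput ~2.5%.

    def adjusted_score(ssj_ops, watts, idle_watts,
                       extra_idle_w=3.58, extra_peak_w=8.42, ssj_boost=1.025):
        """Estimate overall ssj_ops/watt for a 2-DIMM result re-run with 4 DIMMs.

        ssj_ops, watts -- per-load-level readings, ordered 100% load down to 10%
        idle_watts     -- the active idle power reading
        """
        n = len(ssj_ops)
        total_ops, total_watts = 0.0, 0.0
        for i, (ops, w) in enumerate(zip(ssj_ops, watts)):
            load = (n - i) / n                        # 1.0 at 100% load, 0.1 at 10%
            extra = extra_idle_w + (extra_peak_w - extra_idle_w) * load
            total_ops += ops * ssj_boost
            total_watts += w + extra
        total_watts += idle_watts + extra_idle_w      # active idle adds power, no ops
        return total_ops / total_watts

    # PLACEHOLDER load curve loosely shaped like the HP L5420 result above;
    # substitute the real per-load-level numbers from the published report.
    ssj_ops = [282281 * (10 - i) / 10 for i in range(10)]
    watts = [189 - i * 6 for i in range(10)]
    print(round(adjusted_score(ssj_ops, watts, idle_watts=120)))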

UPDATE 11/13/2008 – The AMD results were actually not Barcelona, but AMD Shanghai results.  I had thought they were Barcelona but I didn’t realize that I was looking at yet-to-be-launched Shanghai performance numbers.

In conclusion, AMD has made huge strides to nearly close the SPECpower gap, but they’re behaving inappropriately by comparing their products to suboptimal benchmarks that they themselves submitted.  That’s unfortunate, because this new controversy has overshadowed the huge progress made by AMD.  Had AMD launched these mid-2 GHz Barcelona processors on time a year earlier, they would have been extremely competitive all through 2008, but it was not to be and they suffered for it.

AMD made some huge clock-for-clock, core-for-core gains in Server Side Java (SSJ) performance with their quad-core Barcelona chips compared to their older dual-core Opteron chips.  Even when I factor out the clock speed difference, a quad-core 2.7 GHz Barcelona is still 3.14 times faster than a dual-core 2.4 GHz Opteron system in Server Side Java performance.  These gains have allowed AMD to become very competitive against Intel’s aging Penryn chips, although AMD cannot claim the title.  It’s also interesting to note that Barcelona made huge improvements in web server performance as well and is beating Intel Penryn servers on dual-socket SPECweb2005.  However, Intel Penryn-class CPUs still win by a large margin in the single-processor server market.

The reason for this disparity between single- and dual-socket servers is that Intel’s memory architecture is constraining their dual-socket performance, but that limitation will soon disappear with Intel’s Nehalem Microarchitecture server CPUs which should launch by the end of this year. Intel’s Nehalem CPUs have completely closed the memory performance gap with QuickPath architecture and they’ve even managed to leapfrog AMD’s memory architecture with 50% more memory channels and higher performance DDR3 memory.  Coupled with the improvements in the Nehalem execution engine, there is little doubt that Intel will be regaining a comfortable lead in the Server and Desktop markets.

AMD will narrow that gap with their Shanghai processors if they can launch on time but few analysts predict Shanghai will come close to beating Nehalem.  Whether or not AMD can launch Shanghai on time or get close enough to Intel Nehalem to be reasonably competitive remains to be seen.

The best video game I’ve played in years

November 4th, 2008 No comments

A few nights ago, I read a review of a new game, “World of Goo”. The description sounded interesting, kind of “Lemmings” meets “Pipe Dream”. Tonight I tried the demo. From my perspective, this is the best video game that I have played in years. It fits my needs perfectly:

  • Level-based play… some levels only take a few minutes to complete
  • Quiet – this is important when Jarrett (my son, whose room is adjacent to my office) is asleep and I have time to play games
  • Intellectual, but not too in-depth – again, I play games LATE at night; I want to be challenged, but not forced to keep a notebook
  • Stupid simple – I don’t even think the game has a tutorial
  • Little to no need for reaction time – “twitch” games don’t often hold my interest
  • Fresh, innovative concept – this game has it
  • Gameplay is not based upon direct competition against others, but competition against yourself
I cannot recommend this game enough. For $19.95 on Steam, it is a steal, too. I have not played Portal, so I can’t compare the two, but I will say that it holds up well against Lemmings in terms of concept. It’s a lot like Phun (a physics simulator), but with a goal and a game, not just “play in the sandbox”.

If you are someone like me, who would like to play a nice video game that doesn’t make a lot of noise, doesn’t have high time requirements, and can be played on a “pick up and go” basis, this is the perfect game. Congrats to 2D Boy for putting together such a superb game.

J.Ja

Categories: Games, Reviews Tags:

Some Vista quirks that drive me nuts

November 3rd, 2008 5 comments

I’ve been a Windows user since version 3.0. That’s a pretty long time. Indeed, I even used Windows NT 3.1, which was a fairly rare product “in the wild”. Over the years, I have watched the bug count drop dramatically. Not just the true “bugs”, but the stuff that programmers joke about: “that’s not a ‘bug’, it’s a ‘feature’”. Still, Vista has a few of these quirks (and a few new ones), and they drive me nuts.

Re-arranging the Start Menu
UAC is great, in my mind. I love the fact that if something serious is happening, I need to sign off on it. I think UAC is something most people develop “Click-Yes-Itis” with very quickly, but I don’t. That being said, it drives me absolutely nuts that re-arranging the Start Menu involves signing off on so many things per drag/drop operation. First, it needs administrator approval. Then, UAC comes up. Then I need to confirm the move. And if the destination folder already exists (like if a service pack re-created the Start Menu entry in the original location), I need to merge the folders together. Argh! Luckily, I only need to put up with this on an occasional basis.

When things on the Start Menu get moved…
This has been a problem with Windows since Day 1. The OS simply has zero awareness of the Start Menu, other than it being a standard directory tree. This is fine, and I am sure that it saved them a ton of programmer hours. The problem is, if the user re-arranges the Start Menu in any way, it causes pure chaos. For example, I create, at the top level, functional categories such as “Multimedia”, “Communications”, and “Programming” on my Start Menu, and then move the entries for installed applications as needed. Very few applications (on most installs for me, only Microsoft Office) still warrant their own top level entry. This works great for me, until the application gets updated. Service packs and patches re-create the original top-level entry. Uninstall does not remove the entry because it is not where the uninstaller thinks it should be. The answer is for someone at Microsoft to spend a week or two writing some code to make this smarter.
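
Even detecting the mess would be trivial. Below is a rough Python sketch of the sort of check I mean: walk both the per-user and all-users Start Menu trees and flag any shortcut that appears in more than one folder, which is the usual sign that an installer or service pack re-created an entry I had already moved. The paths assume a default Vista layout, and the whole tool is my own illustration, not anything Windows ships.

    # Flag Start Menu shortcuts that appear in more than one folder.
    import os
    from collections import defaultdict

    START_MENUS = [
        os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Start Menu\Programs"),
        os.path.expandvars(r"%ProgramData%\Microsoft\Windows\Start Menu\Programs"),
    ]

    def index_shortcuts(roots):
        """Map each .lnk file name to every Start Menu folder it appears in."""
        seen = defaultdict(set)
        for root in roots:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    if name.lower().endswith(".lnk"):
                        seen[name.lower()].add(dirpath)
        return seen

    for name, folders in index_shortcuts(START_MENUS).items():
        if len(folders) > 1:                # duplicated entry, probably re-created
            print(name)
            for folder in sorted(folders):
                print("   ", folder)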

Lack of a proper “Shadows” file type
OS/2 had a great file type called a “Shadow”. *Nix has a similar idea in the form of a file link. The idea is that a file has one physical entry, but you can have other entries, in other directories (or in the same directory), which “point to” the file. No, not like Windows’ useless shortcuts. The problem with shortcuts is that they are too much of a hybrid model; you have a 50/50 shot of working with the shortcut file itself rather than its target. With a “shadow” file, operations always act upon the target, with a few rare exceptions. Ask a shadow where it is, and it gives you the shadow’s directory path, not the target’s. Deletion always removes the shadow, not the target. And so on. But the idea is that when you act upon the shadow’s metadata (say, right-clicking on it), you get the target’s information, not the shadow’s. That’s what I really hate about shortcuts: you need to follow them to the original file to do a lot of useful things. Bleh.
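
For anyone who has not used *nix links, here is a quick Python sketch of the semantics I am describing, runnable on any Linux or Mac box. It only illustrates symlink behavior; it is not a Windows feature.

    # Symlinks behave roughly like OS/2 shadows: most operations follow the
    # link to the target, and only a few act on the link itself.
    import os
    import tempfile

    workdir = tempfile.mkdtemp()
    target = os.path.join(workdir, "report.txt")
    link = os.path.join(workdir, "report-link.txt")

    with open(target, "w") as f:
        f.write("the real contents\n")

    os.symlink(target, link)              # create the link (the "shadow")

    print(open(link).read())              # opening the link reads the target
    print(os.stat(link).st_size)          # stat() follows the link to the target
    print(os.lstat(link).st_size)         # lstat() is the rare "act on the link" call
    os.remove(link)                       # deleting the link leaves the target alone
    print(os.path.exists(target))         # True: the target survives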

Recycle Bin Navigation
It is still a pain in the neck to get around the Recycle Bin. I go in there about twice a year, but when I do, I would like to be able to find what I am looking for. Am I being picky? Maybe. But I would still like to see this improved.

The Registry
I remember back to 1994-1995 when Microsoft hailed the Registry as this awesome thing that would save people from having to hunt down INI files and would make managing settings more uniform. It did that. Now it is uniformly difficult to find what you need in the Registry. Unless you are a Registry magician familiar with all of the odd hierarchies, it is nearly impossible to find what you are looking for in there.

Regedit
On that note, why has Regedit remained essentially unchanged since 1995? Would a proper “Find All/Replace All” hurt? Given the level of knowledge of the average person running Regedit, would a regular-expression Find/Replace really be too advanced a feature? Why is Regedit more primitive than Notepad?
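
To show how little code a basic “Find All” would take, here is a rough sketch using Python’s winreg module. It searches only HKEY_CURRENT_USER for a substring in key names, value names, and value data; the function name, depth limit, and example search term are my own choices, not any Microsoft tooling.

    # A minimal "Find All" for the Registry: recursively search a hive for a
    # substring in key names, value names, or value data.
    import winreg

    def search(root, path, needle, results, max_depth=12):
        try:
            key = winreg.OpenKey(root, path)
        except OSError:
            return                                    # no access, or key vanished
        with key:
            nsubkeys, nvalues, _ = winreg.QueryInfoKey(key)
            for i in range(nvalues):
                name, data, _type = winreg.EnumValue(key, i)
                if needle.lower() in name.lower() or needle.lower() in str(data).lower():
                    results.append((path, name, data))
            if max_depth == 0:
                return
            for i in range(nsubkeys):
                sub = winreg.EnumKey(key, i)
                child = path + "\\" + sub if path else sub
                if needle.lower() in sub.lower():
                    results.append((child, None, None))
                search(root, child, needle, results, max_depth - 1)

    results = []
    search(winreg.HKEY_CURRENT_USER, "", "OneNote", results)   # example search term
    for path, name, _data in results:
        print(path, name or "")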

Backups
Why does Vista Backup consider backing up to a directly attached drive to be the Holy Grail of backing up, and treat backups to network locations as second-class citizens, particularly for full system state backups? Why does Vista Backup not have a way of smartly rotating my backups and “folding” them together, so that I don’t have to wipe out all of my backups every few months and start over? Why do both Vista Backup and Windows Server 2008 Backup feel like a massive step backwards from the capable but feature-poor backup in previous versions of Windows?
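
By “folding” I mean something like a grandfather-father-son rotation: keep every backup from the last week, one per week for the last month, and one per month after that. A rough Python sketch of the retention logic I have in mind (entirely hypothetical; nothing in Vista Backup works this way):

    # Decide which backup dates to keep under a simple daily/weekly/monthly
    # "folding" policy; everything else is a candidate for pruning.
    from datetime import date, timedelta

    def keep(backup_dates, today=None):
        today = today or date.today()
        kept, weekly_seen, monthly_seen = set(), set(), set()
        for d in sorted(backup_dates, reverse=True):
            age = (today - d).days
            if age <= 7:
                kept.add(d)                               # keep every daily
            elif age <= 31:
                week = d.isocalendar()[:2]                # (ISO year, ISO week)
                if week not in weekly_seen:
                    weekly_seen.add(week)
                    kept.add(d)                           # one per week
            else:
                month = (d.year, d.month)
                if month not in monthly_seen:
                    monthly_seen.add(month)
                    kept.add(d)                           # one per month
        return kept

    # Example: 90 nightly backups collapse to 15 kept sets
    # (eight dailies, four weeklies, three monthlies).
    nightly = [date(2008, 11, 3) - timedelta(days=i) for i in range(90)]
    print(len(keep(nightly, today=date(2008, 11, 3))))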

What Windows quirks drive you nuts?

J.Ja

Strike the word “future proof” from your tech vocabulary

November 1st, 2008 8 comments

Intel and Asus are trying to get community feedback on what people want in a PC, and one of the more popular ideas floated is a “future proof PC”.  I’ve been trying to tell people for nearly two decades that there is no such thing as “future proof” in the computer industry, and the sooner they strike that idea from their heads, the better off they will be.  This rule applies equally to consumers and the IT industry.

Why am I so adamant about this?  Because I’ve seen people shoot themselves in the foot over and over again, and while most of them learn, it’s an expensive lesson.  It’s the same old story: I’ve seen people ignore my warnings and buy that $5,000 to $10,000 PC because they think it’s future proof for 5 to 10 years.  The reality is that within 2 years it’s a dust-filled, gunked-up box that is inferior to any new $2,000 PC.

I see companies fork out $50,000 for a server with two empty sockets so that they have “upgradeability”, when they could have bought an equally fast server for $20,000 without it.  Less than 2 years later, when they’re out of capacity, it costs them another $20,000 to upgrade, yet they could just as easily buy a brand new $20,000 server that’s faster than the upgraded one.  Oh, but the IT guy will argue that the new server requires a migration while the other is a drop-in replacement.  The problem with this argument is that you can just as easily align your major software upgrades with new servers, which is much simpler and safer than an in-place software upgrade on the production server.  If anything goes wrong on the production server during an in-place upgrade (which is quite common), you’re toast.  If anything goes wrong on the new server, you have plenty of time to iron it out while your production server hums along.  When you get everything right, just flip production over with a few DNS changes and you’re done.

But dtoid, the gamer who posted the suggestion of a future proof computer, is really talking about an upgradeable PC.  The problem with this idea is that by the time you upgrade the motherboard, GPU, memory, and CPU to get the necessary improvements, you’ve only managed to keep a gunky old keyboard and an ATX chassis.  Had you kept all the parts intact, you would have a nice hand-me-down computer to give away or sell on eBay.  The worst example of a dumb upgrade I can remember is people who spent $250 on a CPU socket adapter so that they could put a new CPU in an old motherboard, when a new motherboard was better and cheaper.

Buying a new computer generally isn’t that much more expensive than an upgrade, because the only extra money you’re spending is on the chassis, power supply, and optical drive, which is hardly more than $200.  Besides, getting a fresh install of Windows is half of the new-PC experience.

Categories: Build enthusiasts, Hardware, Tips Tags: