A silent PC is one that makes absolutely no noise, and by necessity has no moving parts (including fans). Such systems usually use very low-end hardware limited to trivial tasks such as running a cash register. The system introduced today, a Solid-State PC (SSPC), is a powerful quad-core i5 PC which runs most software faster than the majority of modern PCs, yet draws less than 25W at idle.
Like many others, I had been holding my breath for the new MacBook Pro, hoping some of the rumors were true while others were not.
First let’s take a look at the good, which would be the obvious inclusion of the Sandy Bridge processor. The Core 2 Duo was aging gracefully, but still needed to be retired and replaced by a much speedier i5 offering two generations of performance boost over the Core 2 Duo. The immediate added bonus, and probably the second most promoted item, is the inclusion of Light Peak, or as it has now been rebranded, Thunderbolt. With an interface that allows for 10 Gbps of bandwidth, moving data to an external SSD has never been so fast. In fact, I might want to run my games off of the external drive because of the speed. A couple of mainstays on the new MacBook Pro are the FireWire 800 port and two USB ports. We have the same SuperDrive without any mention of a Blu-ray drive at this time; clearly Apple wants to distance itself from Sony and promote its iTunes Store here. All MacBook Pros include an illuminated keyboard, which they have had for a couple of generations now. The resolution starts with the very familiar 1280×800 and moves upwards, and we also get the familiar SD card slot, which arrived with the 2010 generation of MacBook Pros. The one last good thing I have to mention is that they have bumped the default hard drive capacity up to 320 GB. However, if you want an SSD, it is no cheaper an upgrade than it was a year ago.
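To put that bandwidth figure in perspective, here's a quick back-of-the-envelope sketch. It assumes the full 10 Gbps line rate is achievable and ignores protocol overhead, which real-world transfers never manage, so treat the numbers as a best case:

```python
# Rough transfer-time estimate over a Thunderbolt (10 Gbps) link.
# Assumes the full line rate and ignores protocol overhead -- a
# best-case figure, not a measured one.

LINK_GBPS = 10                       # Thunderbolt channel rate, gigabits/s
BYTES_PER_SEC = LINK_GBPS * 1e9 / 8  # convert to bytes/s -> 1.25 GB/s

def transfer_seconds(size_gb: float) -> float:
    """Seconds to move `size_gb` gigabytes over the link at line rate."""
    return size_gb * 1e9 / BYTES_PER_SEC

# A 50 GB game install would move in about 40 seconds at line rate.
print(transfer_seconds(50))  # -> 40.0
```

Even at half the theoretical rate, an external SSD on this interface would outrun most internal spinning drives of the day.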
Now time for what I consider the bad. The MacBook Air 13.3″ laptop has a superior 1440×900 resolution screen that makes me almost want that particular laptop instead of the 13.3″ MacBook Pro. Also, and I personally hold Intel responsible for this, the 13.3″ models suffer from using Intel’s integrated HD 3000 graphics. This is an unfortunate departure from the NVIDIA chipsets in the last four generations of MacBook Pros. To date, I have not met an Intel video chipset that I have liked. They are all slow performers and lack the power that I need just for my day-to-day work. I may try the latest MacBook and change my mind, but I highly doubt it; I can usually tell when I am running a PC with an Intel graphics chip rather than an alternative.
Last, and this is probably the reason I recommend anyone with a current MacBook Pro stay away from this upgrade, Apple has slashed the battery life: the new models are rated three hours less than the previous generation. To me that means I might as well stick with my iPad for long trips or try a different brand of laptop (see the update below). I currently think a Lenovo ThinkPad T420 has my name on it. As much as I was looking forward to the new releases, Apple has done little to impress me and much to disappoint me.
As for Steve Jobs, please get well soon as I feel your company is beginning to disappoint me.
UPDATE: There was a bit of a misunderstanding about the battery life. Apparently the battery holds the same charge as before and the laptop has the same power draw as before, but the testing methodology changed. As noted in a Computer Shopper review, the battery life is the same in both 13.3″ laptops. The new test runs the DVD drive during operation, while the older test was based on “average” use, which would be something akin to browsing the web or performing other low-CPU-intensity tasks. I hope that holds accurate, as I would hate to see newer generations of laptops moving toward power-draining CPUs again.
I recently had a strange problem with one of my PCs. It was acting slow and sluggish, then the RAID 1 dropped a drive out, saying it had failed (I’ve used RAID 1 on all my PCs for a while now, and I highly recommend it). I shut down, inspected the failed drive, and powered the PC back up, and it wouldn’t boot. No matter what I did, it wouldn’t come up. The next morning, it had shut itself off, and when I turned it on, it worked perfectly fine… and then shut itself off again after about 30 minutes. Clearly, I had a heat-related issue. But I wasn’t seeing any of the symptoms of CPU overheating, like random reboots or application errors; the protective shutdown was the only CPU-heat symptom, while the rest of the problems (drive errors, for example) pointed to motherboard issues. I installed the motherboard vendor’s monitoring tools, and it was clear that the CPU was indeed overheating; it hit 97 °C within about 10 minutes of booting! Eventually, the PC refused to boot at all. I ordered a new motherboard, and thanks to Amazon Prime, it was delivered less than 24 hours later for only $3.99 S/H.
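A watchdog like the one below could have caught this before the drive errors started. On Linux you could feed it readings from `psutil.sensors_temperatures()`; here the readings are hard-coded sample data, and the 95 °C trip point is my own assumption, not a spec from Intel:

```python
# Sketch of a temperature watchdog. The readings below are sample
# data shaped like what I saw (climbing to 97 C within minutes of
# boot); the 95 C trip point is an assumption, not a specification.

TRIP_POINT_C = 95.0

def overheating(readings_c, limit=TRIP_POINT_C, consecutive=3):
    """Return True once `consecutive` readings in a row exceed `limit`."""
    streak = 0
    for temp in readings_c:
        streak = streak + 1 if temp > limit else 0
        if streak >= consecutive:
            return True
    return False

samples = [52.0, 68.0, 81.0, 90.0, 96.0, 97.0, 97.5]
print(overheating(samples))  # three readings above 95 C in a row -> True
```

Requiring several consecutive readings over the limit avoids false alarms from a single noisy sensor sample.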
Even though I was certain I knew what the fix was, I did a quick consultation with my friend Chris Ansbach via IM. He really knows his stuff, and he pinpointed the exact cause of the problem, which is going to help me prevent it in the future. If you need to work with someone who knows their stuff, he’s your person, and I’d gladly put you in touch with him. Looking at the motherboard layout, the two bridge chips are located right next to the CPU, and both are passively cooled. Inspecting the CPU and heatsink showed the cause of the overheating. The heatsink is the stock Intel model, and its plastic clips can eventually lose a little bit of tension. While the heatsink will still be on, and feel firmly attached, it will no longer make good contact with the CPU. Meanwhile, the thermal grease dries up from the heat (mine flaked off) and becomes less effective, compounding the problem. Eventually, the CPU starts to overheat. Because of the location and cooling of the bridge chips, they were overheating too, causing that flakiness. After replacing the motherboard, the system is working like a champ; I got very lucky that the CPU was not damaged!
So, what’s the takeaway here? Two things:
1. Motherboard design matters a lot more than I thought. From here on out, I am going to be looking for motherboards where the bridges are actively cooled, and not right up against the CPU.
2. Heatsink design matters, even in a non-gaming, non-overclocked machine. Two big things I learned to look for: a backplate that secures the heatsink to the CPU with screws or some other fastening mechanism that will not loosen with time, and a fan that blows up or sideways, not down; this ensures that if the case air is hot, it isn’t making the CPU any hotter. I knew about some of the other stuff (heat pipes to elevate the heatsink away from the CPU, larger designs, etc.), but these were two things I just was not aware of, particularly the backplate.
Hope this helps someone avoid the same kind of meltdown I had!
Well, at least in the San Francisco South Bay at Fry’s, though they usually have these deals elsewhere at other times. There’s also an additional $10 rebate, but I don’t have much faith in those. Newegg sells this 32-nm “Westmere” class CPU alone for $140, so getting the CPU plus motherboard for $110 is an awesome deal. Westmere is one generation newer than Nehalem and comes with some new security features and AES acceleration. The CPU package includes a G55-class graphics chipset which is roughly 2-3 times faster than the older G45 Intel graphics chipset. This would be perfect for an HTPC setup.
Also, this 11.6″ Core Solo based small notebook with 6-cell battery at $350 is a great Netbook killer.
Fry’s (San Francisco Bay Area stores only) has a great deal on an Intel i3 540 CPU and Gigabyte H55m-S2H motherboard for just the cost of the CPU. That basically saves the cost of a $90 motherboard (price at Newegg) and the cost of shipping if you live near a Fry’s. Here’s a positive review of the Gigabyte H55m-S2H motherboard in case you’re wondering if the motherboard is worthwhile.
This is a low-power Intel “Clarkdale” system with a 32nm dual-core Westmere-class CPU and a 45nm Intel G55-class graphics processor built into the CPU package. Power consumption is very low for idle and peak and Clarkdales are known for extreme overclocking potential. The motherboard has DVI and HDMI so it is a great HTPC candidate.
The Clarkdale graphics has full dual-stream 1080p offload and probably more than double the 3D performance of the older G45-based graphics from Intel. That’s still not good graphics performance by any stretch of the imagination, but it is decent for an integrated part: fine for casual gaming like World of Warcraft-type games, but not for demanding 3D shooters.
At my job, we have a second, hardware-identical chassis backup server sitting in our rack. In case of failure on our main server, we can just move all of the disks from the main server to the backup machine, fire it up, and solve the failure on the main machine at our leisure. A few months ago, we decided to purchase a second CPU for the backup machine. We have enough VMs running at this point to justify it. So, we ordered a secondary CPU with the exact same part number, put it in, and thought nothing of it. When we had the chance to test it, we just could not get it to work. Windows would not come up, it would just reboot halfway through startup. A dummy install of Linux seemed to work. But no matter what we did, Windows just wouldn’t come up.
We spent dozens of hours on this issue. One thing we noticed was that it didn’t matter which CPU was removed, or which socket was populated: the system worked with only one CPU in it. A motherboard swap didn’t do the trick. At one point, we noticed that one of the CPUs did not seem to show up in the BIOS’ temperature readings. Eventually, someone discovered that the CPUs had slightly different revision numbers on them. More research showed that the different revisions of these “identical” CPUs might as well have been different chips entirely; there were huge differences in their capabilities and feature sets. No wonder Windows was taking a hike! It was detecting features on one CPU and trying to use them on the other. My suspicion is that Linux simply wasn’t trying to use those features.
So, if you buy a CPU for a system some time after you bought the other CPU(s), check the revision carefully. There is a special manufacturer’s order number that differentiates them. Check what your original CPUs have, and make sure that the place you order from has identical parts in stock. It took us many, many tries to find a vendor willing and able to actually verify this information before shipping, but now we have two perfectly identical CPUs in the system, and it works like a charm.
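On Linux, one quick sanity check is to compare the family/model/stepping each processor reports in `/proc/cpuinfo`. The parsing helper and sample data below are my own sketch, not a standard tool, and the sample values are made up to illustrate the kind of mismatch that bit us:

```python
# Hedged sketch: verify every CPU in a multi-socket box reports the
# same family/model/stepping. On Linux, pass in the contents of
# /proc/cpuinfo; the sample text below is made-up illustration data.

def cpu_revisions(cpuinfo_text):
    """Collect the set of (family, model, stepping) tuples seen."""
    revs, current = set(), {}
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key in ("cpu family", "model", "stepping"):
            current[key] = value
        if len(current) == 3:
            revs.add((current["cpu family"], current["model"], current["stepping"]))
            current = {}
    return revs

sample = """\
processor : 0
cpu family : 6
model : 23
stepping : 6
processor : 1
cpu family : 6
model : 23
stepping : 10
"""
revisions = cpu_revisions(sample)
print("mismatch!" if len(revisions) > 1 else "all identical")
```

Two different tuples means two different silicon revisions, even if the retail part number on the box is identical.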
AMD launched its 45nm Shanghai processors today for the server market, ahead of Intel’s Nehalem server processor launch. AMD lists a series of benchmarks here, but it omits many of the better results from Intel. To get the full picture, here are the benchmarks based on the best official scores available from AMD and Intel as of November 13th, 2008.
It appears that AMD has made some important gains: it has taken the lead in SPECjbb and SPECweb, and expanded its lead in SPECfp and virtualization (due to Nested Paging). AMD still trails in SPECint, SPECpower, and SAP, but this is an important victory for AMD, which was plagued by delays in 2007 and 2008 with its previous Barcelona processor. Shanghai is a major milestone for AMD because it required a shift to a whole new 45nm immersion lithography process, and it shows that AMD can launch a product on time and put Barcelona behind it.
However, Intel is expected to launch its Nehalem-EP processors for the mainstream two-socket server market within a few months and Nehalem is expected to be a huge jump in performance on the server platform. While the new triple-channel DDR3 unbuffered memory subsystem doesn’t do too much for the Intel i7 Nehalem desktop processors, it is expected to unleash a huge increase in performance for Intel Nehalem.
Intel’s Penryn class of processors, launched last year, will be the last generation of Intel processors to use the Front Side Bus (FSB) and a single North Bridge memory controller located on the motherboard. The FSB and single North Bridge memory controller weren’t a problem for most of Intel’s product line, but they significantly hampered Penryn’s performance in the two-socket market at higher clock speeds. That wasn’t a problem in practice, since AMD had trouble launching its Barcelona products early enough and at high enough clock speeds to threaten Penryn on the high end, and only now are the newest AMD Shanghai processors competing head to head with Intel Penryn. Some people wondered why Intel stuck with the FSB architecture for so long, but the timing seems to have been appropriate given the outcome of the last two years.
Intel’s Nehalem will be too fast to run on FSB architecture which is why Intel designed it with QuickPath. Nehalem will have no such memory bandwidth limitations as it transitions to QuickPath architecture with a memory controller on each microprocessor. Nehalem also catches up with AMD on nested paging support for improved virtualization, but the late timing doesn’t seem to have hurt Intel given the fact that virtualization hypervisors are only now beginning to support nested paging. So the race is on to see how quickly AMD can ramp up Shanghai processors and how long it takes Intel to launch Nehalem.
Performance benchmarks are the equivalent of the Indianapolis 500 for the computer industry, if not more important. Rarely are the top performance numbers attainable in the real world, but winning is a critical symbolic victory in the war of perception. The winner of the benchmarks will be perceived to have a technological lead across their entire line of products while the loser is perceived to be inferior across their entire line of products.
Benchmarks are like races, and the winner is determined by the best performance. The only fair way to conduct benchmarks is to have each vendor put forth its best contender to achieve the highest scores possible within a common set of rules and parameters. By this measure, AMD’s recent submission of SPECpower energy efficiency benchmarks on behalf of Intel servers, which portrays Intel in a sub-optimal light while ignoring superior Intel scores, is highly inappropriate.
AMD says that an AMD server gets a score of 731 while the Intel server that AMD submitted on behalf of Intel gets a score of 561. AMD claims this is a legitimate comparison because the systems share many commonalities and they defend their behavior by saying:
“if we were trying to show worst case scenario we would have turned off Power Management on the Intel server”
But the fact that AMD could have turned in even worse performance numbers for the Intel system is totally irrelevant. If anything, submitting plausible performance numbers on the Intel system is far more insidious because it is a more effective deceit.
The obvious problem with AMD’s explanation is that the Intel system submitted by AMD is a poorly optimized hardware configuration for Intel in terms of energy efficiency. The system uses Fully Buffered DIMMs, which are notoriously power-hungry, while the most efficient Intel systems use unbuffered DIMMs. Intel launched the San Clemente 5100 series chipset exactly one year ago, which uses the same unbuffered memory as AMD. While it’s true that Intel initially went with Fully Buffered memory two years ago, that was obviously a mistake given the power consumption, and Intel quickly fixed it last year. With Nehalem-EP two-socket servers coming at the end of this year, Intel will only use ECC unregistered or registered DIMMs and will completely shun FB-DIMMs. The bottom line is that anyone looking to build an efficient Intel-based server today should use unbuffered memory, and AMD conveniently forgot to do that.
The other possible problem in AMD’s explanation is that they used the “same JVM command line options” for the Server Side Java (SSJ) benchmark. While this sounds like a fair comparison, it’s common for different CPUs from different vendors to require different command line options to achieve optimum results. This may explain why the AMD-submitted Intel server was 5% slower than comparable Intel systems submitted by other vendors. Combined with the suboptimal Intel hardware and possibly suboptimal software configuration, we can see the likely reason why the AMD-submitted Intel system did so poorly.
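For illustration only, here is the kind of per-platform JVM tuning a Server Side Java benchmark run typically involves. These are generic HotSpot options, not the actual command lines from any SPECpower submission, and `ssj.jar` merely stands in for the benchmark jar:

```shell
# Illustrative only -- generic HotSpot flags, not from any actual
# SPECpower submission. Heap sizing, garbage collector choice, and
# large-page support are normally tuned per platform:

# Tuning that might suit one vendor's CPU and memory layout...
java -Xms3600m -Xmx3600m -XX:+UseParallelGC -XX:+UseLargePages -jar ssj.jar

# ...while a competing platform might do better with different GC
# threading, making "the same command line options" far from optimal:
java -Xms3600m -Xmx3600m -XX:+UseParallelGC -XX:ParallelGCThreads=8 -jar ssj.jar
```

The point is simply that identical flags on different hardware rarely mean an identically fair tuning effort.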
To get an accurate picture of what’s really going on, we need to look at the best possible SPECpower scores for AMD and Intel to determine who the actual winner is. The table below compares the official SPECpower_ssj2008 energy efficiency benchmarks of the top dual-processor servers from Intel and AMD.
| System (SPECpower_ssj2008) | Vendor | DIMMs | Peak SSJ | Peak watts | Score |
|---|---|---|---|---|---|
| Intel L5430 2.66 GHz | PowerLeader | 2 | 285,970 | 161 | 1,135 |
| Intel L5420 2.5 GHz | SuperMicro | 2 | 279,209 | 174 | 990 |
| Intel L5420 2.5 GHz | HP | 2 | 282,281 | 189 | 930 |
| Intel L5430 2.66 GHz | Fujitsu Siemens | 4 | 293,162 | 220 | 827 |
| AMD 2384 2.7 GHz | AMD | 4 | 338,577 | 264 | 860 |
| AMD 2384 2.7 GHz | AMD | 4 | 335,116 | 264 | 827 |
Because the SPECpower rules don’t specify a minimum number of memory DIMMs for a server, AMD should have submitted servers with 2 DIMMs like everyone else to get the best scores. Instead, AMD submitted servers with 4 memory DIMMs, which gives them a handicap of roughly 3.58 watts at idle to 8.42 watts at peak. However, AMD neutralized that handicap by using the newest Western Digital GreenPower 3.5″ hard drive, which saves 4 idle watts to 6 active watts compared to the hard drives used by the other systems.
But for the sake of comparison, I estimated the scores of the 3 Intel systems that used only 2 DIMMs and added 3.58 watts in idle and gradually increased up to 8.42 watts at peak Server Side Java (SSJ) loads to simulate power consumption for a 4-DIMM server. But when these servers get upgraded to 4 DIMMs, they have higher memory performance which translates to higher overall performance. Based on the peak SSJ scores in the table above, I estimated a 2.5% boost in peak SSJ performance which slightly counteracts the negative effects of the higher power consumption in terms of performance per unit energy. Based on this estimate (which should NOT be taken as official SPECpower_ssj2008 scores), I calculated that the top three systems in the table above would have achieved a score of 1006, 889, and 836 which is still higher than the AMD servers. But if those top three Intel servers had used the same energy efficient hard drives used in the AMD servers, the score reverts back to something similar to the unadjusted numbers.
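The adjustment described above can be sketched as code. This is a simplified model of my estimate, not an official SPECpower calculation: it takes per-load-level measurements, adds the estimated 4-DIMM memory power (3.58 W at idle, scaling linearly to 8.42 W at full load), applies the 2.5% throughput boost, and recomputes the overall ssj_ops-per-watt figure. The load levels and watt values below are synthetic; real submissions list eleven calibrated levels:

```python
# Simplified model of the 2-DIMM -> 4-DIMM score adjustment described
# above. NOT an official SPECpower_ssj2008 calculation; the sample
# load levels and watt figures are synthetic illustration data.

IDLE_PENALTY_W, PEAK_PENALTY_W = 3.58, 8.42
SSJ_BOOST = 1.025  # estimated throughput gain from the extra DIMMs

def adjusted_score(levels):
    """levels: list of (load_fraction, ssj_ops, watts), idle as (0.0, 0, w)."""
    total_ssj = total_watts = 0.0
    for load, ssj, watts in levels:
        # Interpolate the extra DIMM power between idle and peak.
        penalty = IDLE_PENALTY_W + (PEAK_PENALTY_W - IDLE_PENALTY_W) * load
        total_ssj += ssj * SSJ_BOOST
        total_watts += watts + penalty
    return total_ssj / total_watts  # overall ssj_ops per watt

# Synthetic three-level example (100% load, 50% load, idle):
levels = [(1.0, 285_970, 161.0), (0.5, 143_000, 120.0), (0.0, 0, 70.0)]
print(round(adjusted_score(levels)))
```

Because the extra DIMMs add watts at every load level but only boost throughput modestly, the adjustment always pulls the score down somewhat, which is exactly the handicap discussed above.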
UPDATE 11/13/2008 – The AMD results were actually not Barcelona, but AMD Shanghai results. I had thought they were Barcelona but I didn’t realize that I was looking at yet-to-be-launched Shanghai performance numbers.
In conclusion, AMD has made huge strides to nearly close the SPECpower gap, but they’re behaving inappropriately by comparing their products to suboptimal benchmarks that they themselves submitted. That’s unfortunate because this new controversy has overshadowed the huge progress made by AMD. Had AMD launched these mid 2 GHz Barcelona processors on time a year earlier, they would have been extremely competitive all through 2008 but it was not to be and they suffered for it.
AMD made some huge clock-for-clock, core-for-core performance gains with their quad-core Barcelona chips in Server Side Java (SSJ) performance compared to their older dual-core Opteron chips. Even when I factor out the clock speed difference, a Barcelona quad-core 2.7 GHz is still 3.14 times faster than an Opteron dual-core 2.4 GHz system in Server Side Java performance. These gains have allowed AMD to become very competitive against Intel’s aging Penryn chips, although AMD cannot claim the title. It’s also interesting to note that AMD Barcelona also made huge improvements in web server performance, and it is beating Intel Penryn servers on dual-socket SPECweb_2005. However, Intel Penryn class CPUs still win by a large margin in the single-processor server market.
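For readers wondering what "factoring out the clock speed difference" means here: scale the raw throughput ratio by the inverse clock ratio, so only per-clock gains remain. The raw ratio below is back-computed from the 3.14x figure rather than taken from a published score:

```python
# Clock-normalized throughput comparison. The raw Barcelona/Opteron
# ratio is back-computed from the 3.14x clock-normalized figure in
# the text, not a published benchmark score.

def clock_normalized(raw_ratio, clk_new_ghz, clk_old_ghz):
    """Throughput ratio with the clock-speed advantage factored out."""
    return raw_ratio * (clk_old_ghz / clk_new_ghz)

raw = 3.14 * (2.7 / 2.4)  # implied raw throughput ratio (about 3.53x)
print(round(clock_normalized(raw, 2.7, 2.4), 2))  # -> 3.14 by construction
```

In other words, the raw SSJ gap would be roughly 3.5x; normalizing to equal clocks still leaves a 3.14x per-clock improvement, which is why the gain is attributable to the architecture and core count rather than frequency.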
The reason for this disparity between single- and dual-socket servers is that Intel’s memory architecture is constraining their dual-socket performance, but that limitation will soon disappear with Intel’s Nehalem Microarchitecture server CPUs which should launch by the end of this year. Intel’s Nehalem CPUs have completely closed the memory performance gap with QuickPath architecture and they’ve even managed to leapfrog AMD’s memory architecture with 50% more memory channels and higher performance DDR3 memory. Coupled with the improvements in the Nehalem execution engine, there is little doubt that Intel will be regaining a comfortable lead in the Server and Desktop markets.
AMD will narrow that gap with their Shanghai processors if they can launch on time but few analysts predict Shanghai will come close to beating Nehalem. Whether or not AMD can launch Shanghai on time or get close enough to Intel Nehalem to be reasonably competitive remains to be seen.
Intel Atom motherboards with the 945 chipset have arrived (thanks to my friend Max for the tip), and they’re quite affordable: $77 with shipping, in stock here. This should make an awesome embedded device or home server, since the power consumption is so incredibly low.
This is a 4W TDP 45nm CPU that averages under a watt at idle. The only thing that disappoints me is the big honking heatsink and fan on the GPU/chipset, while the CPU takes a tiny bit of space with a tiny heatsink and no fan. The chipset uses an older manufacturing process, which is why it’s so big relative to the tiny 45nm CPU. However, I’m pretty sure you could remove the fan from the GPU/chipset heatsink, especially if you don’t plan on doing any 3D gaming.
AMD just released a “new” low-power 1.8 GHz quad-core model, the X4 9100e, and dubbed it “the world’s first 65W desktop processor”. But this claim seems dubious when you consider what AMD had to do to get to that power level and what it’s up against.
- AMD uses very low clock speeds to achieve these power numbers. The 9100e is clocked exceptionally low for a modern-day processor, which gives it horrible single- or dual-threaded performance compared to a less expensive dual-core 2.5 GHz or 2.66 GHz Intel processor. For the desktop market, it’s hard to see where a low-clocked quad-core makes sense.
- The other problem is that this is a very qualified and questionable victory in actual power performance. Anytime you drop the clock speed of a given processor, the power consumption will drop. Intel, for example, offers its 50W 2.5 GHz 45nm Penryn-class chip called “Harpertown” in the lucrative server space, but chooses not to offer this chip in the desktop space.
- Even Intel’s 2.66 GHz 45nm quad-core “Yorkfield” model Q9450 would come very close to the AMD 1.8 GHz 9100e quad-core in power consumption. This is fairly certain since TechReport, LostCircuits, and HotHardware all showed how close the QX9650 3.0 GHz 45nm came to the power consumption of 65W TDP dual-core CPUs, which is nothing short of amazing. The only anomaly is a recent set of power benchmarks released by TomsHardware which strangely claims that a 45nm Q9450 2.66 GHz consumes more power than an Intel Q6600 2.4 GHz 65nm processor, which goes against every 45nm power benchmark I’ve ever seen. As soon as the AMD 9100e quad-core becomes available, I’ll try to test it against an Intel 45nm quad-core to verify.
- But the biggest problem with this 9100e is that it is based on the buggy B2 stepping which has that infamous TLB (Translation Lookaside Buffer) bug that cripples performance when it’s patched. It would seem as if AMD is taking their leftover stockpiles of B2 stepping chips and repackaging them as energy efficient CPUs. So we’re talking about a low-clocked low-performance chip to begin with that has to take an additional TLB patch performance hit.
So why did AMD go with a buggy chip? It would seem that getting rid of excess buggy B2 stepping CPUs isn’t the only motivation. I got the set of stunning data below from someone I know whom we’ll call “Jack” for the time being. Jack has a PhD in Physics, and he is also a huge CPU technology enthusiast. Jack sent me the following power reading chart comparing two AMD X4 quad-core CPUs at identical voltage and clock speeds. The AMD 9850BE is a B3 stepping CPU and the AMD 9600BE is a B2 stepping CPU.
The 9850BE’s voltage and clock were dropped down to the level of the 9600BE to provide an apples-to-apples comparison between the B2 and B3 steppings. As you can see, there is still a 16-to-20-watt delta between these two chips from idle to peak power consumption. What this seems to suggest is that the B3 stepping consumes more power than the B2 stepping as a tradeoff for higher clock speeds. In fact, this may explain why AMD silently increased the TDP ratings of its quad-core Opteron CPUs, which use the same die as the X4 desktop quad-core processors. This explains why AMD would use the B2 stepping, despite its TLB bug, to produce its first 65W TDP desktop quad-core processor: the desktop market isn’t as discriminating as the server market when it comes to processor bugs. But why does AMD’s B3 stepping consume more power than the B2 stepping? Jack’s theory is that the gates are thicker in B2, which reduces leakage. His theory is based on the following observation:
Jack: “In Nov., Semi International reverse-engineered a Barcelona and quoted a gate thickness that was 15% HIGHER than what AMD disclosed at IEDM in 2005.”
Where Jack has the time to dig this kind of stuff up is beyond me, but it sounds like a reasonable theory that is backed up with hard numbers.
Here are some of Jack’s disclosures for his test methodology:
Conclusion on the AMD X4 9100e