Friday, June 27, 2008

Intel Core 2 Duo E7200: The new budget king


Mid-way through 2006 we saw the launch of Intel’s most impressive processor series in recent times, the Core 2 Duo. At the time the flagship model was the E6700, featuring a 2.66GHz clock frequency, 4MB L2 cache, 1066MHz FSB, and a 65nm manufacturing process, all of which was topped off with a price tag of $530. Today we are looking at a processor with similar specifications but one major difference: an estimated retail price of just $133.

The new Core 2 Duo E7200 is the first member of the “Wolfdale-3M” family. You may remember that not so long ago we looked at the new Wolfdale family of 45nm processors, which consists of the E8200, E8300, E8400 and E8500. These still remain today the fastest Core 2 Duo processors, offering superior performance and efficiency, not to mention stellar overclocking abilities.

The Wolfdale-3M family is, as you may have guessed, a cut-down version, and as the name alludes to, these processors feature a smaller 3MB L2 cache and a 1066MHz FSB. With the Core 2 Duo E7200, Intel is offering a 45nm processor that is considerably more affordable than the cheapest Wolfdale (~$180), and that even undercuts other budget-oriented models such as the E6550, which costs roughly $160, and the E4700, which at around $140 is also more expensive.

The Core 2 Duo E7200 operates at 2.53GHz, making it slightly slower than the E4700, though it makes up for this with a faster FSB and extra L2 cache. More importantly, as a 45nm chip it inherits the added efficiency the original Wolfdale CPUs are known for.
The Core 2 Duo E7200 is already available at $133, so there is no question regarding the legitimacy of its suggested retail price as there was with the E8400.

The only big question left to answer is how this 2.53GHz Core 2 Duo chip with 3MB of L2 cache performs. This and more will be answered very shortly, as we throw the usual batch of tests at the E7200 and compare it to a range of Intel and AMD processors.

AMD previews upcoming "Shrike" mobile platform

AMD is taking mobile computing seriously. Earlier this month the company introduced its Puma notebook platform, bringing together the new Turion Ultra mobile processor, the AMD 7-series chipset, and the Radeon HD 3000 range of integrated and discrete graphics solutions. While products based on the platform are just beginning to surface, already we are hearing details about its successor: the Shrike platform.

Due out in the second half of 2009, Shrike will be the first AMD “Fusion” product, and will incorporate the CPU, memory controller, and GPU into a single piece of silicon – though the use of discrete graphics cards will also be supported through PCIe. According to a presentation slide, AMD plans to build the first Fusion products around dual-core processors and promises substantial CPU and GPU performance improvements as well as longer battery life.

Interestingly, Shrike will also help AMD compete in the ultra-portable notebook market – from which it has been mostly absent – thanks to the integration of CPU and GPU, which will translate into smaller form factors. In fact, the company is pitching the platform as an alternative for sub-inch notebooks.

How an AMD dual-die quad-core could have worked

The AMD64 direct connect architecture was designed to support up to eight sockets without any intervening core logic. That means single-core, dual-core and quad-core devices could scale out to a maximum of 8, 16 and 32 cores respectively.

Some dual-socket Opteron motherboards connected only one processor to main memory. That primary CPU used one of its two HyperTransport (HT) links to connect to board peripherals. The other HT link connected to the secondary device. Because this chip didn’t have its own dedicated memory, access to the primary CPU’s RAM was via the HT bus. So if only one Opteron chip was going to be used, it had to be installed in the primary socket.

Probably the best known implementation of this was the Iwill ZMAXd2. This company managed to shoehorn a dual-socket Opteron motherboard into the space of a small form factor (SFF) sized case. Only two DIMM slots could be used, as space was so tight.

If the number of dual-core dies is doubled, the number of cores per package increases from two to four. So one package is a dual-die quad-core. Also, one package is now directly connected to memory. It should be evident that the individual packages are consolidated versions of the diagram above. If either of the dual-die packages replaced the pair above, the OS wouldn’t see anything different.

But this dual-die design now has four HT links per package, which can’t all be connected through the socket, as that interface can only support a maximum of three links. This problem is solved by linking the dual-core dies with a HT link through the package itself - thus bypassing the socket. This still leaves two HT links, as per design, to connect to the outside world. Four-socket Opteron setups and above would have one additional HT link per package.

But what about power and ground connections and the other miscellaneous signal interfaces? In Socket 940’s case, if the power and ground connections couldn’t all be shared, and multiplexing couldn’t do the same for the other miscellaneous interfaces, additional socket pins would be required.

A socket candidate
Socket F (1207) would have been a likely socket to use, as that has 267 more pins than Socket 940. This should have solved any pin count shortfall.

I said that AMD should have launched a dual-die quad-core in December 2006, when the company moved to 65 nanometre manufacturing technology. So the August 2006 Socket F launch would have paved the way for dual-die product.

But even if the company had to design brand new sockets for both DDR and DDR2 implementations, I believe that the availability window, when one factors in the quad-core delays, would have more than paid for the investment.

Hector speaks his mind
Someone who uses Hector as an alias (puking and suing Hector, that is) wrote something in jest, which unfortunately for AMD makes for painful reading. He said:

“Mario, AMD is a customer centric company. We always consult our customers who tell us what to do. In fact, I just called Dell to ask them what I should have for lunch today.”

“All of our customers told us that they prefer native quad cores to dual die dual core quad cores. They also told us that speed isn't important - 2 GHz will be just fine. In fact, 1.9 GHz will be peachy.”

“Our customers also said that being late to market with quad cores was OK - so we are late, slow and bogged down - because we are customer centric - we listen to our customers.”

“At least the few customers that we have left.”

Don’t forget the backstop
It seems evident to me that a three-pronged strategy - packaged CPU insert card, dual-die and native architectures - would have given AMD all the multi-core reach that it could handle.

AMD would have been first to market with every multi-core launch - dual-core, quad-core, octa-core, hexadeca-core and everything in between. The company’s product flexibility would have been very good as well. So if there was a delay of a particular design - quad-core comes to mind - the company would have had a dual-die version as cover.

The AMD quarterly financial report ends with a cautionary statement, which discusses the risks and uncertainties of its forward looking statements. AMD says in part, that, “Risks include the possibility that Intel Corporation’s pricing, marketing and rebating programs, product bundling, standard setting, new product introductions or other activities targeting the company’s business will prevent attainment of the company’s current plans”.

AMD was right in having a native quad-core strategy. But such a complex design required a backstop that the company didn’t have. AMD had a great opportunity to take the multi-core market by the horns, but its tunnel-vision strategy didn’t allow that to happen.

Since the chip maker got badly burned by its quad-core delays, let’s just hope that the company has learned its backstop lesson.

Intel's Atom already designed into future iPhone, say reports

LONDON — A version of Intel's Atom processor is already designed into the generation of the Apple iPhone set to follow on from the imminent 3G iPhone, according to Eric Savitz, the west coast editor for Barron's Magazine, who cited JoAnne Feeney, an analyst with FTN Midwest Securities Corp. (Cleveland, Ohio), as his source.

Savitz offered up the snippet in his blog saying that JoAnne Feeney had written in a research note that an Atom-powered iPhone would arrive in 2009 or 2010, most likely based on a processor made using 32-nm manufacturing process technology.

Savitz quoted Feeney as saying the Atom development program is "well ahead of schedule," and that this could allow Intel to demo the 32-nm Atom processor at the Intel Developer Forum in San Francisco in August.

The 3G iPhone will hit the streets on July 11 with a bill of materials (BOM) and assembly cost for the 8-Gbyte version of $173. The main applications processor is believed to be made by Samsung using an ARM processor core, along with baseband and RF chips from Infineon, as was the case for the original iPhone.

Observers have stated that ARM-based processors are more power efficient than those offered by Intel — on the same manufacturing process node. But if Intel can get to the next process node sufficiently far ahead of competitors such as Samsung, it may be able to persuade Apple to switch architectures. The argument would be that the power consumption of the available devices is roughly equivalent and that Intel is prepared to offer a low price to get the win. According to Savitz, quoting Feeney, Intel's arguments have already succeeded.

AMD should have beaten Intel to every multi-core launch

An AMD dual-die quad-core story from last year generated some letters expressing concern about how that device would work, which made an explanation necessary.

But having done that, it made me conclude that there was an easier first-step approach that should have come before a dual-die design - the packaged CPU insert card.

Also, now that AMD has got over its quad-core delivery problems and is shipping that device in volume, it seems opportune to look back at what the company might have done differently.

I said in that piece that AMD should have launched a dual-die quad-core in December 2006, when the company moved to 65 nanometre manufacturing technology. But I now believe that its dual-core, quad-core and octa-core introductions should have happened much, much earlier.

Market a metric - not the underlying technology
I was a believer in the AMD64 native processor strategy. But AMD was, and still is, wedded to it. That strategy would have been fine if the product had arrived and performed as advertised. But it didn’t in the case of Barcelona, and as a consequence the company continues to lick its self-inflicted wounds.

With its NetBurst processor, Intel had great sales success because the company exploited the frequency advantage that was inherent in the design. Even though AMD had comparable offerings that operated at lower frequencies, the chip giant used its marketing clout to convince the market that frequency really was the yardstick.

But the 2001 AMD Athlon XP launch re-introduced the model number nomenclature, which eventually convinced Intel that frequency really wasn’t representative of performance. So as a marketing tool for mainstream CPUs, frequency went the way of the Dodo.

Because the AMD64 architecture easily lends itself to packaged CPU card solutions, shouldn’t AMD have exploited the core advantage that a packaged card design would have delivered, just as Intel exploited NetBurst’s frequency advantage?


Putting the core into AMD64
Putting discrete chips onto an insert card and marketing it as a dual-core, quad-core, octa-core or hexadeca-core (16-core) device would have flown in the face of the company’s head-in-the-sand native processor strategy. But if the packaged architecture had delivered, the underpinning design really wouldn’t have mattered.

I’m confident that Intel would have agreed that it didn’t matter. The chip giant has been selling packaged dual-die quad-cores (two dual-core dies located on the same substrate) in volume for what seems like an age.

From the beginning, AMD should have offered two versions of its top-of-the-line Athlon 64 FX enthusiast CPU - single-socket (FX) and dual-socket capable (FX2). Later on, if the market was ready for more cores AMD could have introduced the FX4 part, which would have supported four sockets.

AMD’s dual- and four-socket Opteron chips would have been differentiated from their enthusiast cousins, as the latter models wouldn’t have supported registered memory.

HP, Sun and others may have supported an Opteron multi-socketed insert card. But if the tier one OEMs hadn’t played ball, AMD could have released a reference design for the channel to exploit.

Time to market advantage
A packaged CPU insert card is generally avoided because of cost. But as these multi-core modules could have arrived much earlier than the competition, the additional cost shouldn’t have been a problem.

With standard multi-socket AMD64 motherboard support inherent in the design, why should AMD have offered multi-socket card support as well? Because a packaged CPU insert card offers greater marketability and design flexibility.

The packaged CPU insert card would have had no performance advantage over the regular motherboard types. So this would have been purely a marketing play to be first to market with multi-core technology.

But of course, if AMD had wanted to it could have differentiated the platform in a number of ways. To focus attention on the module architecture, the chip maker could have launched its latest and greatest in a packaged version first. The company could also have increased the processor’s level two cache. If it wasn’t too complicated it may have been possible to introduce an off-chip level three cache as well.

Using a dual socket design, if a packaged CPU architecture had been part of AMD’s platform strategy, dual-core, quad-core and octa-core packages could have been brought to market when the single-core, dual-core and quad-core devices were launched.

Using a quad-socket design, the quad-core part could have launched at the same time as the single-core introduction. The octa-core part could have arrived with the dual-core introduction, and the hexadeca-core package could have debuted with the quad-core launch.
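
To make the core-count arithmetic in the two scenarios above explicit, here is a minimal sketch (hypothetical names, not an AMD tool) that tabulates how many cores a packaged insert card would present for each die generation:

```python
# Illustrative only: packaged-card core counts implied by the text above.
# cores_per_die follows AMD's die generations of the period:
# single-core (2003), dual-core (2005), quad-core (2007, delayed).
DIE_GENERATIONS = {"single-core": 1, "dual-core": 2, "quad-core": 4}

def module_cores(sockets: int, cores_per_die: int) -> int:
    """A packaged CPU insert card simply multiplies sockets by cores per die."""
    return sockets * cores_per_die

for name, cores in DIE_GENERATIONS.items():
    dual = module_cores(2, cores)   # dual-socket insert card
    quad = module_cores(4, cores)   # quad-socket insert card
    print(f"{name} die: dual-socket module = {dual} cores, "
          f"quad-socket module = {quad} cores")

# Output: 1 -> 2/4 cores, 2 -> 4/8 cores, 4 -> 8/16 cores, matching the
# dual/quad/octa/hexadeca-core pairings described above.
```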

Now I don’t know if the four socket design would have proved a commercial success. So the reason for its inclusion here is to show what was technically doable and to demonstrate the early launch time of these parts.

Intel brought the PC market its first dual-core device in April 2005. AMD could have been selling a packaged two-socket version 20 months earlier - at the September 2003 Athlon 64 launch. The quad-core device, using four sockets, could also have launched at the same time, which would have been more than three years before the competition’s November 2006 Intel quad-core launch. The company’s octa-core and hexadeca-core introductions would likewise have been first to market.

With such a huge time to market advantage, conceivably, from a mind share standpoint, Intel should have been blown out of the water, and AMD64 would still have been the talk of the town. So the $64,000 question: why didn’t AMD follow such a strategy?

Now people will say that these multi-core introductions may have been far too early for the market to exploit. So what is also being illustrated here is the availability time, not necessarily the launch time, that AMD could have had.

Looking after the enthusiast
As well as off-the-shelf packaged CPU card solutions, AMD should have offered bare insert cards that the enthusiast would have populated.

In the dual-socket case, if this card had been populated with a single-core processor, it should have been possible to first install another single-core device - to make the package dual-core - and later a dual-core part to enable a triple-core system.

It should be noted that the HyperTransport (HT) interface that AMD processors use comes in two flavors: regular HT and coherent HT (cHT). cHT is required for multiprocessor support, while single processors that don’t support multiprocessing ship with only plain vanilla HT. So in the dual-socket case above, cHT devices would be required.

The BIOS would report the number of cores. So if the module only had a single-core device that is what the BIOS would report. For two single-core devices the package would report a dual-core device. For a single and dual-core device, a triple-core device would be reported and so forth.

Buyers that had invested in, for example, a socket 940 single-core processor shouldn’t have had to discard it when the dual-core version entered the market.

AMD should have engineered and validated the parts to be compatible. If the dual-core part was higher in both frequency and HT bus speed than the single core device that it would partner, the dual-core device should have automatically defaulted to the frequency and HT speed of the slower device.
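
A rough sketch of the reporting and down-clocking behaviour being described (purely illustrative, with made-up names and numbers, not actual firmware):

```python
# Illustrative sketch of the module behaviour described above (not real BIOS code).
from dataclasses import dataclass

@dataclass
class Device:
    cores: int        # 1 = single-core, 2 = dual-core
    freq_mhz: int     # core frequency
    ht_mhz: int       # HyperTransport bus speed

def module_report(devices):
    """The package reports the total number of cores across installed devices."""
    return sum(d.cores for d in devices)

def effective_speeds(devices):
    """A faster device defaults to the frequency and HT speed of the slower one."""
    return min(d.freq_mhz for d in devices), min(d.ht_mhz for d in devices)

installed = [Device(cores=1, freq_mhz=2400, ht_mhz=800),    # original single-core part
             Device(cores=2, freq_mhz=2800, ht_mhz=1000)]   # later dual-core upgrade

print(module_report(installed))      # 3 -> reported as a triple-core package
print(effective_speeds(installed))   # (2400, 800) -> clocks down to the slower device
```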

How many enthusiasts who had this triple-core system would have forked out the money to upgrade to an Intel quad-core environment, especially when they still had the option to upgrade their single-core device to dual-core and enable a quad-core system at much less cost than a brand new platform?

Providing real socket and CPU longevity, just as the company did with the Socket 462 platform, would have given AMD customers a good reason not to jump ship to Intel.

The modular, packaged approach
The packaged CPU insert card is nothing new. HP used this design for its DL585 four-way server. Sun used it for its eight-way X4600 counterpart. ASRock sold a motherboard called the K8 Upgrade 760GX. This Socket 754 based design allowed for Socket 939 upgradability by simply inserting an upgrade card and moving over some jumpers.

AMD should have developed two modular designs: one socket based, the other integrated using soldered-down devices. Both versions would have had DIMM slots installed on the module.

Because an insert card offers generous real estate, a dual-socket solution would have been possible. As an example, the length of the card above could be doubled. Alternatively, a dual-socket solution could have been developed using two single-socket cards back to back, which wouldn’t have increased the length, though the thickness would have doubled.

Either design would have allowed the company to offer a dual-core device using single-core devices, a quad-core device using dual-core processors, and an octa-core device using quad-core chips. As discussed earlier, triple-core offerings should have been a possibility as well.

It should have been possible to link two full length cards back to back, which would have doubled the number of devices. So using single, dual or quad-core devices, the four-socket module would have had a respective tally of four, eight or sixteen cores.


Making it all fit
A full length card would have been pretty long and heavy, and two of these back to back in terms of space (not use) would have taken up at least four PCI card positions. So a form factor rethink would have been in order.

The full length card would have had two connector positions per PCB to spread the module weight. Two of these cards back to back would have had four connector interfaces, which would not only provide power, but would be the HyperTransport interface as well.

If the connectivity of a four-socket design had necessitated the use of a motherboard with more than four layers, the card-to-card CPU connections could have been linked directly instead.

Because of length and thickness requirements, AMD could have gone down the road that Apple took with its Power Mac G5. This would have provided an isolated cooling zone for the CPU module, which would have been cooled by fans. AMD could have given that design the full Apple treatment by liquid cooling the CPUs as well.

AMD FireStream speeds up GPU-based processing

AMD is updating its FireStream line of processor cards with a new model: the FireStream 9250.
The FireStream range uses the same cores as the ATI Radeon consumer cards, but is intended for applications that use the GPU for mathematical calculations rather than graphical rendering. AMD claims that offloading certain operations onto the GPU can deliver a 2,000% speed increase over a mainstream CPU.

The major target markets are educational, medical and financial institutions that write their own applications, and the card is supplied with a full SDK.

The new card uses the same RV770 core found in the company's new Radeon HD 4000 range, and is architecturally very similar to the Radeon HD 4850. The major difference is the FireStream's 1GB of GDDR3, twice the HD 4850's allocation.

Otherwise, the 9250 shares the 4850's single-slot design, and promises the same sub-150W power consumption while delivering a teraflop of processing power in single-precision calculations (falling to around 200 gigaflops for double precision).

The FireStream 9250 even includes the 4850's 40 texture units - a feature which Patricia Harrell, Director of AMD Stream Computing, admitted at the launch event was "unlikely to be used" by most customers.

But one huge deviation from the ATI consumer range is the price. AMD's suggested retail price of $999 for the FireStream 9250 may be half the price of its predecessor, the RV670-based FireStream 9170, but it's still a huge premium for a card that is, on paper, almost identical to the Radeon HD 4850.

Part of that pays for a higher level of customer support, as you'd expect from a card that's intended for use with bespoke applications. But AMD assures us that FireStream cards are also manufactured to higher engineering standards than its Radeon series, enabling the cards to run "flat out, 24/7, with a three-year warranty."

AMD launches hotter dual-slotter

AMD has rolled out the Radeon HD 4870. The chip maker touted the GPU's ability to churn through more than a trillion floating-point calculations each second. It also heralded the part as the first to be connected to GDDR5 memory.

The 4870 accompanies the 4850 launched last week - after some board makers, ahem, let details slip out early. Fabbed at 55nm, the GPU contains 956m transistors and operates on a PCI Express 2.0 bus.

That GDDR5 memory - 512MB of it - connects over a 256-bit bus at an effective data rate of 3.6Gb/s per pin. The 4870's core is clocked at 750MHz. Unlike the 4850, the new GPU sits on a double-slot board and consumes up to 160W of system power.
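
Assuming the 3.6Gb/s figure is the per-pin data rate of the GDDR5 (which is how such memory is usually specified), the aggregate bandwidth works out as follows - a back-of-the-envelope check, not an AMD-published number:

```python
# Rough aggregate bandwidth implied by the figures quoted above.
per_pin_gbps = 3.6        # effective GDDR5 data rate per pin (Gb/s)
bus_width_bits = 256      # memory interface width

total_gbps = per_pin_gbps * bus_width_bits   # 921.6 Gb/s across the bus
total_gbytes_per_s = total_gbps / 8          # 115.2 GB/s aggregate bandwidth
print(total_gbytes_per_s)
```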

That's 25W more than the old-gen Radeon HD 3870, AMD admitted, but it claimed the new part yields more than double the performance offered by its predecessor - when measured using 3DMark Vantage, at least. AMD claimed the two chips scored 1626 and 3559 points in the benchmark, respectively.

So for an 18.5 per cent increase in power consumption, users get a 118.9 per cent gain in performance. Result: almost double the performance-per-Watt.
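
Those percentages check out against the quoted numbers; a quick calculation (using the 135W figure implied for the HD 3870):

```python
# Verifying the percentages above from the quoted figures.
old_power, new_power = 135, 160        # HD 3870 vs HD 4870 board power (W)
old_score, new_score = 1626, 3559      # 3DMark Vantage scores quoted by AMD

power_increase = (new_power - old_power) / old_power * 100              # ~18.5%
perf_increase = (new_score - old_score) / old_score * 100               # ~118.9%
perf_per_watt_gain = (new_score / new_power) / (old_score / old_power)  # ~1.85x

print(f"{power_increase:.1f}% more power, {perf_increase:.1f}% more performance, "
      f"{perf_per_watt_gain:.2f}x performance-per-watt")
```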

Expect versions of the board from all the usual board suppliers and priced at around $299. The 4850 retails for around $199.

Trio of Nehalem heads for Q4

Nehalem-based quad-core processors are due to be launched by Intel towards the end of Q4, according to the rumour mill next to the water cooler of Taiwanese motherboard makers.

The temporarily nicknamed XE, P1 and MS3 target Chipzilla’s new LGA1366 socket; all have a 130W TDP and 8MB of L3 cache, will support simultaneous multi-threading (SMT), and boast core frequencies of 3.2GHz, 2.93GHz and 2.66GHz, respectively, if DodgyTimes is to be believed.

To back up its new processors, Intel will also purportedly let loose its X58 and ICH10 chipset combination around the same time, which it’s thought could provide a performance boost of between 15 and 30 per cent.

Intel’s X58 chipset would apparently sport the company’s innovative QuickPath Interconnect feature which allows processors to connect to another component or another chip on the motherboard.

PCI Express x8 slots would also be part and parcel of the new platform, along with support for AMD's Quad CrossFireX. Still no word on whether Chipzilla will be allowed to use Nvidia's SLI technology, though. µ

Intel preps gaming-oriented chipset for 'Nehalem'

Intel looks set to follow the release of its 'Bloomfield' 45nm processors - all based on the 'Nehalem' architecture - with a gamer-oriented chipset, the X58.

The chipset's specs aren't known, but it's not hard to guess. The northbridge will be 'Tylersburg', the chip that links to the host processor over Intel's new HyperTransport-like QuickPath Interconnect (QPI) bus.

Tylersburg provides the system with a PCI Express 2.0 bus - memory is managed by the 1366-pin CPU's own on-board DDR3 memory controller. Expect X58-based boards to offer four x8 PCIe slots and support AMD's CrossFire X multi-GPU technology.

Intel is believed to be keen to support SLI too - whether it does so will hinge on the outcome of negotiations with Nvidia.

In turn, Tylersburg connects to the ICH10 southbridge, which handles the system I/O - all the customary HD audio, USB, Gigabit Ethernet and SATA ports are provided.

The X58 should ship in Q4, alongside Bloomfield, which is expected to debut in three versions clocked from 2.66GHz up to 3.2GHz. All three CPUs contain 8MB of L3 cache shared across all four cores - each core has its own complement of L1 and L2 cache, 256KB of the latter - and support all the usual Intel extension technologies except TXT (Trusted eXecution Technology). They're all said to have a power and thermal envelope of 130W.
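
For reference, the per-chip cache figures quoted above tally as follows (a quick illustration, not an official spec sheet):

```python
# Tallying the cache described above for a four-core Bloomfield part.
cores = 4
l2_per_core_kb = 256                           # private L2 per core
shared_l3_mb = 8                               # L3 shared across all four cores

total_l2_mb = cores * l2_per_core_kb / 1024    # 1MB of private L2 in total
print(total_l2_mb, shared_l3_mb)               # 1.0MB L2 plus 8MB shared L3 per chip
```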

About AMD

Advanced Micro Devices, Inc. (abbreviated AMD; NYSE: AMD) is an American multinational semiconductor company based in Sunnyvale, California, that develops computer processors and related technologies for commercial and consumer markets. Its main products include microprocessors, motherboard chipsets, embedded processors and graphics processors for servers, workstations and personal computers, and processor technologies for handheld devices, digital television, and game consoles.

AMD is the second-largest global supplier of microprocessors based on the x86 architecture after Intel Corporation, and the third-largest supplier of graphics processing units. It also owns 21 percent of Spansion, a supplier of non-volatile flash memory. In 2007, AMD ranked eleventh among semiconductor manufacturers.


Advanced Micro Devices was founded on May 1, 1969, by a group of former executives from Fairchild Semiconductor, including Jerry Sanders, III, Ed Turney, John Carey, Sven Simonsen, Jack Gifford and three members from Gifford's team, Frank Botte, Jim Giles, and Larry Stenger. The company began as a producer of logic chips, then entered the RAM chip business in 1975. That same year, it introduced a reverse-engineered clone of the Intel 8080 microprocessor. During this period, AMD also designed and produced a series of bit-slice processor elements (Am2900, Am29116, Am293xx) which were used in various minicomputer designs.

During this time, AMD attempted to embrace the perceived shift towards RISC with its own AMD 29K processor, and it attempted to diversify into graphics and audio devices as well as EPROM memory. It had some success in the mid-80s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multistandard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. While the AMD 29K survived as an embedded processor and AMD spinoff Spansion continues to make industry-leading flash memory, AMD was not as successful with its other endeavors. AMD decided to switch gears and concentrate solely on Intel-compatible microprocessors and flash memory. This put it in direct competition with Intel in the x86-compatible processor market and in its flash memory secondary markets.

In December 2006 it was reported that AMD, along with its main rival in the graphics industry, Nvidia, had received subpoenas from the Justice Department regarding possible antitrust violations in the graphics card industry, including price fixing.

AMD announced a merger with ATI Technologies on July 24, 2006. AMD paid $4.3 billion in cash and 58 million shares of its stock, for a total of US$5.4 billion. The merger was completed on October 25, 2006, and ATI is now part of AMD.

About INTEL

Intel Corporation (NASDAQ: INTC; SEHK: 4335) is the world's largest semiconductor company and the inventor of the x86 series of microprocessors, the processors found in most personal computers. Founded on July 18, 1968 as Integrated Electronics Corporation and based in Santa Clara, California, USA, Intel also makes motherboard chipsets, network cards and ICs, flash memory, graphics chips, embedded processors, and other devices related to communications and computing. Founded by semiconductor pioneers Robert Noyce and Gordon Moore, and widely associated with the executive leadership and vision of Andrew Grove, Intel combines advanced chip design capability with leading-edge manufacturing capability. Originally known primarily to engineers and technologists, Intel's successful "Intel Inside" advertising campaign of the 1990s made it and its Pentium processor household names.

Intel was an early developer of SRAM and DRAM memory chips, and this represented the majority of its business until the early 1990s. While Intel created the first commercial microprocessor chip in 1971, it was not until the creation of the personal computer (PC) that this became its primary business. During the 1990s, Intel invested heavily in new microprocessor designs and in fostering the rapid growth of the PC industry. During this period Intel became the dominant supplier of microprocessors for PCs, and was known for aggressive and sometimes controversial tactics in defense of its market position, as well as a struggle with Microsoft for control over the direction of the PC industry. The 2007 ranking of the world's 100 most powerful brands published by Millward Brown Optimor showed the company's brand value falling 10 places – from number 15 to number 25.