EMMA HS2 Semester 1 Outline



Computer Hardware Outline - Spring Week #1

Hardware - CPU – Central Processing Unit

CPU Models

Brands – Intel, AMD

Processor Types – Desktop, Server, Mobile

Series (Desktop)

AMD - Sempron, Athlon 64, Athlon 64 FX, Athlon 64 X2-X3-X4, Phenom I-II

Intel - Celeron, Pentium 4, Core 2 Duo, Core 2 Quad, Core i3, Core i5, Core i7

CPU Socket Type



AMD -754, 939, 940, AM2, AM2+, AM3

Intel -478, 604, 771, 775, 1156, 1366

Technical Specifications

Single-Core, Dual-Core, Quad-Core, Hexa-Core

Operating Frequency – Almost Meaningless Now – Must review CPU Charts for Performance

Cache – L1: primary; smallest, fastest memory on the PC, built directly into the CPU itself

– L2: secondary; larger and slower, external to the CPU unless included inside the CPU

– L3: exists when L2 is included in the CPU; slowest cache, external

Manufacturing Tech – Size and spacing of the processor's transistors measured in nanometers

CPU Voltage

HyperTransport & QuickPath Interconnect – Replace the FSB; QPI is rated in GigaTransfers per second (GT/s)

HyperThreading



Virtualization Technology - Videos





List of Intel microprocessors from Wikipedia



CPU Charts



pricing and specification information

Compare price to performance

Online documents – CPU Cache & Cache L1, L2, L3

Processor Glossary

CPU Virtualization

HyperTransport & HyperThreading

QuickPath Interconnect

i3 vs i5 vs i7 Microprocessors

Homework Online- Newegg Wish List – CPU Selection

CPU Online Quiz

CPU Cache

Level 1 (Primary) Cache

Level 1 or primary cache is the fastest memory on the PC. It is, in fact, built directly into the processor itself. This cache is very small, generally from 8 KB to 64 KB, but it is extremely fast; it runs at the same speed as the processor. If the processor requests information and can find it in the level 1 cache, that is the best case, because the information is there immediately and the system does not have to wait.

Note: Level 1 cache is also sometimes called "internal" cache since it resides within the processor.

Level 2 (Secondary) Cache

The level 2 cache is a secondary cache to the level 1 cache, and is larger and slightly slower. It is used to catch recent accesses that are not caught by the level 1 cache, and is usually 64 KB to 2 MB in size. Level 2 cache is usually found either on the motherboard or on a daughterboard that inserts into the motherboard. Pentium Pro processors actually have the level 2 cache in the same package as the processor itself (though not on the same die as the processor and level 1 cache), which means it runs much faster than level 2 cache that is separate and resides on the motherboard. Pentium II processors are in the middle; their cache runs at half the speed of the CPU.

Note: Level 2 cache is also sometimes called "external" cache since it resides outside the processor. (Even on Pentium Pros... it is on a separate chip in the same package as the processor.)

Level 3 cache (L3 cache)

Some microprocessor manufacturers now offer central processing units (CPUs) with both level 1 (L1) and level 2 (L2) cache memory, located on the surface of the chip or within its single-edge cartridge. When this is the case, the cache memory that resides outside the processor and on the motherboard (which is referred to as L2 cache in some cases) is called level 3 (L3) cache.

Disk Cache

A disk cache is a portion of system memory used to cache reads and writes to the hard disk. In some ways this is the most important type of cache on the PC, because the greatest differential in speed between the layers mentioned here is between the system RAM and the hard disk. While the system RAM is slightly slower than the level 1 or level 2 cache, the hard disk is much slower than the system RAM.

Unlike the level 1 and level 2 cache memory, which are entirely devoted to caching, system RAM is used partially for caching but of course for other purposes as well. Disk caches are usually implemented using software (like DOS's SmartDrive).
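As a loose software analogy to a disk cache, the sketch below keeps recently read file blocks in a dictionary in RAM so a repeated read skips the disk. The function, block size, and cache policy here are simplifications chosen for illustration; real disk caches (such as the operating system's page cache) work at the block level in the kernel and also buffer writes.

# Loose software analogy to a disk cache: keep recently read blocks in RAM
# so repeated reads avoid touching the (much slower) disk.
BLOCK_SIZE = 4096
_block_cache = {}   # (path, block_number) -> bytes

def read_block(path, block_number):
    key = (path, block_number)
    if key in _block_cache:          # cache hit: served from RAM
        return _block_cache[key]
    with open(path, "rb") as f:      # cache miss: go to disk
        f.seek(block_number * BLOCK_SIZE)
        data = f.read(BLOCK_SIZE)
    _block_cache[key] = data
    return data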

Cache Levels 1, 2, and 3 – by Patrick Schmid and Achim Roos, October 6, 2009

Every modern processor comes with a dedicated cache that holds processor instructions and data meant for almost immediate use. This is referred to as the first-level cache, or L1; it first appeared on Intel's 486DX processor and is still an integral part of microprocessors today. More recently, AMD processors standardized on 64KB of L1 per core, while Intel processors use 32KB each of dedicated data and instruction L1 cache.

The second level cache (L2) has been available on all processors since the Pentium III, although the first in-package implementation arrived with the Pentium Pro (though not on the same die). Today's processors offer up to 6MB of L2 cache on-die. This is the amount you'll find being shared between the two cores on Intel's Core 2 Duo, for example. Typical L2 cache configurations usually offer 512KB or 1MB of cache per core. Processors with less L2 cache are often found in lower-end products. Here is an overview of early L2 cache configurations:

Pentium Pro had L2 cache on the processor. The following Pentium III and Athlon generation implemented L2 cache through surface-mounted SRAM chips common at that time (1998, 1999).

The introduction of 180nm manufacturing processes allowed manufacturers to finally integrate L2 caches within the processor die.

The first dual-core processors simply utilized existing designs and duplicated them. AMD did this on one die, adding the memory controller and a crossbar switch, while Intel simply placed two single-core dies into one processor package to create its first dual-core.

The first cache that was shared between two cores was the Core 2 Duo's L2. AMD labored away and created its Phenom quad-core from scratch, while Intel decided once again to pair two dies—this time two Core 2 dual-cores—in an effort to create economical quad-cores.

Third level cache has existed since the early days of the Alpha 21164 (96KB, released in 1995) and IBM's POWER4 (256KB, 2001). However, it wasn't until the advent of Intel's Itanium 2, the Pentium 4 Extreme (Gallatin, both in 2003), and the Xeon MP (2006) that L3 caches were used on x86 and related architectures.

First implementations represented just an additional level, while recent architectures provide the L3 cache as a large, shared data buffer for multi-core processors. Their high associativity underlines this: it's preferable to search a little longer inside the cache memory than to have several cores trigger slow memory accesses. AMD was first to introduce L3 cache on a desktop product, namely the Phenom family. The 65nm Phenom X4 offered 2MB of shared L3 cache, while the current 45nm Phenom II X4 comes with 6MB of shared L3. Intel's Core i7 and i5 both feature 8MB of L3 cache.

The latest quad-core processors come with dedicated L1 and L2 caches for each core and a larger, shared L3 cache available to all cores. This shared L3 also gives the cores a fast way to exchange data they might be working on in parallel.

It makes sense to equip multi-core processors with a dedicated memory utilized jointly by all available cores. In this role, a fast third-level cache (L3) can accelerate access to frequently needed data, so that the cores avoid falling back to the slower main memory (RAM) whenever possible.

That’s the theory, at least. AMD’s recent launch of the Athlon II X4, which is fundamentally a Phenom II X4 without the L3, implies that the tertiary cache may not always be necessary. We decided to do an apples to apples comparison using both options and find out.

How Cache Works - The principle of caches is rather simple: they buffer data as close as possible to the processing core(s) in order to avoid making the CPU fetch that data from more distant, slower memory. Today's desktop platform cache hierarchies consist of three cache levels before reaching system memory. The second and especially the third level aren't just for data buffering; their purpose is also to prevent choking the CPU bus with unnecessary data exchange traffic between cores.
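To make the hit/miss idea concrete, here is a small, purely hypothetical sketch of a direct-mapped cache in Python. Real CPU caches are implemented in hardware with cache lines, tags, and set associativity; this toy model only illustrates why a hit is cheap and a miss forces a trip to the next, slower level.

# Toy direct-mapped cache: each memory address maps to exactly one slot.
# Purely illustrative; real caches track tags per cache line in hardware.

class DirectMappedCache:
    def __init__(self, num_slots, backing_memory):
        self.num_slots = num_slots
        self.slots = {}               # slot index -> (address, value)
        self.memory = backing_memory  # the slower level below this cache
        self.hits = 0
        self.misses = 0

    def read(self, address):
        slot = address % self.num_slots
        cached = self.slots.get(slot)
        if cached and cached[0] == address:
            self.hits += 1            # best case: data already in the cache
            return cached[1]
        self.misses += 1              # miss: fetch from the slower level
        value = self.memory[address]
        self.slots[slot] = (address, value)
        return value

ram = {addr: addr * 2 for addr in range(64)}   # pretend main memory
cache = DirectMappedCache(num_slots=8, backing_memory=ram)

for addr in [0, 1, 2, 0, 1, 2, 40, 0]:         # repeated addresses hit
    cache.read(addr)
print(f"hits={cache.hits}, misses={cache.misses}")

Running it shows three hits for the repeated addresses and a miss when address 40 evicts address 0 from its slot, which is exactly the behavior the paragraph above describes.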

Processor Glossary Definitions

Architecture
The size and spacing of the processor's transistors (silicon etchings), which partially determine the switching speed. The diameter of transistors is measured in microns (one micron is one-millionth of a meter) or, on newer processes, in nanometers (one-billionth of a meter). The 90 nm process combines higher-performance, lower-power transistors, strained silicon, high-speed copper interconnects, and a new low-k dielectric material.

Chipset
The motherboard chipset consists of a north bridge, or Memory Controller Hub (MCH), which is responsible for controlling communication between system memory, the processor, AGP, and the south bridge, or I/O Controller Hub (ICH). The ICH controls communication between PCI devices, the system management bus, ATA devices, AC'97 audio, USB, IEEE 1394 (FireWire), and the LPC controller. [These controllers are soldered onto the motherboard and cannot be changed or upgraded.]

Clock Speed
The speed at which the processor executes instructions. Every processor contains an internal clock that regulates the rate at which instructions are executed. It is expressed in Megahertz (MHz), which is 1 million cycles per second, or Gigahertz (GHz), which is 1 billion cycles per second.

Front Side Bus Speed
The speed of the bus that connects the processor to main memory (RAM). As processors have become faster and faster, the system bus has become one of the chief bottlenecks in modern PCs. Typical bus speeds are 400 MHz, 533 MHz, 667 MHz, and 800 MHz.

L2 Cache
The size of 2nd-level cache. L2 cache is ultra-fast memory that buffers information being transferred between the processor and the slower RAM in an attempt to speed these types of transfers.

L3 Cache
The size of 3rd-level cache, typically larger than L2. L3 cache is ultra-fast memory that buffers information being transferred between the processor and the slower RAM in an attempt to speed these types of transfers. Integrated level 3 cache provides a faster path to large data sets stored in cache on the processor. This results in reduced average memory latency and increased throughput for larger high-end desktop workloads.

Memory Type
Random Access Memory (RAM) is fast but temporary data storage space. Each chipset supports one type of memory: SDR SDRAM, DDR SDRAM, or RDRAM. SDR (Single Data Rate) SDRAM and RDRAM (Rambus) are older memory technologies that are no longer supported by current Intel chipsets. DDR (Double Data Rate) SDRAM performs two transfers for every one transfer with SDR SDRAM. Dual Channel DDR SDRAM transfers data four times for every one transfer with SDR SDRAM.

Package
The physical packaging or form factor (size, shape, number and layout of the pins or contacts) in which the processor is manufactured. There are many different package types for Intel® processors. See the Processor Package Type Guide for photos and details.

Pin Count
When processors are manufactured using pin grid array (PGA) packaging, the back side of the processor has protruding pins. The number of pins on the processor, along with the layout of the pins, is a gating factor for which processors a particular motherboard can support. The socket that is soldered onto a motherboard cannot be changed, so only pin-compatible processors will be supported.

Slot/Socket Type
A motherboard is designed for a certain range of processors. One of the determining factors of processor compatibility is the slot or socket connector soldered onto the board. 242-contact and 330-contact slot connectors were used for a short time to allow L2 cache to be packaged close to the processor die. Processor manufacturing advancements now allow L2 cache to be manufactured on the same die as the processor, requiring a smaller form-factor processor package. PGA (pin grid array) sockets are more common, flexible, and compact, but have many variations in pin count and pin layout.

sSpec Number
Also known as the specification number. A five-character string (SL36W, XL2XL, etc.) that is printed on the processor and used to identify it. By knowing the processor's sSpec Number, you can find out the processor's core speed, cache size and speed, core voltage, maximum operating temperature, and so on.

What Is CPU Virtualization?



CPU virtualization involves a single CPU acting as if it were two separate CPUs. In effect, this is like running two separate computers on a single physical machine. Perhaps the most common reason for doing this is to run two different operating systems on one machine.

The CPU, or central processing unit, is arguably the most important component of the computer. It is the part of the computer which physically carries out the instructions of the applications which run on the computer. The CPU is often known simply as a chip or microchip.

The way in which the CPU interacts with applications is determined by the computer's operating system. The best known operating systems are Microsoft Windows®, Mac OS®, and various open-source systems under the Linux banner. In principle, a CPU can only run one operating system at a time. It is possible to install more than one system on a computer's hard drive, but normally only one can be running at a time.

The aim of CPU virtualization is to make a CPU run in the same way that two separate CPUs would run. A very simplified explanation of how this is done is that virtualization software is set up in a way that it, and it alone, communicates directly with the CPU. Everything else which happens on the computer passes through the software. The software then splits its communications with the rest of the computer as if it were connected to two different CPUs.

One use of CPU virtualization is to allow two different operating systems to run at once. As an example, an Apple computer could use virtualization to run a version of Windows® as well, allowing the user to run Windows®-only applications; similarly, a Linux-based computer could run Windows® through virtualization. It's also possible to run Mac OS® and Linux at the same time.

Another benefit of virtualization is to allow a single computer to be used by multiple people at once. This would work by one machine with a CPU running virtualization software, and the machine then connecting to multiple "desks," each with a keyboard, mouse and monitor. Each user would then be running their own copy of the operating system through the same CPU. This set-up is particularly popular in locations such as schools in developing markets where budgets are tight. It works best where the users are mainly running applications with relatively low processing demands such as web browsing and word processing.

CPU virtualization should not be confused with multitasking or hyperthreading. Multitasking is simply the act of running more than one application at a time. Every modern operating system allows this to be done on a single CPU, though technically only one application is dealt with at any particular moment. Hyperthreading is where compatible CPUs can run specially written applications in a way that carries out two actions at the same time.
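As a quick, hedged illustration of checking whether a CPU advertises hardware virtualization support, the sketch below looks for the "vmx" (Intel VT-x) or "svm" (AMD-V) flag in /proc/cpuinfo. This only works on Linux; the function name and structure are just an example, not part of any standard tool.

# Minimal sketch (Linux-only): check whether the CPU advertises hardware
# virtualization support by looking for the "vmx" (Intel VT-x) or "svm"
# (AMD-V) flags in /proc/cpuinfo.

def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        text = f.read()
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {f for f in ("vmx", "svm") if f in flags}

if __name__ == "__main__":
    found = virtualization_flags()
    if found:
        print("Hardware virtualization flags present:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flag found (or virtualization is disabled in the BIOS).")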

AMD HyperTransport technology

By AMD definition "the HyperTransport technology I/O link is a narrow, high speed, lower power I/O bus that has been designed to meet the requirements of the embedded markets, the desktop, workstation, and server markets, and networking and communication markets."

To de-PR babble this statement, HyperTransport technology simply means a faster connection that is able to transfer more data between two chips. This does not mean that the chip itself is faster. It means that the capability exists via the HyperTransport pathway for one chip to "talk" to another chip or device at a faster speed and with greater data throughput.

Think of a HyperTransport I/O link as a highway between two cities, with the cars being data. If there are a lot of cars on a two-lane highway, there are going to be traffic jams and possibly a few fender benders and scrapes. The HyperTransport bus makes the highway wider and faster, allowing for better traffic flow. This does not mean the cars are any faster; that is up to the car builder, but the road can accommodate more cars, including cars with bigger engines and the ability to carry more.

This highway or bus is an internal connection. On the motherboard level, the HyperTransport bus connects all parts of the motherboard, such as the PCI slots, AGP slots, and USB ports, to the CPU and memory, and also provides the connection between the CPU and memory itself (although it is a bit more complicated than this).

So what...will it be faster?

The simple answer is yes, but how much faster depends on how HyperTransport technology is implemented. Keep in mind the old saying, "you are only as good as your weakest link." HyperTransport is a technology that can be incorporated into any particular component or device in a PC. It's like a tune-up for the car engine. If all vehicles had the same tune-up, then they would all run faster, have more horsepower, or at least have greater fuel efficiency, each in their own way. HyperTransport technology raises the performance bar in two ways.

1) Existing performance increases.

If a person were to attach existing components, such as the video card, processor, RAM, and hard drive, to a HyperTransport technology-based motherboard, the components themselves would not be faster, but they would be able to talk to each other at a faster rate and with reduced latency, just a few of the benefits the bus provides. There would be a performance increase.

2) Future performance increases

The next step is to build all the components integrated with HyperTransport technology allowing for the individual components themselves to be faster, and then provide a pathway between them that is faster and has the ability to handle more data. HyperTransport is a technology that provides a new link between devices such as between an integrated video chip, the I/O hub (South bridge), 64-bit connections such as PCI-X and PCI-X 2.0, memory and the CPU. HyperTransport technology can be applied to nearly every pathway that communicates data between two points. Therefore if HyperTransport technology is applied to everything inside a desktop PC, then the performance bar is raised even more.

Intel Hyper-Threading

Hyper-Threading technology is a technique which enables a single CPU to act like multiple CPUs.

A CPU is made up of many smaller components. At any given time, one of these components might be busy, while the other components are waiting to be utilized.

Hyper-Threading enables different parts of the CPU to work on different tasks concurrently. In this way, a CPU with Hyper-Threading appears to be more than one CPU.

A CPU with Hyper-Threading has two sets of the circuits which keep track of the state of the CPU. This includes most of the registers and the instruction pointer. These circuits do not accomplish the actual work of the CPU; they are the temporary storage facilities where the CPU keeps track of what it is currently working on.

The vast majority of the CPU remains unchanged. The portions of the CPU which do computational work are not replicated, nor are the onboard L1 and L2 caches.

Hyper-Threading duplicates about 5% of the circuits of the CPU. Depending upon the software applications in use, Hyper-Threading can result in a performance increase of up to six times that amount.
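One simple way to see Hyper-Threading from software is to compare logical and physical core counts. The sketch below assumes the third-party psutil package is installed; on a chip with Hyper-Threading (or another form of SMT) enabled, the logical count is typically twice the physical count.

# Minimal sketch: compare logical vs. physical core counts.
# Assumes the third-party psutil package is installed.
import os
import psutil

logical = os.cpu_count()                      # logical processors (hardware threads)
physical = psutil.cpu_count(logical=False)    # physical cores

print(f"Physical cores : {physical}")
print(f"Logical cores  : {logical}")
if physical and logical and logical > physical:
    print("Hyper-Threading (or SMT) appears to be enabled.")
else:
    print("No SMT detected: one hardware thread per core.")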

Everything You Need to Know About The QuickPath Interconnect (QPI)

Just like HyperTransport, QuickPath Interconnect provides two separate lanes for the communication between the CPU and the chipset, as you can see in Figure 3. This allows the CPU to transmit ("write") and receive ("read") I/O data at the same time (i.e., in parallel). On the traditional architecture, which uses a single external bus for both input and output operations, reads and writes cannot be done at the same time.

Speaking of chipsets, Intel will initially launch single-chip solutions. Since on CPUs with embedded memory controllers the equivalent of the north bridge chip is embedded inside the CPU, the chipset works as the south bridge chip, or "I/O Hub" (simply "IOH" in Intel's lingo).

Figure 3: The QuickPath Interconnect provides separated input and output datapaths.

So, how does the QuickPath Interconnect work?

Each lane transfers 20 bits at a time. Of these 20 bits, 16 are used for data and the remaining 4 are used for a correction code called CRC (Cyclical Redundancy Check), which allows the receiver to check whether the received data arrived intact.

The first version of the QuickPath Interconnect works at a clock rate of 3.2 GHz, transferring two data units per clock cycle (a technique called DDR, Double Data Rate), making the bus work as if it were using a 6.4 GHz clock rate (Intel uses the GT/s unit, meaning gigatransfers per second, to represent this). Since 16 bits are transmitted per transfer, we have a maximum theoretical transfer rate of 12.8 GB/s on each lane (6.4 GHz x 16 bits / 8). You will see some people saying that the QuickPath Interconnect has a maximum theoretical transfer rate of 25.6 GB/s, because they simply multiply the transfer rate by two to cover the two datapaths. We don't agree with this methodology. In brief, it is as if we said that a highway has a speed limit of 130 MPH just because there is a speed limit of 65 MPH in each direction. It makes no sense.
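To make the arithmetic above concrete, here is a small sketch that reproduces the per-direction figures quoted in this article: 3.2 GHz base clock, double data rate, and 16 data bits per transfer for QPI, versus a quad-pumped 400 MHz front side bus with its standard 64-bit data path. The helper function and its name are illustrative only.

# Illustrative sketch of the bandwidth arithmetic used in the article.
# bandwidth (GB/s) = transfers per second (in billions) x data bits per transfer / 8

def bandwidth_gbs(gigatransfers_per_sec, data_bits):
    """Peak theoretical bandwidth in GB/s for one direction of one bus."""
    return gigatransfers_per_sec * data_bits / 8

# QPI: 3.2 GHz clock, two transfers per cycle (DDR) = 6.4 GT/s, 16 data bits
qpi = bandwidth_gbs(3.2 * 2, 16)       # -> 12.8 GB/s per direction

# Fastest FSB of the time: 400 MHz clock, four transfers per cycle
# (quad-pumped) = 1.6 GT/s, with a 64-bit data path
fsb = bandwidth_gbs(0.4 * 4, 64)       # -> 12.8 GB/s shared by reads and writes

print(f"QPI : {qpi:.1f} GB/s in each direction")
print(f"FSB : {fsb:.1f} GB/s total (reads and writes share it)")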

So compared to the front side bus, QuickPath Interconnect transmits fewer bits per clock cycle but works at a far higher clock rate. Currently the fastest front side bus available on Intel processors is 1,600 MHz (actually 400 MHz transferring four data units per clock cycle, so QuickPath Interconnect works with a base clock eight times higher), meaning a maximum theoretical transfer rate of 12.8 GB/s, the same as QuickPath. QPI, however, offers 12.8 GB/s in each direction, while a 1,600 MHz front side bus provides this bandwidth for both read and write operations combined, and both cannot be executed at the same time on the FSB, a limitation not present on QPI. Also, since the front side bus transfers both memory and I/O requests, there is always more data being transferred on this bus compared to QPI, which carries only I/O requests. So QPI is kept "less busy" and thus has more bandwidth available.

QuickPath Interconnect is also faster than HyperTransport. The maximum transfer rate of HyperTransport technology is 10.4 GB/s (which is already slower than QuickPath Interconnect), but current Phenom processors use a lower transfer rate of 7.2 GB/s. So the Intel Core i7 CPU will have an external bus 78% faster than the one used on AMD Phenom processors. Other CPUs from AMD, like the Athlon (formerly known as Athlon 64) and Athlon X2 (formerly known as Athlon 64 X2), use an even lower transfer rate, 4 GB/s; QPI is 220% faster than that.

Going down to the electrical transmission, each bit is transferred using a differential pair, so for each bit two wires are used. QuickPath Interconnect uses a total of 84 wires (including the two lanes), which is roughly half the number of wires used on the front side bus of current Intel CPUs (150 wires). So the third advantage of QuickPath Interconnect over the front side bus is using fewer wires (in case you are wondering, the first advantage is providing separated datapaths for memory and I/O requests, and the second advantage is providing separated datapaths for reads and writes).

QuickPath uses a layered architecture (similar to the architecture used on networks) with four layers: Physical, Link, Routing, and Protocol.

Core i3 vs i5 vs i7: A Summary of Intel's Processors

Written by: M.S. Smith • Edited by: J. F. Amprimoz

Updated Feb 28, 2011

Intel has now released the new Sandy Bridge architecture, and this has resulted in a re-launch of the Core i3, i5 and i7 brands. Thankfully, this means the i3 vs i5 vs i7 battle is no longer such a nightmare.

i3 vs i5 vs i7: A Branding Dream

Intel's previous Core i3, i5 and i7 branding was a supreme pain in the butt. It was difficult to explain because Intel didn't really divide features evenly along the brands. Processors in the same brand didn't even always use the same socket. This made explaining the differences between processors extremely difficult.

Now Intel has introduced Sandy Bridge, the new architecture for its processors. It has also re-launched its products using the same Core i3, i5 and i7 brands, but with new products inserted. To represent this, Intel has moved to a four-digit naming scheme, with the processors being numbered 2100, 2500, and so on.

Thankfully, this re-launch has cleared up the product line significantly. The features available on different processors are now much clearer. Let's take a look at what each brand of Intel processor offers you.

Core i3 Series

Intel's Core i3 processor line has always been a budget option. These processors remain dual-core, unlike the rest of the Core line, which is made up of quad core processors. Intel's Core i3 processors also have many features restricted.

The main feature withheld from the Core i3 processors is Turbo Boost, the dynamic overclocking available on most Intel processors. This, along with the dual-core design, accounts for most of the performance difference between Core i3 processors and the i5 and i7 options.

Core i3 processors also lack Intel's vPro technology, virtualization features, and AES encryption acceleration. These are features unlikely to appeal to the average user anyway, and are instead targeted toward enterprise users. Still, the lack of these features should be kept in mind.

One feature that Core i3 has - and i5 doesn't - is hyper-threading. This is Intel's logic-core duplication technology which allows each physical core to be used as two logic cores. The result of this is that Windows will display a dual-core Core i3 processor as if it were a quad-core.

Finally, Core i3 processors have their integrated graphics processor restricted to a maximum clock speed of 1100 MHz, and all Core i3 processors have the 2000 series IGP, which is restricted to 6 execution cores. This will result in slightly lower IGP performance overall, but the difference is frankly inconsequential in many situations.

Core i5 Series

Intel used to split the Core i5 processor brand into two different lines, one of which was dual-core and one of which was quad-core. This was, needless to say, a bit confusing for buyers.

Thankfully, that behavior has stopped (for now). All Sandy Bridge Core i5 processors are quad-core processors, they all have Turbo Boost, and they all lack Hyper-Threading. Most of the Core i5 processors, besides the K series (explained later), use the same 2000 series IGP with a maximum clock speed of 1100 MHz and six execution cores.

In the i3 vs i5 vs i7 battle, the Core i5 processor is now obviously the mainstream option no matter which product you buy. The only substantial difference between the Core i5 options is the clock speed, which ranges from 2.8 GHz to 3.3 GHz. Obviously, the products with a quicker clock speed are more expensive than those that are slower.

NOTE: As of 2/20/2011, Intel has introduced a dual-core Core i5 called the 2390T. The T appears to be what designates it as a dual-core part. It is the only dual-core Core i5 as of yet, so hopefully Intel has introduced this as some sort of exception, as a return to the confusion of the first-gen Core i5 parts would be disappointing.

Core i7 Series

The Intel Core i7 series has also been cleaned up. In fact, it has perhaps been cleaned up too much, because at the moment Intel is offering only two Sandy Bridge Core i7 processors.

These processors are virtually identical to the Core i5. They have a 100 MHz higher base clock speed, which is inconsequential in most situations. The real feature difference is the addition of hyper-threading on the Core i7, which means that the processor will appear as an 8-core processor in Windows. This improves threaded performance and can result in a substantial boost if you're using a program that is able to take advantage of 8 threads.

Of course, most programs can't take advantage of 8 threads. Those that can are almost always meant for enterprise or advanced video editing applications - 3D rendering programs, photo editing programs, and scientific programs are categories of software frequently designed to use 8 threads. The average user is unlikely to see the full benefit of the hyper-threading feature. In the Core i3 vs i5 vs i7 battle, the i7 has limited appeal.

The IGP on Core i7 processors can also reach a higher maximum clock speed of 1350 MHz. As I've said before, though, the IGP difference is inconsequential in most situations.

The K series processor

Late in the lifespan of Intel's previous Core i branded products, Intel introduced the "K" series. These processors had unlocked multipliers, making them easier to overclock.

Intel has kept this line of products alive with the new Sandy Bridge architecture by introducing a K series Core i5 and i7 processor. As before, these processors have unlocked multipliers. However, they also have a new feature - better integrated graphics processors.

This comes in the form of the 3000 series IGP, which has 12 execution cores instead of 6. The maximum clock speed remains limited by the processor brand - the Core i5 K is limited to 1100 MHz, while the Core i7 K can reach 1350 MHz. The additional execution cores can result in better performance in games, although, to be honest, the IGP isn't remotely cut out for desktop gaming.

Sockets and Chipsets

The sockets and chipsets also used to be a stumbling block for those wanting to build a new system with an Intel Core processor. Different processors from the same brand used different sockets.

That's no longer the case. All of the new Intel processors use the same LGA 1155 socket and are compatible with the new P67 and H67 chipsets. This makes choosing compatible hardware relatively painless. Rumor has it that this state of affairs won't last forever, as Intel likely intends on releasing an even quicker Sandy Bridge variant on a new chipset later this year. For now, however, choosing the right socket and chipset is a breeze.

Buying Advice

Intel's Core i5 processor line remains the one to buy. The quad-core i5 processors are extremely quick and have all of the features that are important, such as Turbo Boost. They're also reasonably priced, with the 2.8 GHz variant starting at just under $180. That's not a bargain, but considering the performance - which is far in excess of Intel's previous Core i5 processors and AMD's quad-core offerings - it's a good value.

Still, the i3 processor should be considered if you're not looking for a performance speed-demon. We reached the point at which a basic processor proved capable of offering adequate day-to-day performance years ago. Tasks such as HD video, basic video transcoding and productivity applications will easily be conquered by the least expensive i3.

Finally, we have the i7. In the i3 vs i5 vs i7 battle, the Core i7 is the hardest to recommend. Hyper-threading is great, but only if you use specific applications that can take advantage of 8 threads. If you don't, there isn't much reason to spend the extra dough.

If it were my money, I'd buy the Core i5-2500K. This $216 processor is easy to overclock, has a base clock speed of 3.3 GHz, and offers four cores. This recommendation may change as new processors are introduced to flesh out the line, but I suspect this processor will become the Core i5-750 of the Sandy Bridge line; a reliable pick that remains the best value even a year and a half down the road.

What's the difference between an Intel Core i3, i5 and i7?

We take a look at Intel's Sandy Bridge family of chips

• David Parkinson (PC World Australia (online))

• — 11 May, 2011 11:07

Intel Core i3, Core i5, and Core i7 CPUs have been around for over a year now, but some buyers still get stumped whenever they attempt to build their own systems and are forced to choose among the three. With the more recent Sandy Bridge architecture now on store shelves, we expect the latest wave of buyers to ask the same kind of questions.

Core i3, Core i5, Core i7 — the difference in a nutshell

If you want a plain and simple answer, then generally speaking, Core i7s are better than Core i5s, which are in turn better than Core i3s. Nope, Core i7 does not have seven cores nor does Core i3 have three cores. The numbers are simply indicative of their relative processing powers.

Their relative levels of processing power are also signified by their Intel Processor Star Ratings, which are based on a collection of criteria involving their number of cores, clockspeed (in GHz), size of cache, as well as some new Intel technologies like Turbo Boost and Hyper-Threading.

Core i3s are rated with three stars, i5s have four stars, and i7s have five. If you’re wondering why the ratings start with three, well they actually don’t. The entry-level Intel CPUs — Celeron and Pentium — get one and two stars respectively.

[pic]

Note: Core processors can be grouped in terms of their target devices, i.e., those for laptops and those for desktops. Each has its own specific characteristics/specs. To avoid confusion, we’ll focus on the desktop variants. Note also that we’ll be focusing on the 2nd Generation (Sandy Bridge) Core CPUs.

Number of cores

The more cores there are, the more tasks (known as threads) can be served at the same time. The lowest number of cores can be found in Core i3 CPUs, which have only two cores. Currently, all Core i3s are dual-core processors.

Currently all Core i5 processors, except for the i5-661, are quad cores in Australia. The Core i5-661 is only a dual-core processor with a clockspeed of 3.33 GHz. Remember that all Core i3s are also dual cores. Furthermore, the i3-560 is also 3.33GHz, yet a lot cheaper. Sounds like it might be a better buy than the i5. What gives?

At this point, I’d like to grab the opportunity to illustrate how a number of factors affect the overall processing power of a CPU and determine whether it should be considered an i3, an i5, or an i7.

Even if the i5-661 normally runs at the same clockspeed as Core i3-560, and even if they all have the same number of cores, the i5-661 benefits from a technology known as Turbo Boost.

Intel Turbo Boost

The Intel Turbo Boost Technology allows a processor to dynamically increase its clockspeed whenever the need arises. The maximum amount that Turbo Boost can raise clockspeed at any given time is dependent on the number of active cores, the estimated current consumption, the estimated power consumption, and the processor temperature.

For the Core i5-661, its maximum allowable processor frequency is 3.6 GHz. Because none of the Core i3 CPUs have Turbo Boost, the i5-661 can outrun them when it needs to. Because all Core i5 processors are equipped with the latest version of this technology — Turbo Boost 2.0 — all of them can outrun any Core i3.
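As a rough illustration of this dynamic clock behaviour, the sketch below samples the CPU frequency reported by the operating system; on a Turbo Boost capable chip, the current value can climb above the base frequency when a load starts. It assumes the third-party psutil package and an OS that exposes frequency information, so treat it as a sketch rather than a measurement tool.

# Minimal sketch: watch the reported CPU frequency change over a few seconds.
# Assumes the third-party psutil package; not all platforms report frequency.
import time
import psutil

def sample_freq(seconds=5):
    for _ in range(seconds):
        freq = psutil.cpu_freq()          # scpufreq(current, min, max) in MHz
        if freq is None:
            print("Frequency information not available on this platform.")
            return
        print(f"current: {freq.current:7.1f} MHz (min {freq.min}, max {freq.max})")
        time.sleep(1)

if __name__ == "__main__":
    sample_freq()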

Cache size

Whenever the CPU finds that it keeps on using the same data over and over, it stores that data in its cache. Cache is just like RAM, only faster — because it’s built into the CPU itself. Both RAM and cache serve as holding areas for frequently used data. Without them, the CPU would have to keep on reading from the hard disk drive, which would take a lot more time.

Basically, RAM minimizes interaction with the hard disk, while cache minimizes interaction with the RAM. Obviously, with a larger cache, more data can be accessed quickly. All Core i3 processors have 3MB of cache. All Core i5s, except again for the 661 (only 4MB), have 6MB of cache. Finally, all Core i7 CPUs have 8MB of cache. This is clearly one reason why an i7 outperforms an i5 — and why an i5 outperforms an i3.

Hyper-Threading

Strictly speaking, only one thread can be served by one core at a time. So if a CPU is a dual core, then supposedly only two threads can be served simultaneously. However, Intel has introduced a technology called Hyper-Threading. This enables a single core to serve multiple threads.

For instance, a Core i3, which is only a dual core, can actually serve two threads per core. In other words, a total of four threads can run simultaneously. Thus, even if Core i5 processors are quad cores, since they don't support Hyper-Threading (again, except the i5-661), the number of threads they can serve at the same time is about equal to that of their Core i3 counterparts.

This is one of the many reasons why Core i7 processors are the creme de la creme. Not only are they quad cores, they also support Hyper-Threading. Thus, a total of eight threads can run on them at the same time. Combine that with 8MB of cache and Intel Turbo Boost Technology, which all of them have, and you’ll see what sets the Core i7 apart from its siblings.

The upshot is that if you do a lot of things at the same time on your PC, then it might be worth forking out a bit more for an i5 or i7. However, if you use your PC to check emails, do some banking, read the news, and download a bit of music, you might be equally served by the cheaper i3.

We regularly hear across the sales counter, “I don’t mind paying for a computer that will last, which CPU should I buy?” The sales tech invariably responds “Well that depends on what you use your computer for.” If it’s the scenario described above, we pretty much tell our customers to save their money and buy an i3 or AMD dual core.
