How Computers Work (EMMA) Orientation



Computer Hardware Outline - Spring Week #5

Desktop Motherboards – SouthBridge

Parallel Communication Devices

Storage Devices – Hard Drives & Optical Drives

PATA (Parallel ATA) 1xATA133, 2 Devices Max (133 MB/s)

Expansion Slots PCI Bus Slots (32 bit) 133 MB/s

Serial Communication Devices

Storage Devices – Hard Drives & Optical Drives

SATA 1.5 Gb/s or 3.0 Gb/s

SATA RAID 0/1/2/3/4/5/6

USB (Universal Serial Bus)

Low Speed (USB 1.0) 1.5 Mb/s

Full Speed (USB 1.1) 12 Mb/s

High Speed (USB 2.0) 480 Mb/s

Super Speed (USB 3.0) 4.8 Gb/s (under development) (fiber optic link)

Firewire (IEEE 1394)

BIOS

Super I/O

Serial/Parallel Ports, Floppy, Keybd/Mouse

Integrated Motherboards

Onboard Video video chipset (mostly on MATX boards)

Onboard LAN 10/100/1000 Mbps

Onboard Audio 6/8 channel

Rear Panel Ports PS/2 x 2, Com, LPT, Video D-Sub/DVI,

USB 2.0, IEEE 1394, Audio, eSATA

OnBoard USB via slot and front panel connectors

Physical Specifications

Form Factor – ATX, MATX

Dimensions

Manufacturer’s Product Page via

Documentation Online – PATA, SATA, RAID, USB, Firewire, Motherboard Onboard Devices

Homework - Motherboard SouthBridge Quiz Online

Newegg Wishlist – Specify Motherboard

AT Attachment

Advanced Technology Attachment (ATA) is a standard interface for connecting storage devices such as hard disks and CD-ROM drives inside personal computers.

Many synonyms and near-synonyms for ATA exist, including abbreviations such as IDE and ATAPI. Also, with the market introduction of Serial ATA in 2003, the original ATA was retroactively renamed Parallel ATA (PATA). In line with the original naming, this article covers only Parallel ATA. Parallel ATA standards allow cable lengths up to only 18 inches (46 centimetres) although cables up to 36 inches (91 cm) can be readily purchased. Because of this length limit, the technology normally appears as an internal computer storage interface. It provides the most common and the least expensive interface for this application.

History

The name of the standard was originally conceived as PC/AT Attachment, as its primary feature was a direct connection to the 16-bit ISA bus then known as the 'AT bus'; the name was shortened to the less specific "AT Attachment" to avoid possible trademark issues.

An early version of the specification, conceived by Western Digital in 1986, was commonly known as Integrated Drive Electronics (IDE) due to the drive controller being contained on the drive itself as opposed to the then-common configuration of a separate controller connected to the computer's motherboard — thus making the interface on the motherboard a host adapter, though many people continue, by habit, to call it a controller.

Enhanced IDE (EIDE) — an extension to the original ATA standard again developed by Western Digital — allowed the support of drives having a storage capacity larger than 504 MiB (528 MB), up to 7.8 GiB (8.4 GB). Although these new names originated in branding convention and not as an official standard, the terms IDE and EIDE often appear as if interchangeable with ATA. This may be attributed to the two technologies being introduced with the same consumable devices — these "new" ATA hard drives.

With the introduction of Serial ATA around 2003, conventional ATA was retroactively renamed to Parallel ATA (P-ATA), referring to the method in which data travels over wires in this interface.

The interface at first worked only with hard disks, but eventually an extended standard came to work with a variety of other devices — generally those using removable media. Principally, these devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive. The extension bears the name AT Attachment Packet Interface (ATAPI), which started as non-ANSI SFF-8020 standard developed by Western Digital and Oak Technologies, but then included in the full standard now known as ATA/ATAPI starting with version 4. Removable media devices other than CD and DVD drives are classified as ARMD (ATAPI Removable Media Device) and can appear as either a super-floppy (non-partitioned media) or a hard drive (partitioned media) to the operating system.

The original ATA specification used a 28-bit addressing mode. This allowed the addressing of 2^28 (268,435,456) sectors (with blocks of 512 bytes each), resulting in a maximum capacity of 137 GB (128 GiB). The standard PC BIOS supported up to 7.88 GiB (8.46 GB), with a maximum of 1024 cylinders, 256 heads and 63 sectors. When the lowest common denominators of the CHS limitations in the standard PC BIOS and the IDE standard were combined, the system as a whole was limited to a mere 504 MiB. BIOS translation and LBA were introduced, removing the need for the CHS structure on the drive itself to match that used by the BIOS and consequently allowing up to 7.88 GiB when accessed through the Int 13h interface. This barrier was overcome with Int 13h extensions, which used a 64-bit linear address and therefore allowed access to the full 137 GB and beyond (although some BIOSes initially had problems handling more than 31.5 GiB due to a bug in implementation).

ATA-6 introduced 48-bit addressing, increasing the limit to 128 PiB (144 petabytes). Some OS environments, including Windows 2000 until Service Pack 3, did not enable 48-bit LBA by default, so the user was required to take extra steps to get the full capacity of a 160 GB drive.
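The addressing limits above follow directly from the bit widths and the 512-byte sector size; a quick worked check:

```python
# Worked check of the ATA addressing limits discussed above.

SECTOR = 512  # bytes per sector

# 28-bit LBA: 2^28 sectors of 512 bytes
lba28 = (2**28) * SECTOR
print(lba28 / 1e9)       # ~137.4 GB (decimal)
print(lba28 / 2**30)     # 128.0 GiB (binary)

# Classic BIOS CHS limit: 1024 cylinders x 256 heads x 63 sectors
bios_chs = 1024 * 256 * 63 * SECTOR
print(bios_chs / 2**30)  # ~7.88 GiB

# Lowest common denominator of BIOS and IDE CHS limits:
# 1024 cylinders x 16 heads x 63 sectors
combined = 1024 * 16 * 63 * SECTOR
print(combined / 2**20)  # 504.0 MiB

# 48-bit LBA (ATA-6)
lba48 = (2**48) * SECTOR
print(lba48 / 1e15)      # ~144.1 PB (decimal)
```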

All these size limitations come about because some part of the system is unable to deal with block addresses above some limit. This problem may manifest itself by the system recognizing no more of a drive than that limiting value, or by the system refusing to boot and hanging on the BIOS screen at the point when drives are initialized. In some cases, a BIOS upgrade for the motherboard will resolve the problem. This problem is also found in older external FireWire disk enclosures, which limit the usable size of a disk to 128 GB. By early 2005, most available enclosures had practically no limit. (Earlier versions of the popular Oxford 911 FireWire chipset had this problem; later Oxford 911 versions and all Oxford 922 chips resolve it.)

Parallel ATA interface

Until the introduction of Serial ATA, 40-pin connectors generally attached drives to a ribbon cable. Each cable has two or three connectors, one of which plugs into an adapter that interfaces with the rest of the computer system. The remaining one or two connectors plug into drives. Parallel ATA cables transfer data 16 bits at a time (it is a common misconception that they transfer 32 bits at a time, perhaps because the 40-conductor ribbon would appear to allow this).

ATA's ribbon cables had 40 wires for most of its history, but an 80-wire version appeared with the introduction of the Ultra DMA/66 (UDMA4) mode. All of the additional wires in the new cable are ground wires, interleaved with the previously defined wires. The interleaved ground wire reduces the effects of capacitive coupling between neighboring signal wires, thereby reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables.

Though the number of wires doubled, the number of connector pins and the pinout remain the same as on 40-conductor cables, and the external appearance of the connectors is identical. Internally, of course, the connectors are different: The connectors for the 80-wire cable connect a larger number of ground wires to a smaller number of ground pins, while the connectors for the 40-wire cable connect ground wires to ground pins one-for-one. 80-wire cables usually come with three differently colored connectors (blue, gray and black), as opposed to the uniformly colored connectors of 40-wire cables (all black). The gray connector has pin 28 (CSEL) not connected, making it the slave position for drives configured as cable select.

Multiple devices on a cable

If two devices attach to a single cable, one is commonly referred to as a master and the other as a slave. The master drive generally appears first when the computer's BIOS and/or operating system enumerates available drives. On old BIOSes (486 era and older) the drives are often referred to by the BIOS as "C" for the master and "D" for the slave following the way DOS would refer to the active primary partitions on each.

If there is a single device on a cable, in most cases it should be configured as master. However, some hard drives have a special setting called single for this configuration (Western Digital, in particular). Also, depending on the hardware and software available, a single drive on a cable can work reliably even though configured as the slave drive (this configuration is most often seen when a CDROM has a channel to itself).

Cable select

A drive setting called cable select was described as optional in ATA-1 and has come into fairly widespread use with ATA-5 and later. A drive set to "cable select" automatically configures itself as master or slave, according to its position on the cable. Cable select is controlled by pin 28. The host adapter grounds this pin; if a device sees that the pin is grounded, it becomes the master device; if it sees that pin 28 is open, the device becomes the slave device.

This setting is usually chosen by placing a jumper on the "cable select" position, usually marked CS, rather than on the "master" or "slave" position.

With the 40-wire cable it was very common to implement cable select by simply cutting the pin 28 wire between the two device connectors. This puts the slave device at the end of the cable, and the master on the "middle" connector. This arrangement eventually was standardized in later versions of the specification. If there is just one device on the cable, this results in an unused "stub" of cable. This is undesirable, both for physical convenience and electrical reasons: The stub causes signal reflections, particularly at higher transfer rates.

When the 80-wire cable was defined for use with ATA/ATAPI-5 and UDMA4, the master device was placed at the end of the 18-inch cable (black connector), the middle slave connector became gray, and the blue connector goes onto the motherboard. So, if there is only one (master) device on the cable, there is no cable "stub" to cause reflections. Also, cable select is now implemented in the slave device connector, usually simply by omitting the contact from the connector body. Both the 40-wire and 80-wire parallel IDE cables share the same 40-socket connector configuration.

Master and slave clarification

Although they are in extremely common use, the terms master and slave do not actually appear in current versions of the ATA specifications. The two devices are correctly referred to as device 0 (master) and device 1 (slave), respectively. It is a common myth that "the master drive arbitrates access to devices on the channel" or that "the controller on the master drive also controls the slave drive." In fact, the drivers in the host operating system perform the necessary arbitration and serialization (as described in the next section), and each drive's controller operates independently. There is therefore no suggestion in the ATA protocols that one device has to ask the other if it can use the channel. Both are really "slaves" to the driver in the host OS.

Two devices on one cable - speed impact

There are many debates about how much a slow device can impact the performance of a faster device on the same cable. There is an effect, but the debate is confused by the blurring of two quite different causes, called here "Slowest Speed" and "One Operation at a Time".

"Slowest speed"

It is a common misconception that, if two devices of different speed capabilities are on the same cable, both devices' data transfers will be constrained to the speed of the slower device.

For all modern ATA host adapters (since the PIIX4 south bridge was introduced in 1997) this is not true, as modern ATA host adapters support independent device timing. This allows each device on the cable to transfer data at its own best speed.

Even with older adapters without independent timing, this effect only impacts the data transfer phase of a read or write operation. This is usually the shortest part of a complete read or write operation (except for burst mode transfers).

"One operation at a time"

This is a much more important effect. It is caused by the omission of both overlapped and queued feature sets from most parallel ATA products. This means that only one device on a cable can perform a read or write operation at one time. Therefore, a fast device on the same cable as a slow device under heavy use will find that nearly every time it is asked to perform a transfer, it has to wait for the slow device to finish its own ponderous transfer.

For example, consider an optical device such as a DVD-ROM, and a hard drive on the same parallel ATA cable. With average seek and rotation speeds for such devices, a read operation to the DVD-ROM will take an average of around 100 milliseconds, while a typical fast parallel ATA hard drive can complete a read or write in less than 10 milliseconds. This means that the hard drive, if unencumbered, could perform more than 100 operations per second (and far more than that if only short head movements are involved). But since the devices are on the same cable, once a "read" command is given to the DVD-ROM, the hard drive will be inaccessible (and idle) for as long as it takes the DVD-ROM to complete its read—seek time included.

Frequent accesses to the DVD-ROM will therefore vastly reduce the maximum throughput available from the hard drive. If the DVD-ROM is kept busy with average-duration requests, and if the host operating system driver sends commands to the two drives in a strict "round robin" fashion, then the hard drive will be limited to about 10 operations per second while the DVD-ROM is in use, even though the burst data transfers to and from the hard drive still happen at the hard drive's usual speed.
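The round-robin figures above can be checked with a back-of-envelope model (the 10 ms and 100 ms figures are the illustrative averages from the example, not measured values):

```python
# Back-of-envelope model of the shared-cable effect: with strict
# round-robin command issue, each hard-drive operation must wait for
# one DVD-ROM operation to complete before it can start.

hdd_op_ms = 10    # typical fast parallel ATA hard drive read/write
dvd_op_ms = 100   # average DVD-ROM read, seek time included

# Hard drive alone, unencumbered:
print(1000 / hdd_op_ms)          # 100.0 ops/s

# Alternating with the DVD-ROM on the same cable:
pair_ms = hdd_op_ms + dvd_op_ms  # one HDD op + one DVD op per cycle
print(1000 / pair_ms)            # ~9.1 hard-drive ops/s
```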

The impact of this on a system's performance depends on the application. For example, when copying data from an optical drive to a hard drive (such as during software installation), this effect probably doesn't matter: Such jobs are necessarily limited by the speed of the optical drive no matter where it is. But if the hard drive in question is also expected to provide good throughput for other tasks at the same time, it probably should not be on the same cable as the optical drive.

Remember that this effect occurs only if the slow drive is actually being accessed. The mere presence of an idle drive will not affect the performance of the other device on the cable (for a modern host adapter which supports independent timing).

Serial ATA

Serial ATA (SATA) is a computer bus primarily designed for transfer of data between a computer and storage devices (like hard disks or optical drives).

The main benefits are thinner cables that let air cooling work more efficiently, faster transfers, ability to remove devices while operating (Hot swapping), and more reliable operation with tighter data integrity checks.

It was designed as a successor to the legacy Advanced Technology Attachment standard (ATA), and is expected to eventually replace the older technology (retroactively renamed Parallel ATA or PATA). Serial ATA adapters and devices communicate over a high-speed serial link.

Features

SATA offers performance as high as 3.0 Gbit/s per device with the current specification. SATA uses only 4 signal lines, allowing for much more compact (and less expensive) cables compared with PATA. It also offers new features such as hot-swapping and native command queuing. There is a special connector (eSATA) specified for external devices, and an optionally implemented provision for clips on internal connectors.

Throughput

SATA 1.5 Gbit/s

First-generation SATA interfaces, also known as SATA/150 or (unofficially) as SATA 1, communicate at a rate of 1.5 gigabits per second (Gbit/s). In actual operation, SATA/150 and PATA/133 are comparable in terms of their theoretical burst-throughput. However, newer SATA devices offer enhancements (such as native command queuing) to SATA's performance in a multitask environment. For comparison, modern desktop hard drives transfer data at a maximum of ~90 MB/s, which is well within the performance capabilities of even the older PATA/133 specification.

During the initial period after SATA/150's finalization, both adapter and drive manufacturers used a "bridge chip" to convert existing designs with the PATA interface to the SATA interface. Bridged drives have a SATA connector, may include either or both kinds of power connectors, and generally perform identically to native drives. They generally lack support for some SATA-specific features (such as NCQ). Bridged products gradually gave way to native SATA products.

SATA 3.0 Gbit/s

Soon after SATA/150's introduction, a number of shortcomings in the original SATA were observed. At the application level, SATA's operational model emulated PATA in that the interface could only handle one pending transaction at a time. SCSI disks have long benefited from the SCSI interface's support for multiple outstanding requests, allowing the drive targets to re-order the requests to optimize response time. Native command queuing (NCQ) adds this capability to SATA. NCQ is an optional feature, and may be used in either SATA 1.5 Gbit/s or SATA 3.0 Gbit/s devices.

First-generation SATA devices were scarcely faster than legacy parallel ATA/133 devices. So a 3 Gbit/s signaling rate was added to the Physical layer (PHY layer), effectively doubling data throughput from 150 MB/s to 300 MB/s. SATA/300's transfer rate is expected to satisfy drive throughput requirements for some time, as the fastest desktop hard disks barely saturate a SATA/150 link. This is why a SATA data cable rated for 1.5 Gbit/s will currently handle second generation, SATA 3.0 Gbit/s sustained and burst data transfers without any loss of performance.

The 3.0 Gbit/s specification has been very widely referred to as “Serial ATA II” (“SATA II”), contrary to the wishes of the Serial ATA standards organization that authored it. The official website notes that SATA II was in fact that organization's name at the time, the SATA 3.0 Gbit/s specification being only one of many that the former SATA II defined, and suggests that “SATA 3.0 Gbit/s” be used instead.

SATA 6.0 Gbit/s

SATA's roadmap includes plans for a 6.0 Gbit/s standard. In current PCs, SATA 3.0 Gbit/s already greatly exceeds the sustainable (non-burst) transfer rate of even the best hard disks. The 6.0 Gbit/s standard is right now useful in combination with port multipliers, which allow multiple drives to be connected to a single Serial ATA port, thus sharing the port's bandwidth with multiple drives.[5] Solid-state drives such as RAM disks may also one day exploit the faster transfer rate.

Backward and forward compatibility

SATA and PATA

At the device level, SATA and PATA devices are completely incompatible—they cannot be interconnected. At the application level, SATA devices are specified to look and act like PATA devices.[7] In early motherboard implementations of SATA, backward compatibility allowed SATA drives to be used as drop-in replacements for PATA drives, even without native (driver-level) support at the operating system level.

The common heritage of the ATA command set has enabled the proliferation of low-cost PATA-to-SATA bridge chips. Bridge chips were widely used on PATA drives (before native SATA drives became available) as well as in standalone 'dongles'. When attached to a PATA drive, a device-side dongle allows the PATA drive to function as a SATA drive. Host-side dongles allow a motherboard PATA port to function as a SATA host port.

Powered enclosures are available for both PATA and SATA drives, which interface to the PC through USB, Firewire or eSATA, with the restrictions noted above. PCI cards with a SATA connector exist that allow SATA drives to connect to legacy systems without SATA connectors.

SATA and SCSI

SCSI currently offers transfer rates higher than SATA, but it is a more complex bus, usually resulting in higher manufacturing cost. Some drive manufacturers offer longer warranties for SCSI devices, however, indicating possibly higher manufacturing quality control of SCSI devices compared to PATA/SATA devices. SCSI buses also allow connection of several drives (7 or 15 devices on each of multiple channels), whereas SATA allows one drive per channel unless a port multiplier is used.

SATA 3.0 Gbit/s offers a maximum bandwidth of 300 MB/s per device compared to SCSI with a maximum of 320 MB/s. Also, SCSI drives provide greater sustained throughput than SATA drives because of disconnect-reconnect and aggregating performance. SATA devices are generally compatible with SAS enclosures and adapters, while SCSI devices cannot be directly connected to a SATA bus.

SCSI hardware is used in enterprises for server purposes. The MTBF of SATA drives is usually about 600,000 hours (however, drives such as the Western Digital Raptor are rated at 1.2 million hours MTBF), while SCSI drives are rated for upwards of 1,500,000 hours. However, independent research on hard-drive reliability has indicated that MTBF is not a reliable estimate of a drive's longevity.

eSATA and external buses

Interface        Speed (Mbit/s)   Max. cable length (m)   Power provided        Devices per channel
eSATA            2400             2                       No                    1 (15 with port multiplier)
SATA 300         2400             1                       No                    1 per line
SATA 150         1200             1                       No                    1 per line
PATA 133         1064             0.46                    No                    2
FireWire 800     786              4.5                     Yes (12-25 V, 15 W)   63
FireWire 400     393              4.5                     Yes (12-25 V, 15 W)   63
USB 2.0          480 (burst)      5                       Yes (5 V, 2.5 W)      127
Ultra-320 SCSI   2560             12                      No                    16
Fibre Channel    4000             2-50000                 No                    ~16777216 (switched fabric)

RAID

In computing, specifically computer storage, a Redundant Array of Independent Drives (or Disks), also known as Redundant Array of Inexpensive Drives (or Disks), (RAID) is an umbrella term for data storage schemes that divide and/or replicate data among multiple hard drives. RAID can be designed to provide increased data reliability or increased I/O performance, or both.

A number of standard schemes have evolved which are referred to as levels. There were five RAID levels originally conceived, but many more variations have evolved, notably several nested levels and many non-standard levels (mostly proprietary).

Overview

RAID combines physical hard disks into a single logical unit either by using special hardware or software. Hardware solutions often are designed to present themselves to the attached system as a single hard drive and the operating system is unaware of the technical workings. Software solutions are typically implemented in the operating system, and again would present the RAID drive as a single drive to applications.

There are three key concepts in RAID: mirroring, the copying of data to more than one disk; striping, the splitting of data across more than one disk; and error correction, where redundant data is stored to allow problems to be detected and possibly fixed (known as fault tolerance). Different RAID levels use one or more of these techniques, depending on the system requirements. The main aims of using RAID are to improve reliability, important for protecting information that is critical to a business, for example a database of customer orders; or where speed is important, for example a system that delivers on-demand TV programs to many viewers.

The configuration affects reliability and performance in different ways. The problem with using more disks is that it is more likely that one will go wrong, but by using error checking the total system can be made more reliable by being able to survive and repair the failure. Basic mirroring can speed up reading data as a system can read different data from both the disks, but it may be slow for writing if it insists that both disks must confirm that the data is correctly written. Striping is often used for performance, where it allows sequences of data to be read off multiple disks at the same time. Error checking typically will slow the system down as data needs to be read from several places and compared. The design of RAID systems is therefore a compromise and understanding the requirements of a system is important. Modern disk arrays typically provide the facility to select the appropriate RAID configuration.

RAID systems can be designed to keep working when there is failure - disks can be hot swapped and data recovered automatically while the system keeps running. Other systems have to be shut down while the data is recovered. RAID is often used in high availability systems, where it is important that the system keeps running as much of the time as possible.

RAID is traditionally used on servers, but can be also used on workstations. The latter is especially true in storage-intensive computers such as those used for video and audio editing.

A quick summary of the most commonly used RAID levels:

• RAID 0: striped set (minimum 2 disks) without parity. Provides improved performance and additional storage but no fault tolerance. Any disk failure destroys the array, which becomes more likely with more disks in the array. The reason a single disk failure destroys the entire array is that when data is written to a RAID 0 array, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the array in parallel, giving this type of arrangement huge bandwidth. When one sector on one of the disks fails, however, the corresponding sector on every other disk is rendered useless because part of the data is now corrupted. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array mean higher bandwidth, but greater risk of data loss.
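The striping described above can be sketched in a few lines (the chunk size and data are illustrative, not from any real implementation):

```python
# Minimal sketch of RAID 0 striping: data is split into fixed-size
# chunks dealt out round-robin across the member disks. Losing any one
# disk loses a piece of every large file, which is why a single
# failure destroys the whole array.

def stripe(data: bytes, disks: int, chunk: int = 4):
    """Distribute `data` round-robin across `disks` buffers."""
    out = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), chunk):
        out[(i // chunk) % disks] += data[i:i + chunk]
    return out

d0, d1 = stripe(b"ABCDEFGHIJKLMNOP", disks=2)
print(bytes(d0))  # b'ABCDIJKL'
print(bytes(d1))  # b'EFGHMNOP'
```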

• RAID 1: mirrored set (minimum 2 disks) without parity. Provides fault tolerance from disk errors and single disk failure. Increased read performance occurs when using a multi-threaded operating system that supports split seeks, very small performance reduction when writing. Array continues to operate so long as at least one drive is functioning.

• RAID 3 and RAID 4: striped set (minimum 3 disks) with dedicated parity. The parity for each stripe is computed as the XOR of the corresponding blocks on the data disks and stored on a single dedicated disk, allowing the contents of any one failed disk to be reconstructed. This mechanism provides improved performance and fault tolerance similar to RAID 5, but with a dedicated parity disk rather than rotated parity stripes. The single parity disk is a bottleneck for writing, since every write requires updating the parity data. One minor benefit is that if the dedicated parity disk fails, operation continues without parity and without a performance penalty.

• RAID 5: striped set (minimum 3 disks) with distributed parity. Distributed parity requires all but one drive to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive.
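How distributed parity masks a failed drive can be shown with XOR directly (the block contents here are illustrative):

```python
# Parity in RAID 4/5 is the XOR of the corresponding blocks on the
# other drives. Because a ^ b ^ b == a, any single missing block can
# be recomputed from the surviving blocks plus the parity block.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0 = b"stripe-0"
d1 = b"stripe-1"
parity = xor_blocks(d0, d1)   # written to the parity drive

# The drive holding d1 fails; rebuild its block from the survivors:
rebuilt = xor_blocks(d0, parity)
print(rebuilt == d1)          # True
```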

• RAID 6: striped set (minimum 4 disks) with dual distributed parity. Provides fault tolerance from two drive failures; the array continues to operate with up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems. As drives grow in size (a single drive may be a terabyte), rebuilds take longer and the chance of encountering an error during a rebuild rises. Single-parity RAID levels are vulnerable to data loss until the failed drive is rebuilt: the larger the drive, the longer the rebuild will take. Dual parity buys time to rebuild the array, since a failed drive can be recreated while the array tolerates the failure of a second drive.

RAID implementations

Hardware and/or software RAID based solutions

The distribution of data across multiple drives can be managed either by dedicated hardware or by software. Additionally, there are hybrid RAIDs that are partially software and hardware-based solutions.

Software RAID

Software implementations are now provided by many operating systems. A software layer sits above the (generally block-based) disk device drivers and provides an abstraction layer between the logical drives (RAID arrays) and physical drives. Most common levels are RAID 0 (striping across multiple drives for increased space and performance) and RAID 1 (mirroring two drives), followed by RAID 1+0, RAID 0+1, and RAID 5 (data striping with parity).

Since the software must run on a host server attached to storage, the processor (as mentioned above) on that host must dedicate processing time to run the RAID software. Like hardware-based RAID, if the server experiences a hardware failure, the attached storage could be inaccessible for a period of time.

Software implementations, especially LVM-like, can allow RAID arrays to be created from partitions rather than entire physical drives. For instance, Novell NetWare allows you to divide an odd number of disks into two partitions per disk, mirror partitions across disks and stripe a volume across the mirrored partitions to emulate a RAID 1E configuration. Using partitions in this way is only really helpful for increasing performance. Especially when drive sizes are unequal, it actually impairs reliability of a system. If, for example, a RAID 5 array is composed of four drives 250 + 250 + 250 + 500 GB, with a 500 GB drive split into two 250 GB partitions, a failure of this drive will remove two partitions from the array, causing all of the data held on it to be lost.

Hardware RAID

A hardware implementation of RAID requires at a minimum a special-purpose RAID controller. On a desktop system, this may be a PCI expansion card, or might be a capability built in to the motherboard. Any drives may be used - IDE/ATA, SATA, SCSI, SSA, Fibre Channel, sometimes even a combination thereof. In a large environment the controller and disks may be placed outside of a physical machine, in a stand alone disk enclosure. The using machine can be directly attached to the enclosure in a traditional way, or connected via SAN. The controller hardware handles the management of the drives, and performs any parity calculations required by the chosen RAID level.

Most hardware implementations provide a read/write cache which, depending on the I/O workload, will improve performance. In most systems write cache may be non-volatile (e.g. battery-protected), so pending writes are not lost on a power failure.

Hardware implementations provide guaranteed performance, add no overhead to the local CPU complex and can support many operating systems, as the controller simply presents a logical disk to the operating system.

Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running.

Universal Serial Bus

Universal Serial Bus (USB) is a serial bus standard to interface devices. A major component in the legacy-free PC, USB was designed to allow peripherals to be connected using a single standardized interface socket, to improve plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include powering low-consumption devices without the need for an external power supply and allowing some devices to be used without requiring individual device drivers to be installed.

USB is intended to help retire all legacy serial and parallel ports, but it suffers from significant protocol overhead. USB can connect computer peripherals such as mice, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras and printers. For many of those devices, USB has become the standard connection method. USB is also used extensively to connect non-networked printers; USB simplifies connecting several printers to one computer. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles. In 2004, there were about 1 billion USB devices in the world.

Overview

Up to 127 devices, including hub devices, may be connected to a single host controller. Modern computers often have several host controllers, allowing a very large number of USB devices to be connected.

In USB terminology, individual devices are referred to as functions, because each physical device may actually host several functions, such as a webcam with a built-in microphone. Functions are linked in series through hubs. The hubs are special-purpose devices that are not themselves considered functions. There is always one hub, known as the root hub, which is attached directly to the host controller.

When a device is first connected, the host reads a mandatory descriptor from the device and loads the device driver it needs. When a function or hub is attached to the host controller through any hub on the bus, it is given a unique 7-bit address on the bus by the host controller, which essentially concludes the process called "enumeration". The host controller then polls the bus for traffic, usually in round-robin fashion, so no function can transfer any data on the bus without an explicit request from the host controller.
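The 7-bit address space explains the 127-device limit mentioned earlier: address 0 is reserved for devices that have not yet been enumerated, leaving 127 usable addresses. A toy model (hypothetical class and descriptors, not a real host-controller driver) makes this concrete:

```python
# Simplified, hypothetical model of USB address assignment during
# enumeration. Addresses are 7-bit (0..127); address 0 is reserved for
# unconfigured devices, so at most 127 devices fit on one host controller.

class HostController:
    def __init__(self):
        self.free = list(range(1, 128))  # 127 assignable addresses
        self.devices = {}

    def enumerate(self, descriptor):
        if not self.free:
            raise RuntimeError("bus full: 127 devices already attached")
        addr = self.free.pop(0)
        self.devices[addr] = descriptor
        return addr

host = HostController()
print(host.enumerate({"class": 0x03, "product": "keyboard"}))     # -> 1
print(host.enumerate({"class": 0x08, "product": "flash drive"}))  # -> 2
```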

Device classes

Devices that attach to the bus can be full-custom devices requiring a full-custom device driver to be used, or may belong to a device class. These classes define an expected behavior in terms of device and interface descriptors so that the same device driver may be used for any device that claims to be a member of a certain class. An operating system is supposed to implement all device classes so as to provide generic drivers for any USB device. Device classes are decided upon by the Device Working Group of the USB Implementers Forum.

Example device classes include:

• 0x01: USB Audio Device class, USB headsets, external sound cards, etc.

• 0x03: USB Human Interface Device class ("HID"), keyboards, mice, etc.

• 0x06: Still Image capture device class (e.g. digital cameras using PTP)

• 0x07: Printer device class

• 0x08: USB Mass Storage Device class used for USB flash drives, memory card readers, digital audio players etc.

• 0x09: USB hubs.

• 0x0B: Smart Card readers.

• 0x0E: USB Video Device class, webcam-like devices, motion image capture devices.

• 0xE0: Wireless controllers, for example Bluetooth dongles.

• 0xFF: Vendor Specific
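The class codes above are what an operating system inspects in a device's descriptor to pick a generic driver. A simple lookup-table sketch (names paraphrased from this outline; the USB-IF class code list is the authoritative reference):

```python
# Lookup table for the USB device class codes listed above.
USB_CLASSES = {
    0x01: "Audio",
    0x03: "Human Interface Device (HID)",
    0x06: "Still Image",
    0x07: "Printer",
    0x08: "Mass Storage",
    0x09: "Hub",
    0x0B: "Smart Card",
    0x0E: "Video",
    0xE0: "Wireless Controller",
    0xFF: "Vendor Specific",
}

def class_name(code):
    # Devices outside the standard classes need a full-custom driver.
    return USB_CLASSES.get(code, "Unknown/full-custom")

print(class_name(0x08))  # Mass Storage
print(class_name(0x42))  # Unknown/full-custom
```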

USB mass-storage

USB implements connections to storage devices using a set of standards called the USB mass storage device class (referred to as MSC or UMS). This was initially intended for traditional magnetic and optical drives, but has been extended to support a wide variety of devices, particularly flash drives, which are replacing floppy disks for data transport. Though some computers are capable of booting off of USB Mass Storage devices, USB is not intended to be a primary bus for a computer's internal storage: buses such as ATA (IDE), Serial ATA (SATA), and SCSI fulfill that role.

However, USB has one important advantage in that it is possible to install and remove devices without opening the computer case, making it useful for external drives. Originally conceived and still used today for optical storage devices (CD-RW drives, DVD drives, etc.), a number of manufacturers offer external portable USB hard drives, or empty enclosures for drives, that offer performance comparable to internal drives. These external drives usually contain a translating device that interfaces a drive of conventional technology (IDE, ATA, SATA, ATAPI, or even SCSI) to a USB port. Functionally, the drive appears to the user just like another internal drive. Other competing standards that allow for external connectivity are eSATA and Firewire.

Human-interface devices (HIDs)

Mice and keyboards are frequently fitted with USB connectors, but because most PC motherboards still retain PS/2 connectors for the keyboard and mouse as of 2007, they are often supplied with a small USB-to-PS/2 adaptor, allowing usage with either USB or PS/2 interface. There is no logic inside these adaptors: they make use of the fact that such HID interfaces are equipped with controllers that are capable of serving both the USB and the PS/2 protocol, and automatically detect which type of port they are plugged in to. Joysticks, keypads, tablets and other human-interface devices are also progressively migrating from MIDI, PC game port, and PS/2 connectors to USB.

USB signaling

USB supports three data rates:

• A Low Speed (1.0) rate of 1.5 Mbit/s (187.5 KB/s) that is mostly used for Human Interface Devices (HID) such as keyboards, mice, and joysticks.

• A Full Speed (1.1) rate of 12 Mbit/s (1.5 MB/s). Full Speed was the fastest rate before the USB 2.0 specification, and many devices fall back to Full Speed. Full Speed devices divide the USB bandwidth between them on a first-come, first-served basis, and it is not uncommon to run out of bandwidth with several isochronous devices. All USB hubs support Full Speed.

• A Hi-Speed (2.0) rate of 480 Mbit/s (60 MB/s).

Experimental data rate:

• A Super-Speed (3.0) rate of 4.8 Gbit/s (600 MB/s). The USB 3.0 specification will be released by Intel and its partners in mid-2008, according to early reports from CNET News. According to Intel, bus speeds will be 10 times faster than USB 2.0 due to the inclusion of a fiber optic link that works alongside traditional copper connectors. Products using the 3.0 specification are likely to arrive in 2009 or 2010.
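The megabyte figures quoted in parentheses above follow directly from the bit rates: divide by 8. A quick sanity check:

```python
# Unit check on the USB rates quoted above: rates are in megabits per
# second; dividing by 8 gives megabytes per second.
def mbit_to_mbyte(mbit_per_s):
    return mbit_per_s / 8

print(mbit_to_mbyte(1.5))   # Low Speed:  0.1875 MB/s (187.5 KB/s)
print(mbit_to_mbyte(12))    # Full Speed: 1.5 MB/s
print(mbit_to_mbyte(480))   # Hi-Speed:   60 MB/s
print(mbit_to_mbyte(4800))  # SuperSpeed: 600 MB/s (raw signaling rate)
```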

USB signals are transmitted over a twisted-pair data cable. The pair uses half-duplex differential signaling to combat the effects of electromagnetic noise on longer lines. Transmitted signal levels are 0.0–0.3 V for low and 2.8–3.6 V for high in Full Speed and Low Speed modes, and ±400 mV in Hi-Speed (HS) mode.

USB uses a special protocol to negotiate the High Speed mode called "chirping". In simplified terms, a device that is HS capable always connects as a FS device first, but after receiving a USB RESET, if the host (or hub) is also HS capable, it returns signals letting the device know that it will operate at High Speed.

Clock tolerance is 480.00 Mbit/s ±500 ppm, 12.000 Mbit/s ±2500 ppm, and 1.50 Mbit/s ±15000 ppm.
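Parts-per-million figures like these are easier to grasp in absolute terms; a tolerance of N ppm on a rate R allows a deviation of R × N / 1,000,000:

```python
# Converting the ppm (parts-per-million) clock tolerances above into
# absolute allowed deviations in bits per second.
def ppm_tolerance(rate_bit_s, ppm):
    return rate_bit_s * ppm / 1_000_000

print(ppm_tolerance(480e6, 500))    # Hi-Speed:   +/- 240000.0 bit/s
print(ppm_tolerance(12e6, 2500))    # Full Speed: +/- 30000.0 bit/s
print(ppm_tolerance(1.5e6, 15000))  # Low Speed:  +/- 22500.0 bit/s
```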

Though Hi-Speed devices are commonly referred to as "USB 2.0" and advertised as "up to 480 Mbit/s", not all USB 2.0 devices are Hi-Speed. The actual throughput currently (2006) attained with real devices is about half of the theoretical maximum (60 MB/s). Most Hi-Speed USB devices operate at much slower speeds, often about 3 MB/s overall, sometimes up to 10–20 MB/s.

Usability

• It is difficult to incorrectly attach a USB connector. Connectors cannot be plugged-in upside down, and it is clear from the appearance and kinesthetic sensation of making a connection when the plug and socket are correctly mated. However, it is not obvious at a glance to the inexperienced user (or to a user without sight of the installation) which way around the connector goes, so it is often necessary to try both ways.

• Only a moderate insertion/removal force is needed (and specified). USB cables and small USB devices are held in place by the gripping force from the receptacle (without the need for the screws, clips, or thumbturns other connectors require). The force needed to make or break a connection is modest, allowing connections to be made in awkward circumstances or by those with motor disabilities.

Compatibility

• Two-way communication is also possible. In general, cables have only plugs, and hosts and devices have only receptacles: hosts having type-A receptacles and devices type-B. Type-A plugs only mate with type-A receptacles, and type-B with type-B. However, an extension to USB called USB On-The-Go allows a single port to act as either a host or a device — chosen by which end of the cable plugs into the socket on the unit. Even after the cable is hooked up and the units are talking, the two units may "swap" ends under program control. This facility targets units such as PDAs where the USB link might connect to a PC's host port as a device in one instance, yet connect as a host itself to a keyboard and mouse device in another instance.

Types of USB connectors


USB (Type A and B) Connectors

Different types of USB plugs: from left to right, micro USB, mini USB, B-type, female A-type, A-type

USB compared with FireWire

USB was originally seen as a complement to FireWire (IEEE 1394), which was designed as a high-speed serial bus which could efficiently interconnect peripherals such as hard disks, audio interfaces, and video equipment. USB originally operated at a far lower data rate and used much simpler hardware, and was suitable for small peripherals such as keyboards and mice.

The most significant technical differences between FireWire and USB include the following:

• USB uses a "speak-when-spoken-to" protocol; peripherals cannot communicate with the host unless the host specifically requests communication. A FireWire device can communicate with any other node at any time, subject to network conditions.

• A USB network relies on a single host at the top of the tree to control the network. In a FireWire network, any capable node can control the network.

These and other differences reflect the differing design goals of the two buses: USB was designed for simplicity and low cost, while FireWire was designed for high performance, particularly in time-sensitive applications such as audio and video. Although similar in theoretical maximum transfer rate, in real-world use, especially for high-bandwidth applications such as external hard drives, FireWire 400 generally has significantly higher throughput than USB 2.0 Hi-Speed. The newer FireWire 800 standard is twice as fast as FireWire 400 and outperforms USB 2.0 Hi-Speed both theoretically and practically.

What is USB 3.0 (aka. SuperSpeed USB)?

USB 3.0 is the next major revision of the ubiquitous Universal Serial Bus, created in 1996 by a consortium of companies led by Intel to dramatically simplify the connection between host computer and peripheral devices. Fast-forwarding to 2009, USB 2.0 has been firmly entrenched as the de facto interface standard in the PC world for years (with about 6 billion devices sold), and yet ever-faster computing hardware and ever-greater bandwidth demands have again brought us to the point where a couple of hundred megabits per second is just not fast enough.

In 2007, Intel demonstrated SuperSpeed USB at the Intel Developer Forum. Version 1.0 of the USB 3.0 (confusing, isn't it?) specification was completed on November 17, 2008. As such, the USB Implementers Forum (USB-IF) has taken over managing the specifications and publishes the relevant technical documents necessary to allow the world of developers and hardware manufacturers to begin to develop products around the USB 3.0 protocol.

In a nutshell, USB 3.0 promises the following:

• Higher transfer rates (up to 4.8 Gbps)

• Increased maximum bus power and increased device current draw to better accommodate power-hungry devices

• New power management features

• Full-duplex data transfers and support for new transfer types

• New connectors and cables for higher speed data transfer...although they are backwards compatible with USB 2.0 devices and computers (more on this later)

Isn't USB 2.0 fast enough?

Well, yes and no. For many applications, USB 2.0 provides sufficient bandwidth for a variety of devices and hubs to be connected to one host computer. However, with today's ever-increasing demands on data transfer from high-definition video content, terabyte storage devices, high-megapixel digital cameras, and multi-gigabyte mobile phones and portable media players, 480 Mbps is not really fast anymore. Furthermore, no USB 2.0 connection ever comes close to the 480 Mbps theoretical maximum throughput; around 320 Mbps is the actual real-world maximum. Similarly, USB 3.0 connections will never achieve 4.8 Gbps, but even 50% of that in practice is almost a 10x improvement over USB 2.0.
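To make that difference tangible, consider how long a Blu-ray-sized transfer takes at the rates discussed above. The rates used here are the article's own estimates (320 Mbps real-world USB 2.0, 50% of USB 3.0's 4.8 Gbps), not measurements:

```python
# Back-of-the-envelope transfer-time sketch using the estimates above.
def transfer_minutes(size_gb, rate_mbit_s):
    size_bits = size_gb * 8e9  # decimal gigabytes -> bits
    return size_bits / (rate_mbit_s * 1e6) / 60

# A 25 GB (single-layer Blu-ray-sized) transfer:
print(round(transfer_minutes(25, 320), 1))   # USB 2.0 real-world: ~10.4 min
print(round(transfer_minutes(25, 2400), 1))  # 50% of USB 3.0 raw:  ~1.4 min
```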

How does USB 3.0 achieve the extra performance?

USB 3.0 achieves the much higher performance by way of a number of technical changes. Perhaps the most obvious change is an additional physical bus that is added in parallel with the existing USB 2.0 bus. This means that where USB 2.0 previously had 4 wires (power, ground, and a pair for differential data), USB 3.0 adds 4 more for two pairs of differential signals (receive and transmit) for a combined total of 8 connections in the connectors and cabling. These extra two pairs were necessary to support the SuperSpeed USB target bandwidth requirements, because the two wire differential signals of USB 2.0 were not enough.

Furthermore, the signaling method, while still host-directed, is now asynchronous instead of polling. USB 3.0 utilizes a bi-directional data interface rather than USB 2.0's half-duplex arrangement, where data can only flow in one direction at a time. Without getting into any more technical mumbo jumbo, this all combines to give a ten-fold increase in theoretical bandwidth, and a welcome improvement noticeable by anyone when SuperSpeed USB products hit the market.

What other improvements does USB 3.0 provide?

The enhancements to SuperSpeed USB are not just for higher data rates, but for improving the interaction between device and host computer. While the core architectural elements are inherited from before, several changes were made to support the dual bus arrangement, and several more are notable for how users can experience the improvement that USB 3.0 makes over USB 2.0:

• More power when needed

o 50% more power is provided for unconfigured or suspended devices (150 mA up from 100 mA), and 80% more power is available for configured devices (900 mA up from 500 mA). This means that more power-hungry devices could be bus powered, and battery powered devices that previously charged using bus power could potentially charge more quickly.

o A new Powered-B receptacle is defined with two extra contacts that enable a device to provide up to 1000 mA to another device, such as a Wireless USB adapter. This eliminates the need for a power supply to accompany the wireless adapter...coming just a bit closer to the ideal of a wireless link without wires (not even for power). In regular wired USB connections to a host or hub, these 2 extra contacts are not used.

• Less power when it's not needed

Power efficiency was a key objective in the move to USB 3.0. Some examples of more efficient use of power are:

o Link level power management, which means either the host computer or the device can initiate a power savings state when idle

o The ability for links to enter progressively lower power management states when the link partners are idle

o Continuous device polling is eliminated

o Broadcast packet transmission through hubs is eliminated

o Device and individual function level suspend capabilities allow devices to remove power from all, or portions of their circuitry not in use

• Streaming for bulk transfers is supported for faster performance

• Isochronous transfers allow devices to enter low-power link states between service intervals

• Devices can communicate new information such as their latency tolerance to the host, which allows better power performance

To paint an accurate picture, not everything in USB 3.0 is a clear improvement. Cable length, for one, is expected to have a significant limitation when used in applications demanding the highest possible throughput. Although maximum cable length is not specified in the USB 3.0 specification, the electrical properties of the cable and signal quality limitations may limit the practical length to around 3 metres when multi-gigabit transfer rates are desired. This length, of course, can be extended through the use of hubs or signal extenders.

Additionally, some SuperSpeed USB hardware, such as hubs, may always be more expensive than their USB 2.0 counterparts. This is because by definition, a SuperSpeed hub contains 2 hubs: one that enumerates as a SuperSpeed hub, and a second one that enumerates as a regular high-speed hub. Until the USB hub silicon becomes an integrated SuperSpeed USB + Hi-Speed USB part, there may always be a significant price difference.

Some unofficial discussion has surfaced on the web with respect to fiber-optic cabling for longer cable length with USB 3.0. The specification makes no mention of optical cabling, so we conclude that this will be defined in a future spec revision, or left to 3rd party companies to implement cable extension solutions for SuperSpeed USB.

Will my existing peripherals still work? How will they co-exist?

The good news is that USB 3.0 has been carefully planned from the start to peacefully co-exist with USB 2.0. First of all, while USB 3.0 specifies new physical connections, and thus new cables, to take advantage of the higher speed capability of the new protocol, the connector itself remains the same rectangular shape, with the four USB 2.0 contacts in the exact same locations as before. Five new connections that carry received and transmitted data independently are present on USB 3.0 cables and only come into contact when mated with a proper SuperSpeed USB connection.

Where are those SuperSpeed USB 3.0 products?

USB 3.0 silicon such as USB host controllers, peripheral chipsets and hubs compliant with the SuperSpeed bus arrived in the latter half of 2009. Since then, a handful of external hard drives, flash drives, storage docks, Blu-ray optical drives, high-end notebooks, and host adapters in both PCI Express and ExpressCard form have begun appearing on retail shelves. Other companies have announced plans to roll out solid-state drives and RAID solutions. DisplayLink has also revealed plans to ship USB 3.0-compliant USB video silicon by Q4 2010.

It is important to note that NEC is the only fab producing xHCI USB 3.0 host silicon as of this writing (March 2010). Until Intel, nVidia and AMD start bundling USB 3.0 as part of their motherboard chipsets, companies interested in equipping their systems with USB 3.0 will have to source the chipsets from NEC.

What is the future for USB 2.0?

For at least the next five years, we do not expect the market for USB 2.0 devices of any type to dwindle. High-bandwidth devices, such as video cameras or storage devices, will likely be the first to migrate to SuperSpeed USB, but cost considerations, which in this industry are mainly driven by demand and volume, will restrict USB 3.0 implementation to higher-end products.

By 2010, computer motherboards should start to come equipped with USB 3.0 ports supplementing USB 2.0 ports. USB 3.0 adapter cards will likely play a large role in driving the installed base of USB 3.0 ports up, but as SuperSpeed-enabled ports become standard on new PCs, device manufacturers will be further motivated to migrate to the new standard.

In time, USB 2.0 may be phased out as was USB 1.1, but for now and the foreseeable future, USB 2.0 isn't going anywhere.

What operating systems support USB 3.0?

At the SuperSpeed Developers Conference in November 2008, Microsoft announced that Windows 7 would have USB 3.0 support, perhaps not on its immediate release, but in a subsequent Service Pack or update. It is not out of the question to think that following a successful release of USB 3.0 support in Windows 7, SuperSpeed support would trickle down to Vista. Microsoft has confirmed this by stating that most of their partners share the opinion that Vista should also support USB 3.0.

SuperSpeed support for Windows XP is unknown at this point. Given that XP is a seven-year-old operating system, the likelihood of this happening is remote, as Microsoft, in our opinion, will have to focus on the biggest bang-for-the-buck applications.

With the open-source community behind it, Linux will most definitely support USB 3.0 once the xHCI specification is made public. The specification is currently available only under non-disclosure agreement as version 0.95 (a draft), and organizations are forbidden to ship code because it might reveal or imply what is in the specification. Once that hurdle is out of the way, the Linux USB stack will have to be updated to add support for USB 3.0 details such as bus speed, power management, and a slew of other significant changes detailed in the USB 3.0 specification.

As is customary, Apple remains silent on the issue of SuperSpeed USB support in MacOS X. Our opinion is that if USB 3.0 realizes the promise of plug and play simplicity like USB 2.0 with dramatically increased speeds, the market for SuperSpeed devices will take off, and Apple will follow the trend. Whether or not this signals a threat to Firewire is not known, but you can be sure that Apple will need to support SuperSpeed if the rest of the industry adopts this interface standard.

Given the iterative nature of any software release, USB 3.0 O/S support will come in stages and phases, where initial support may be buggy, slow, or lacking in some features. Over time, these bugs will be ironed out, but expect some growing pains as systems migrate and the development teams struggle to catch up to the high expectations of the computing community at large. We will get there, but it will take time. Anyone remember how buggy and unstable USB support was in the MacOS in all versions of OS 8 and OS 9 before OS X 10.2 arrived?

What new applications does USB 3.0 enable?

In a nutshell, any high-bandwidth device that works with USB 2.0 will become better if updated with USB 3.0 support. At the moment, devices that tax the throughput of USB 2.0 include:

• External hard drives - capable of more than twice the throughput available from USB 2.0, not to mention bus-powered portable drives that require non-compliant Y-cables to get the current they require for reliable operation

• High resolution webcams, video surveillance cameras

• Video display solutions, such as DisplayLink USB video technology

• Digital video cameras and digital still cameras with USB interface

• Multi-channel audio interfaces

• External media such as Blu-Ray drives

High end flash drives can also push USB 2.0 pretty hard, and oftentimes if multiple devices are connected via hub, throughput will suffer.

USB 3.0 opens up the laneways and provides more headroom for devices to deliver a better overall user experience. Where USB video was barely tolerable previously (both from a maximum resolution, latency, and video compression perspective), it's easy to imagine that with 5-10 times the bandwidth available, USB video solutions should work that much better. Single-link DVI requires almost 2Gbps throughput. Where 480Mbps was limiting, 5Gbps is more than promising.
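The bandwidth claim above can be sanity-checked with simple arithmetic: uncompressed video bandwidth is width × height × refresh rate × bits per pixel. The resolutions below are illustrative examples, not figures from the USB or DVI specifications:

```python
# Rough arithmetic behind the video-bandwidth discussion above:
# uncompressed bandwidth = width x height x refresh x bits-per-pixel.
def video_gbit_s(width, height, hz, bpp=24):
    return width * height * hz * bpp / 1e9

# 1280x1024 @ 60 Hz lands near the ~2 Gbit/s figure quoted for
# single-link DVI; 1080p @ 60 Hz needs roughly 3 Gbit/s uncompressed.
print(round(video_gbit_s(1280, 1024, 60), 2))  # -> 1.89
print(round(video_gbit_s(1920, 1080, 60), 2))  # -> 2.99
```

Either figure dwarfs USB 2.0's 480 Mbit/s but fits comfortably within SuperSpeed's 4.8 Gbit/s, which is why USB video becomes plausible only with USB 3.0.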

With its promised 4.8 Gbps speed, the standard will find its way into some products that previously weren't USB territory, like external RAID storage systems (though there are already plenty of USB-only RAID solutions, e.g. the LaCie HDD Max and WD My Book Mirror, despite being limited by the interface).

How does USB 3.0 compare to competing interfaces (i.e. eSATA, FireWire 3200, ExpressCard 2.0)?

Firewire has long been the "forgotten" other mass market, high-speed interface standard. Previously available in Firewire 400 or 800 flavors, it has gradually fallen in popularity as USB 2.0 has surged. Apple, the inventor of the original IEEE 1394 "Firewire" standard, has repeatedly sent mixed messages with the ditching of Firewire first from iPods, and more recently from the mainstream MacBook laptops (except for the lowest-end MacBook, oddly enough).

In late 2007, the 1394 Trade Association announced Firewire 3200, called "S3200", that builds upon the existing Firewire 800 standard that was released in 2002. Utilizing the very same connectors and cabling that is required for Firewire 800, S3200 is basically a drop-in replacement once the internal system components are updated in devices. To date, S3200 has not gained much traction, even in traditional Firewire markets such as digital video.

Firewire's main claim to fame is that it is a highly efficient peer-to-peer, full-duplex, non-polling data communications protocol with very low overhead. Firewire delivers much higher actual throughput than USB 2.0, and can achieve much closer to its theoretical 800Mbps data rate than USB. Where Firewire 800 can deliver sustained data transfers of around 90MB/s, USB 2.0 hovers more around 40MB/s.

It remains to be seen what impact S3200 will have on the computing landscape.

eSATA, or External SATA, was brought to market in 2004 as a consumer interface targeted directly at an external storage market crowded with USB 2.0 and FireWire solutions. It successfully addressed the interface bottleneck, allowing fast hard drives to realize their full performance potential when located outside a server or PC. eSATA supports a data rate of 3.0 Gbps, which is more than enough for the fastest hard drives, which can transfer about 120 MB/s, easily better than USB 2.0 and significantly better than FireWire 800.

eSATA is not without drawbacks, however. Cable length is limited to a mere 2m, it cannot supply power to devices connected on the eSATA bus, and the connectors are neither small nor terribly suitable for consumer devices where aesthetics are important. Over the last several years, eSATA has steadily eroded both USB and Firewire market share in the data storage space, although its applications are limited, and really not well-suited to the portable device market.

ExpressCard 2.0 was released practically the same day as the USB 3.0 specification (November 2008) and promises to significantly enhance the ExpressCard standard for the increased speed requirements of today's mobile technologies. Closely tied to both the PCI Express and USB 3.0 specifications, ExpressCard 2.0 supports a variety of applications involving high throughput data transfer and streaming. Maintaining backwards compatibility with the original ExpressCard specification, the hot-pluggable interface standard for I/O expansion in smaller form-factor systems will by definition co-exist with the world of USB 3.0 devices.

FireWire


The 6-pin and 4-pin FireWire Connectors

FireWire is Apple Inc.'s brand name for the IEEE 1394 interface (although the 1394 standard also defines a backplane interface). It is also known as i.Link (Sony’s name). It is a personal computer (and digital audio/digital video) serial bus interface standard, offering high-speed communications and isochronous real-time data services. IEEE 1394 has been adopted as the High Definition Audio-Video Network Alliance (HANA) standard connection interface for A/V (audio/visual) component communication and control.

Almost all modern digital camcorders have included this connection since 1995. Many computers intended for home or professional audio/video use have built-in FireWire ports, including all Apple and Sony laptop computers and most Dell and HP models currently produced. It is also widely available on retail motherboards for do-it-yourself PCs, alongside USB. FireWire was used on initial models of Apple's iPod, but later models eliminated FireWire support in favor of USB due to space constraints and for wider compatibility.

FireWire is Apple Inc.'s name for the IEEE 1394 High Speed Serial Bus. It was initiated by Apple and developed by the IEEE P1394 Working Group, largely driven by contributors from Apple. Apple intended FireWire to be a serial replacement for the parallel SCSI (Small Computer System Interface) bus while also providing connectivity for digital audio and video equipment. Sony's implementation of the system is known as i.Link, and uses only the four signal pins, discarding the two pins that provide power to the device in favor of a separate power connector on Sony's i.Link products.

The system is commonly used for connection of data storage devices and DV (digital video) cameras, but is also popular in industrial systems for machine vision and professional audio systems. It is used instead of the more common USB due to its faster effective speed, higher power-distribution capabilities, and because it does not need a computer host. Perhaps more importantly, FireWire makes full use of all SCSI capabilities and, compared to USB 2.0 Hi-Speed, has higher sustained data transfer rates, especially on Apple Mac OS X (with more varied results on Windows, presumably since USB2 is Intel's answer to Firewire on Windows machines), a feature especially important for audio and video editors.

However, the small royalty that Apple Inc. and other patent holders initially demanded from users of FireWire (US$0.25 per end-user system) and the more expensive hardware needed to implement it (US$1–$2) have prevented FireWire from displacing USB in low-end mass-market computer peripherals, where product cost is a major constraint.

Technical specifications

FireWire can connect together up to 63 peripherals. It allows peer-to-peer device communication, such as communication between a scanner and a printer, to take place without using system memory or the CPU. FireWire also supports multiple hosts per bus. It is designed to support Plug-and-play and hot swapping. Its six-wire cable is more flexible than most Parallel SCSI cables and can supply up to 45 watts of power per port at up to 30 volts, allowing moderate-consumption devices to operate without a separate power supply. As noted earlier, the Sony-branded i.Link usually omits the power wiring of the cables and uses a 4-pin connector. Power is provided by a separate power adapter for each device.

Operating system support

Full support for IEEE 1394a and 1394b is available for FreeBSD, Linux, Haiku OS and Apple Mac OS 8.6 through Mac OS X operating systems. Microsoft Windows XP supports 1394a and 1394b, but as of Service Pack 2, every FireWire device will only run at S100 (100 Mbit/second) speed. A hotfix download is available from Microsoft that, with a simple registry modification, enables devices that run at S400 or S800 speeds to operate at their rated speed. Some FireWire hardware manufacturers also provide custom device drivers that replace the Microsoft OHCI host adapter driver stack, enabling S800-capable devices to run at full 800Mb/s transfer rates. Microsoft Windows Vista currently supports only 1394a, with 1394b support coming later in a service pack.

Cable system support

Cable TV providers (in the US, with digital systems) must, upon request of a customer, provide a high-definition capable cable box with a functional FireWire interface. This applies only to customers leasing high-definition capable cable boxes from said cable provider after April 1, 2004. The interface can be used to display or record Cable TV, including HDTV programming.

FireWire 400 can transfer data between devices at 100, 200, or 400 Mbit/s data rates (the actual transfer rates are 98.304, 196.608, and 393.216 Mbit/s, i.e. 12.288, 24.576 and 49.152 MBytes per second respectively). These different transfer modes are commonly referred to as S100, S200, and S400.
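The S100/S200/S400 figures quoted above all derive from one base rate, which makes them easy to verify: each mode is a power-of-two multiple of 98.304 Mbit/s, and the byte rate is simply the bit rate divided by 8.

```python
# Checking the FireWire 400 transfer-mode numbers quoted above.
BASE_MBIT = 98.304  # actual S100 signaling rate in Mbit/s

rates = {name: BASE_MBIT * mult
         for name, mult in [("S100", 1), ("S200", 2), ("S400", 4)]}

for name, mbit in rates.items():
    print(f"{name}: {mbit} Mbit/s = {mbit / 8} MB/s")
# S400 works out to 393.216 Mbit/s, i.e. 49.152 MB/s, matching the text.
```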

Cable length is limited to 4.5 meters (about 15 ft), although up to 16 cables can be daisy chained using active repeaters, external hubs, or internal hubs often present in FireWire equipment. The S400 standard limits any configuration's maximum cable length to 72 meters. The 6-pin connector is commonly found on desktop computers, and can supply the connected device with power. A 4-pin version is used on many laptops (although some use the 6-pin powered connector, particularly those made by Apple) and small FireWire devices and does not have any power connectors, although it is fully compatible with 6-pin interfaces.

Although high-speed USB 2.0 runs at a higher signaling rate (480 Mbit/s), typical PC hosts rarely exceed sustained transfers of 35 MB/s, with 30 MB/s being more typical. (The theoretical limit for a USB 2.0 high-speed bulk transfer is 53.125 MB/s.) This is largely due to USB's reliance on the host processor to manage the low-level USB protocol, whereas FireWire delegates the same tasks to the interface hardware. For example, the FireWire host interface supports memory-mapped devices, which allows high-level protocols to run without loading the host CPU with interrupts and buffer-copy operations.
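The theoretical bulk-transfer limit can be derived from the USB 2.0 microframe structure: high-speed USB carries 8000 microframes per second, and a bulk endpoint can move at most 13 packets of 512 data bytes per microframe. A back-of-the-envelope check (the exact figure depends on how protocol overhead is counted, so this lands near, rather than exactly at, the quoted 53.125 MB/s):

```python
# High-speed USB 2.0: 8000 microframes per second (125 us each);
# a bulk endpoint carries at most 13 packets of 512 data bytes per microframe.
MICROFRAMES_PER_SEC = 8000
PACKETS_PER_MICROFRAME = 13
BYTES_PER_PACKET = 512

payload_mb_per_sec = (MICROFRAMES_PER_SEC * PACKETS_PER_MICROFRAME
                      * BYTES_PER_PACKET) / 1e6
print(f"{payload_mb_per_sec:.2f} MB/s")  # roughly 53 MB/s of bulk payload
```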

Motherboard Onboard (Integrated) Devices

On some motherboards, circuitry is included to perform some of the common functions normally found on expansion cards. This definitely has its pros and cons, depending on what is included, how it is done, and what your needs are. In general, incorporating built-in circuitry has the advantage of lower cost, and the disadvantage of lower choice and upgradability. In addition, the cost savings of integrated components only translate to real money if you use them! If you end up adding a SoundBlaster and your own video card because you don't like the integrated components, you aren't saving any money at all. And you can't sell the integrated devices to someone else, obviously.

Video

This is one of the most common integrated controllers, and unfortunately, often the worst. The video card is one of the more important performance-related components in a PC, and on older boards with no video card slot it was impossible to install a card to meet your particular needs. Fortunately, most of today's motherboards provide a slot for an add-on video card whether integrated video is included or not.

Older motherboards with integrated video were notoriously difficult to upgrade. Many that had an option (via jumper or BIOS setting) to disable the on-board video actually still had problems with an add-in upgrade video card, and many came with no ability to disable the built-in video at all! On a lower-end machine however, integrated video can save some cost over buying a separate video card, as well as an expansion slot.

Onboard video is sometimes a good option for systems where 3D graphics performance is not a major concern; it provides a platform suitable for multimedia and office use. If you do take advantage of onboard video, expect it to use about 32 to 128 MB of system RAM for storing video data.
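To put that 32-128 MB of borrowed system RAM in perspective: a single uncompressed 32-bit frame takes width × height × 4 bytes, and the rest of the reserved memory goes to back buffers, textures and other video data. A rough calculation (illustrative; the helper name is ours):

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    """System RAM needed for one uncompressed 32-bit frame."""
    return width * height * bytes_per_pixel / (1024 * 1024)

# Common desktop resolutions of the era
for w, h in [(1024, 768), (1280, 1024), (1600, 1200)]:
    print(f"{w}x{h}: {framebuffer_mb(w, h):.1f} MB per frame")
```

Even at 1600x1200 a single frame is well under 8 MB, so most of the reserved memory serves buffering and textures rather than the visible frame itself.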

The newest generation of onboard video from Intel and nVidia is finally reaching a good level of performance and features. While they aren't designed for gaming, both of these solutions can handle video acceleration (DVD, MPEG2, etc.) and should provide basic support for Windows Vista's enhanced graphics interface.

If you are looking to get into a top-performing, non-gaming system on a budget, consider using the onboard graphics. Then you can use the cash you would be spending on a video card for more RAM or a faster CPU.

Disabling Onboard/Integrated Video and Installing a Video Card

Disabling the onboard adapter is sometimes necessary to prevent conflicts with AGP, PCI and PCI-E video cards and to ensure that the accelerator is detected correctly by the motherboard BIOS.

1. Read the documentation that came with your motherboard first. The switch for disabling the onboard video could be a physical jumper or a BIOS setting, so be sure you know the correct jumper (often labeled on the PCB) or how to change the setting in your system's BIOS.

2. Install the new video card, connecting any Molex power cables to your computer's power supply if the card requires them, and install the most recent drivers downloaded from your card manufacturer's website.

3. Plug the monitor into the port on the motherboard first; otherwise you may not get a picture at all.

4. According to your documentation, find out if you need to open (or close) a jumper on the motherboard to disable the video, or if it is a BIOS Setting.

5. With the system powered off and unplugged from the wall, change the jumper on the motherboard (opening or closing it as needed), or go into the BIOS and change the Primary Video setting to PCI or AGP depending on the card.

6. Often there is also a setting in BIOS on newer boards that allows the user to set either a PCI card or AGP card as the primary video device. Make sure this is set to your video card.

7. Save Settings and Exit BIOS, if you had to change settings here.

8. Power the system off entirely and unplug it from the wall. Move your monitor cable from the motherboard connector to the primary display output on the back of the video card; if the card has dual-display capability, this output is usually designated with a 0 or 1.

9. Turn on your system. If you receive a video signal, congratulations, you're done!

10. If you do not receive a video signal, listen carefully for the normal beep that indicates a successful POST (Power-On Self-Test). If there is one beep, turn the system off and plug the cable back into the motherboard display port. If you now receive a video signal, check your BIOS or jumper settings once again. If you still have difficulties, or you do not receive any signal on either port, read your documentation on how to reset the BIOS or change the jumper back to its original position.

• Always check your documentation (read the manual) for detailed instructions on how to do this.

• Documentation on this process and related troubleshooting can often be found on the manufacturers' website for your motherboard.

• Newer motherboards will often automatically detect and use a video card for the primary display adapter, instead of using the onboard video.

LAN

Integrated network adapters are relatively common on motherboards today. In some ways, this is one of the least offensive of the integrated controllers discussed in this section, provided it emulates a common standard and comes with good drivers. Still, with generic adapters selling for $20 or less, paying a premium for a motherboard with this support doesn't make a lot of sense. In addition, if you want to move up to 1000 Mb/s Gigabit Ethernet, you will probably want a PCI-based network card, and you will be back to the "disable the integrated circuitry, put the new card in, and hope it works" routine that often makes integrated video a nightmare.

Most newer PCs now come with integrated 10/100 Ethernet functionality, and this can be very useful in a network environment (as long as you aren't paying too much extra for it.)

Notes:

The onboard NIC usually has a direct link to the Southbridge, thus avoiding the slowdown of going through the PCI bus. We might get better transfers via the onboard NIC than through a separate NIC card in a PCI slot.

With the transfer limitations of today's mechanical hard drives, you will not see a full 1 Gb/s (1000 Mb/s, or 125 MB/s) transfer rate between PCs. To get anything close, you'd need to set up a RAM drive.

As for PCI slots, a 33 MHz, 32-bit PCI bus has a transfer rate of 133 MB/s. A 100 Mb/s network card will use at most 12.5 MB/s of that (100 Mb/s ÷ 8 = 12.5 MB/s). A Gigabit (1000 Mb/s) card running at full speed will nearly saturate the bus (1000 Mb/s ÷ 8 = 125 MB/s).
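The arithmetic above (divide the link rate in Mb/s by 8 to get MB/s, then compare against the bus) can be sketched as:

```python
PCI_BUS_MB_PER_SEC = 133  # 33 MHz x 32-bit PCI bus ~ 133 MB/s

def link_mb_per_sec(mbit_per_sec):
    """Convert a network link rate from Mb/s to MB/s (8 bits per byte)."""
    return mbit_per_sec / 8

for name, mbit in [("100 Mb/s Fast Ethernet", 100),
                   ("1000 Mb/s Gigabit Ethernet", 1000)]:
    mb = link_mb_per_sec(mbit)
    print(f"{name}: {mb:.1f} MB/s ({mb / PCI_BUS_MB_PER_SEC:.0%} of the PCI bus)")
```

Fast Ethernet uses under a tenth of the bus, while Gigabit at full speed consumes about 94% of it, which is why the onboard NIC's direct Southbridge link matters.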

Disabling Onboard/Integrated LAN and installing a NIC

Typically, this is done by disabling the on-board LAN within the CMOS setup (also referred to as the BIOS). The CMOS is entered by pressing the DEL or F2 key (or some other key combination) as soon as you turn on your computer.

Once you are inside the CMOS menu, you may need to go through each sub-menu looking for an option that pertains to your on-board LAN, and disable it. Once that's finished, exit the CMOS (saving changes), wait for the computer to reboot, and then power down. Insert your new NIC (network interface card) into a PCI slot and power on the computer; boot into Windows and the new NIC should be detected. The old one will no longer appear under Device Manager.

Audio

Integrated sound support is fairly common, particularly on retail brand-name machines. Even the lesser clones have figured out by now how to do basic SoundBlaster emulation. However, bear in mind that sound cards use many system resources and can cause many hard-to-diagnose problems when they are of low quality. Most of the integrated devices provide rather basic functionality not approaching the high-end capabilities of the real add-in cards.

Generally speaking, most dedicated sound cards will perform better (both in CPU usage and sound quality) because more hardware is devoted to the task at hand (i.e., producing sound). By comparison, a dedicated sound card has many chips and transistors to create sound, whereas many integrated on-board solutions have only a single chip, few transistors, and often rely on software emulation to produce sound. This loads the CPU and can also degrade the sound experience (and even cause the sound to "stutter").

Why should I get a soundcard when my Motherboard has onboard sound?

Onboard sound chips have undoubtedly come a long way since the first implementations, and would be fine for a basic office or home PC, but they still do not stand up to a quality add-in PCI card, for several reasons.

Onboard sound chips need CPU cycles to process sound, which robs your system of performance. If your sound chip supports EAX (and a lot of them do these days), the issue is compounded, typically degrading your performance somewhere in the area of 5-15 FPS in games; this is the gain users commonly report after installing a PCI sound solution. This is not the worst issue, though.

Of course, a user who has listened to onboard sound chips for a while gets used to their less-than-stellar quality. When such a user installs a PCI solution, they are blown away by the sound quality, and report hearing things in games and music that previously were not noticeable. Once accustomed to a PCI sound card, they can hear the deficiencies inherent in onboard chips and will never use one again. Onboard chips tend to sound cold and sterile, and DSP effects are usually overwhelming and washed out; these chips have few of the qualities associated with good sound, so a lot of times the user will not know what they are missing until they hear it. You will need a set of decent speakers to be able to hear the difference, however; $10 speakers won't cut it.

Over the years, various companies have released onboard sound chips that try to get around these issues, for example by connecting to the bus in different ways so they use fewer CPU cycles. In terms of sound quality they are still lacking. Modern games are production masterpieces, with as much attention paid to audio as to video; without capable hardware you are only getting half of what the game developers wanted you to experience. Sound is definitely an important part of all modern games. Indeed, most new games support EAX, surround sound and directional audio, all used to make your game more enjoyable. Unfortunately, all are limited by the sound hardware you have.

Disabling Onboard/Integrated Audio and installing an Audio Card

Typically, this is done by disabling the on-board sound card within the CMOS setup (also referred to as the BIOS). The CMOS is entered by pressing the DEL or F2 key (or some other key combination) as soon as you turn on your computer.

Once you are inside the CMOS menu, you may need to go through each sub-menu looking for an option that pertains to your on-board sound, and disable it. Once that's finished, exit the CMOS (saving changes), wait for the computer to reboot, and then power down. Insert your new sound card into a PCI slot and power on the computer; boot into Windows and the new sound card should be detected. The old one will no longer appear under Device Manager.
