


Light-emitting diode

From Wikipedia, the free encyclopedia



Light-emitting diode

Red, green and blue LEDs of the 5 mm type

Type Passive, optoelectronic

Working principle Electroluminescence

Invented Nick Holonyak Jr. (1962)

Electronic symbol


Pin configuration Anode and Cathode


A light-emitting diode (LED) (pronounced /ˌɛliːˈdiː/[1], or just /lɛd/) is an electronic light source. The LED was discovered in the early 20th century, and introduced as a practical electronic component in 1962. All early devices emitted low-intensity red light, but modern LEDs are available across the visible, ultraviolet and infrared wavelengths, with very high brightness.

LEDs are based on the semiconductor diode. When the diode is forward biased (switched on), electrons are able to recombine with holes and energy is released in the form of light. This effect is called electroluminescence and the color of the light is determined by the energy gap of the semiconductor. The LED is usually small in area (less than 1 mm2) with integrated optical components to shape its radiation pattern and assist in reflection.[2]

LEDs present many advantages over traditional light sources including lower energy consumption, longer lifetime, improved robustness, smaller size and faster switching. However, they are relatively expensive and require more precise current and heat management than traditional light sources.

Applications of LEDs are diverse. They are used as low-energy replacements for traditional light sources in well-established applications such as indicators and automotive lighting. The compact size of LEDs has allowed new text and video displays and sensors to be developed, while their high switching rates are useful in communications technology.

History

Discoveries and early devices

Oleg Losev created one of the first LEDs in the mid 1920s

Electroluminescence was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker detector.[3] Russian Oleg Vladimirovich Losev independently created the first LED in the mid 1920s; his research was distributed in Russian, German and British scientific journals,[4][5] but no practical use was made of the discovery for several decades. Rubin Braunstein of the Radio Corporation of America reported on infrared emission from gallium arsenide (GaAs) and other semiconductor alloys in 1955.[6] Braunstein observed infrared emission generated by simple diode structures using gallium antimonide (GaSb), GaAs, indium phosphide (InP), and silicon-germanium (SiGe) alloys at room temperature and at 77 kelvin.

In 1961, experimenters Bob Biard and Gary Pittman, working at Texas Instruments,[7] found that GaAs emitted infrared radiation when electric current was applied, and they received the patent for the infrared LED.

The first practical visible-spectrum (red) LED was developed in 1962 by Nick Holonyak Jr., while working at General Electric Company.[8] Holonyak is seen as the "father of the light-emitting diode".[9] M. George Craford, a former graduate student of Holonyak, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten in 1972.[10] In 1976, T.P. Pearsall created the first high-brightness, high efficiency LEDs for optical fiber telecommunications by inventing new semiconductor materials specifically adapted to optical fiber transmission wavelengths.[11]

Up to 1968 visible and infrared LEDs were extremely costly, on the order of US $200 per unit, and so had little practical application.[12] The Monsanto Corporation was the first organization to mass-produce visible LEDs, using gallium arsenide phosphide in 1968 to produce red LEDs suitable for indicators.[12] Hewlett Packard (HP) introduced LEDs in 1968, initially using GaAsP supplied by Monsanto. The technology proved to have major applications for alphanumeric displays and was integrated into HP's early handheld calculators.

Practical use


Some police vehicle lightbars incorporate LEDs.

The first commercial LEDs were commonly used as replacements for incandescent indicators, and in seven-segment displays,[13] first in expensive equipment such as laboratory and electronics test equipment, then later in such appliances as TVs, radios, telephones, calculators, and even watches (see list of signal applications). These red LEDs were bright enough only for use as indicators, as the light output was not enough to illuminate an area. Later, other colors became widely available and also appeared in appliances and equipment. As LED materials technology advanced, light output increased while efficiency and reliability remained at acceptable levels. The invention and development of the high-power white-light LED led to its use for illumination[14][15] (see list of illumination applications). Most LEDs were made in the very common 5 mm T1¾ and 3 mm T1 packages, but with increasing power output it has become necessary to shed excess heat to maintain reliability,[16] so more complex packages have been adapted for efficient heat dissipation. Packages for state-of-the-art high-power LEDs bear little resemblance to early LEDs.

Continuing development

Illustration of Haitz's law. Light output per LED as a function of time; note the logarithmic scale on the vertical axis.

The first high-brightness blue LED was demonstrated by Shuji Nakamura of Nichia Corporation and was based on InGaN, building on critical developments in GaN nucleation on sapphire substrates and the demonstration of p-type doping of GaN, which were developed by Isamu Akasaki and H. Amano in Nagoya. In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated the efficiency and reliability of high-brightness LEDs and demonstrated a very impressive result by using a transparent contact made of indium tin oxide (ITO) on an (AlGaInP/GaAs) LED. The existence of blue LEDs and high-efficiency LEDs quickly led to the development of the first white LED, which employed a Y3Al5O12:Ce, or "YAG", phosphor coating to mix yellow (down-converted) light with blue to produce light that appears white. Nakamura was awarded the 2006 Millennium Technology Prize for his invention.[17]

The development of LED technology has caused their efficiency and light output to increase exponentially, with a doubling occurring about every 36 months since the 1960s, in a way similar to Moore's law. The advances are generally attributed to the parallel development of other semiconductor technologies and advances in optics and materials science. This trend is normally called Haitz's law, after Dr. Roland Haitz.[18]
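
The doubling trend described above can be sketched numerically (an illustration of the stated 36-month doubling period, not a figure from the article): output grows by a factor of 2^(months/36) over a given interval.

```python
# Illustrative calculation of exponential growth with a fixed doubling period,
# as in Haitz's law: light output doubles roughly every 36 months.
def growth_factor(months: float, doubling_months: float = 36.0) -> float:
    """Multiplicative increase in output after the given number of months."""
    return 2 ** (months / doubling_months)

print(f"{growth_factor(36):.1f}x after 3 years")    # one doubling period -> 2x
print(f"{growth_factor(120):.1f}x after 10 years")  # roughly a 10x increase
```

A 36-month doubling thus implies about an order-of-magnitude improvement per decade, consistent with the exponential trend the article describes.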

In February 2008, Bilkent University in Turkey reported a luminous efficacy of 300 lumens per watt of emitted visible light (not per electrical watt), with warm light, by using nanocrystals.[19]

Technology

Parts of an LED

The inner workings of an LED

I-V diagram for a diode. An LED will begin to emit light when the on-voltage is exceeded; typical on-voltages are 2–3 V.

Physics

Like a normal diode, the LED consists of a chip of semiconducting material impregnated, or doped, with impurities to create a p-n junction. As in other diodes, current flows easily from the p-side, or anode, to the n-side, or cathode, but not in the reverse direction. Charge-carriers—electrons and holes—flow into the junction from electrodes with different voltages. When an electron meets a hole, it falls into a lower energy level, and releases energy in the form of a photon.

The wavelength of the light emitted, and therefore its color, depends on the band gap energy of the materials forming the p-n junction. In silicon or germanium diodes, the electrons and holes recombine by a non-radiative transition which produces no optical emission, because these are indirect band gap materials. The materials used for the LED have a direct band gap with energies corresponding to near-infrared, visible or near-ultraviolet light.
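
The relation between band gap energy and emission color described above can be sketched with the photon-energy formula λ = hc/E. The band gap values below are approximate room-temperature figures chosen for illustration; they are not taken from this article.

```python
# Sketch: convert a semiconductor band gap (eV) to a peak emission
# wavelength (nm) using lambda = h*c / E_gap.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def peak_wavelength_nm(band_gap_ev: float) -> float:
    """Approximate peak emission wavelength for a direct-gap material."""
    return H_C_EV_NM / band_gap_ev

# Approximate band gaps, for illustration only.
materials = {
    "GaAs (infrared)": 1.42,
    "AlGaInP (red)":   1.90,
    "InGaN (blue)":    2.70,
    "GaN (near-UV)":   3.40,
}
for name, eg in materials.items():
    print(f"{name}: ~{peak_wavelength_nm(eg):.0f} nm")
```

Larger band gaps give shorter wavelengths, which is why the wide-gap nitrides discussed later in the article are needed for blue and ultraviolet emission.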

LED development began with infrared and red devices made with gallium arsenide. Advances in materials science have made possible the production of devices with ever-shorter wavelengths, producing light in a variety of colors.

LEDs are usually built on an n-type substrate, with an electrode attached to the p-type layer deposited on its surface. P-type substrates, while less common, occur as well. Many commercial LEDs, especially GaN/InGaN, also use a sapphire substrate.

Most materials used for LED production have very high refractive indices. This means that much light will be reflected back into the material at the material/air interface. Light extraction is therefore an important aspect of LED production, subject to much research and development.

Efficiency and operational parameters

Typical indicator LEDs are designed to operate with no more than 30–60 milliwatts [mW] of electrical power. Around 1999, Philips Lumileds introduced power LEDs capable of continuous use at one watt [W]. These LEDs used much larger semiconductor die sizes to handle the large power inputs. Also, the semiconductor dies were mounted onto metal slugs to allow for heat removal from the LED die.

One of the key advantages of LED-based lighting is its high efficiency, as measured by its light output per unit power input. White LEDs quickly matched and overtook the efficiency of standard incandescent lighting systems. In 2002, Lumileds made five-watt LEDs available with a luminous efficiency of 18–22 lumens per watt [lm/W]. For comparison, a conventional 60–100 W incandescent lightbulb produces around 15 lm/W, and standard fluorescent lights produce up to 100 lm/W. A recurring problem is that efficiency falls dramatically with increased current. This effect, known as droop, effectively limits the light output of a given LED, since increasing the current raises heating more than light output.

In September 2003, a new type of blue LED was demonstrated by the company Cree, Inc. to provide 24 mW at 20 milliamperes [mA]. This produced a commercially packaged white light giving 65 lm/W at 20 mA, becoming the brightest white LED commercially available at the time, and more than four times as efficient as standard incandescents. In 2006 they demonstrated a prototype with a record white LED luminous efficiency of 131 lm/W at 20 mA. Also, Seoul Semiconductor has plans for 135 lm/W by 2007 and 145 lm/W by 2008, which would be approaching an order of magnitude improvement over standard incandescents and better even than standard fluorescents.[20] Nichia Corporation has developed a white light LED with luminous efficiency of 150 lm/W at a forward current of 20 mA.[21]

High-power (≥ 1 W) LEDs are necessary for practical general lighting applications. Typical operating currents for these devices begin at 350 mA. The highest-efficiency high-power white LED is claimed[22] by Philips Lumileds Lighting Co., with a luminous efficiency of 115 lm/W (350 mA).

Cree issued a press release on November 19, 2008 about a laboratory prototype LED achieving 161 lumens/watt at room temperature. The total output was 173 lumens, and the correlated color temperature was reported to be 4689 K.[23][unreliable source?]

Lifetime and failure

Main article: List of LED failure modes

Solid-state devices such as LEDs are subject to very limited wear and tear if operated at low currents and at low temperatures. Many of the LEDs produced in the 1970s and 1980s are still in service today. Typical quoted lifetimes are 25,000 to 100,000 hours, but heat and current settings can extend or shorten this time significantly.[24]

The most common symptom of LED (and diode laser) failure is the gradual lowering of light output and loss of efficiency. Sudden failures, although rare, can occur as well. Early red LEDs were notable for their short lifetime. With the development of high-power LEDs, the devices are subjected to higher junction temperatures and higher current densities than traditional devices. This causes stress on the material and may cause early light output degradation. To quantify lifetime in a standardized manner, it has been suggested to use the terms L75 and L50, the times it takes a given LED to reach 75% and 50% of its initial light output, respectively.[25]
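
The L75/L50 idea can be sketched under a simple assumed exponential lumen-depreciation model (real degradation need not be exponential; the decay constant below is hypothetical, chosen so that output halves at 100,000 hours, the upper end of the lifetimes quoted above).

```python
import math

# Assumed model: output(t) = exp(-alpha * t), normalized to 1 at t = 0.
# Hypothetical decay constant: 50% output remaining at 100,000 hours.
alpha = math.log(2) / 100_000  # per hour, illustrative only

def hours_to_fraction(fraction: float) -> float:
    """Hours until light output falls to the given fraction of its initial value."""
    return -math.log(fraction) / alpha

print(f"L75: {hours_to_fraction(0.75):,.0f} h")
print(f"L50: {hours_to_fraction(0.50):,.0f} h")
```

Under these assumptions the LED reaches L75 well before L50, showing why the two figures together characterize the shape of the decay, not just its endpoint.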

Colors and materials

Conventional LEDs are made from a variety of inorganic semiconductor materials; the choice of material determines the emission color, wavelength range, and forward voltage drop.

Ultraviolet and blue LEDs

Blue LEDs.

Blue LEDs are based on the wide band gap semiconductors GaN (gallium nitride) and InGaN (indium gallium nitride). They can be added to existing red and green LEDs to produce the impression of white light, though white LEDs today rarely use this principle.

The first blue LEDs were made in 1971 by Jacques Pankove (inventor of the gallium nitride LED) at RCA Laboratories.[27] However, these devices had too little light output to be of much practical use. In the late 1980s, key breakthroughs in GaN epitaxial growth and p-type doping by Isamu Akasaki and Hiroshi Amano (Nagoya, Japan)[28] ushered in the modern era of GaN-based optoelectronic devices. Building upon this foundation, in 1993 high brightness blue LEDs were demonstrated through the work of Shuji Nakamura at Nichia Corporation.[29]

By the late 1990s, blue LEDs had become widely available. They have an active region consisting of one or more InGaN quantum wells sandwiched between thicker layers of GaN, called cladding layers. By varying the relative InN-GaN fraction in the InGaN quantum wells, the light emission can be varied from violet to amber. Aluminium gallium nitride (AlGaN) of varying AlN fraction can be used to manufacture the cladding and quantum well layers for ultraviolet LEDs, but these devices have not yet reached the level of efficiency and technological maturity of the InGaN-GaN blue/green devices. If the active quantum well layers are GaN, as opposed to alloyed InGaN or AlGaN, the device will emit near-ultraviolet light with wavelengths around 350–370 nm. Green LEDs manufactured from the InGaN-GaN system are far more efficient and brighter than green LEDs produced with non-nitride material systems.

With nitrides containing aluminium, most often AlGaN and AlGaInN, even shorter wavelengths are achievable. Ultraviolet LEDs in a range of wavelengths are becoming available on the market. Near-UV emitters at wavelengths around 375–395 nm are already cheap and often encountered, for example, as black light lamp replacements for inspection of anti-counterfeiting UV watermarks in some documents and paper currencies. Shorter wavelength diodes, while substantially more expensive, are commercially available for wavelengths down to 247 nm.[30] As the photosensitivity of microorganisms approximately matches the absorption spectrum of DNA, with a peak at about 260 nm, UV LEDs emitting at 250–270 nm are to be expected in prospective disinfection and sterilization devices. Recent research has shown that commercially available UVA LEDs (365 nm) are already effective disinfection and sterilization devices.[31]

Wavelengths down to 210 nm have been obtained in laboratories using aluminium nitride.

While not an LED as such, an ordinary NPN bipolar transistor will emit violet light if its emitter-base junction is subjected to non-destructive reverse breakdown. This is easy to demonstrate by filing the top off a metal-can transistor (BC107, 2N2222 or similar) and biasing it well above emitter-base breakdown (≥ 20 V) via a current-limiting resistor.

White light

There are two ways of producing high intensity white-light using LEDs. One is to use individual LEDs that emit three primary colors[32] – red, green, and blue, and then mix all the colors to produce white light. The other is to use a phosphor material to convert monochromatic light from a blue or UV LED to broad-spectrum white light, much in the same way a fluorescent light bulb works.

RGB systems

Combined spectral curves for blue, yellow-green, and high brightness red solid-state semiconductor LEDs. FWHM spectral bandwidth is approximately 24–27 nm for all three colors.

White light can be produced by mixing differently colored light; the most common method uses red, green and blue (RGB). Hence the method is called multi-colored white LEDs (sometimes referred to as RGB LEDs). Because this mechanism involves sophisticated electro-optical design to control the blending and diffusion of different colors, the approach has rarely been used to mass-produce white LEDs in the industry. Nevertheless, this method is particularly interesting to many researchers and scientists because of the flexibility of mixing different colors.[33] In principle, this mechanism also has higher quantum efficiency in producing white light.

There are several types of multi-colored white LEDs: di-, tri-, and tetrachromatic white LEDs. Key factors that differ among these approaches include color stability, color rendering capability, and luminous efficiency. Often, higher efficiency means lower color rendering, presenting a trade-off between luminous efficiency and color rendering. For example, dichromatic white LEDs have the best luminous efficiency (120 lm/W) but the lowest color rendering capability. Conversely, although tetrachromatic white LEDs have excellent color rendering capability, they often have poor luminous efficiency. Trichromatic white LEDs are in between, having both good luminous efficiency (>70 lm/W) and fair color rendering capability.

What multi-color LEDs offer is not merely another way of producing white light, but a whole new technique for producing light of different colors. In principle, most perceivable colors can be produced by mixing different amounts of three primary colors, and this makes precise dynamic color control possible as well. As more effort is devoted to investigating this technique, multi-color LEDs should have a profound influence on the fundamental methods used to produce and control light color. However, before this type of LED can truly play a role on the market, several technical problems must be solved. Chief among these is that this type of LED's emission power decays exponentially with rising temperature,[34] resulting in a substantial change in color stability. Such problems are unacceptable for industrial use, so many new package designs aimed at solving them have been proposed, and their results are being reproduced by researchers and scientists.
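
The additive mixing idea above can be illustrated with a toy model. This is purely a sketch: real white-point control requires calibrated chromaticity data for each emitter, and the 8-bit scaling here is an arbitrary illustrative convention.

```python
# Toy illustration of additive RGB mixing: the perceived color is the
# component-wise combination of the three emitters' contributions,
# scaled by their drive levels (0..1).
def mix(drive_r: float, drive_g: float, drive_b: float) -> tuple:
    """8-bit additive mix of idealized red, green, and blue emitters."""
    for d in (drive_r, drive_g, drive_b):
        if not 0.0 <= d <= 1.0:
            raise ValueError("drive levels must be between 0 and 1")
    return (round(255 * drive_r), round(255 * drive_g), round(255 * drive_b))

print(mix(1.0, 1.0, 1.0))  # (255, 255, 255): all three fully on reads as white
print(mix(1.0, 1.0, 0.0))  # (255, 255, 0): red + green blends to yellow
```

Varying the three drive levels continuously is what gives RGB systems the dynamic color control the paragraph describes.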

Phosphor-based LEDs

Spectrum of a “white” LED clearly showing blue light which is directly emitted by the GaN-based LED (peak at about 465 nm) and the more broadband Stokes-shifted light emitted by the Ce3+:YAG phosphor which emits at roughly 500–700 nm.

This method involves coating an LED of one color (mostly a blue LED made of InGaN) with phosphors of different colors to produce white light; the resulting LEDs are called phosphor-based white LEDs. A fraction of the blue light undergoes the Stokes shift, being transformed from shorter wavelengths to longer. Depending on the color of the original LED, phosphors of different colors can be employed. If several phosphor layers of distinct colors are applied, the emitted spectrum is broadened, effectively increasing the color rendering index (CRI) value of a given LED.

Phosphor based LEDs have a lower efficiency than normal LEDs due to the heat loss from the Stokes shift and also other phosphor-related degradation issues. However, the phosphor method is still the most popular technique for manufacturing high intensity white LEDs. The design and production of a light source or light fixture using a monochrome emitter with phosphor conversion is simpler and cheaper than a complex RGB system, and the majority of high intensity white LEDs presently on the market are manufactured using phosphor light conversion.

The greatest barrier to high efficiency is the seemingly unavoidable Stokes energy loss. However, much effort is being spent on optimizing these devices to higher light output and higher operation temperatures. For instance, the efficiency can be increased by adapting better package design or by using a more suitable type of phosphor. Philips Lumileds' patented conformal coating process addresses the issue of varying phosphor thickness, giving the white LEDs a more homogeneous white light. With development ongoing, the efficiency of phosphor based LEDs is generally increased with every new product announcement.
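
The scale of the Stokes energy loss discussed above can be estimated from photon energies, which are proportional to 1/wavelength. The 450 nm and 560 nm values below are typical blue-pump and YAG:Ce emission figures used only for illustration.

```python
# Rough estimate of the unavoidable Stokes loss in phosphor down-conversion:
# a converted photon keeps the fraction lambda_pump / lambda_emit of its
# energy, since photon energy scales as 1/wavelength.
def stokes_loss(pump_nm: float, emit_nm: float) -> float:
    """Fraction of photon energy lost when down-converting pump_nm to emit_nm."""
    if emit_nm <= pump_nm:
        raise ValueError("down-conversion requires emit_nm > pump_nm")
    return 1 - pump_nm / emit_nm

print(f"{stokes_loss(450, 560):.1%} of each converted photon's energy becomes heat")
```

Roughly a fifth of the converted photon energy is lost as heat under these assumptions, which is why the text calls the Stokes loss the greatest barrier to high efficiency.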

Technically the phosphor based white LEDs encapsulate InGaN blue LEDs inside of a phosphor coated epoxy. A common yellow phosphor material is cerium-doped yttrium aluminium garnet (Ce3+:YAG).

White LEDs can also be made by coating near-ultraviolet (NUV) emitting LEDs with a mixture of high-efficiency europium-based red and blue emitting phosphors plus green emitting copper and aluminium doped zinc sulfide (ZnS:Cu, Al). This is a method analogous to the way fluorescent lamps work. It is less efficient than the blue LED with YAG:Ce phosphor, as the Stokes shift is larger and more energy is therefore converted to heat, but it yields light with better spectral characteristics, which render color better. Because ultraviolet LEDs have higher radiative output than blue ones, both approaches offer comparable brightness. Another concern is that UV light may leak from a malfunctioning light source and cause harm to human eyes or skin.

Other white LEDs

Another method used to produce experimental white light LEDs used no phosphors at all and was based on homoepitaxially grown zinc selenide (ZnSe) on a ZnSe substrate which simultaneously emitted blue light from its active region and yellow light from the substrate.[35]

Organic light-emitting diodes (OLEDs)

Main article: Organic light-emitting diode

If the emitting layer material of the LED is an organic compound, it is known as an organic light-emitting diode (OLED). To function as a semiconductor, the organic emitting material must have conjugated pi bonds.[36] The emitting material can be a small organic molecule in a crystalline phase, or a polymer. Polymer materials can be flexible; such LEDs are known as PLEDs or FLEDs.

Compared with regular LEDs, OLEDs are lighter, and polymer LEDs can have the added benefit of being flexible. Some possible future applications of OLEDs could be:

* Inexpensive, flexible displays

* Light sources

* Wall decorations

* Luminous cloth

OLEDs have been used to produce visual displays for portable electronic devices such as cellphones, digital cameras, and MP3 players. Larger displays have been demonstrated,[37] but their life expectancy is still far too short (1,000 hours) to be practical[citation needed].

Today, OLEDs operate at substantially lower efficiency than inorganic (crystalline) LEDs.[38]

Quantum dot LEDs (experimental)

A new technique developed by Michael Bowers, a graduate student at Vanderbilt University in Nashville, involves coating a blue LED with quantum dots that glow white in response to the blue light from the LED. This technique produces a warm, yellowish-white light similar to that produced by incandescent bulbs.[39]

Quantum dots are semiconductor nanocrystals that possess unique optical properties.[40] Their emission color can be tuned from the visible throughout the infrared spectrum. This allows quantum dot LEDs to create almost any color on the CIE diagram, providing more color options and better color rendering in white LEDs. Quantum dot LEDs are available in the same package types as traditional phosphor-based LEDs.


The main types of LEDs are miniature devices, high-power devices, and custom designs such as alphanumeric or multi-color units.

Miniature LEDs

Different sized LEDs. 8 mm, 5 mm and 3 mm, with a wooden match-stick for scale.

Main article: Miniature light-emitting diode

These are mostly single-die LEDs used as indicators, and they come in various sizes from 2 mm to 8 mm, in through-hole and surface-mount packages. They are usually simple in design, not requiring any separate cooling body.[41] Typical current ratings range from around 1 mA to above 20 mA. The small scale sets a natural upper bound on power consumption, due to the heat caused by the high current density and the need for heat sinking.

High power LEDs

See also: Solid-state lighting and LED lamp

High power LEDs from Philips Lumileds Lighting Company mounted on a 21 mm star shaped base heatsink

High power LEDs (HPLED) can be driven at hundreds of mA (vs. tens of mA for other LEDs), some with more than one ampere of current, and give out large amounts of light. Since overheating is destructive, the HPLEDs must be highly efficient to minimize excess heat; furthermore, they are often mounted on a heat sink to allow for heat dissipation. If the heat from a HPLED is not removed, the device will burn out in seconds.

A single HPLED can often replace an incandescent bulb in a flashlight, or be set in an array to form a powerful LED lamp.

LEDs have been developed by Seoul Semiconductor that can operate on AC power without the need for a DC converter. During each half-cycle, part of the LED emits light and part is dark; this is reversed during the next half-cycle. The efficiency of HPLEDs is typically 40 lm/W.[42] Some well-known HPLEDs in this category are the Lumileds Rebel LED, the Osram Opto Semiconductors Golden Dragon, and the Cree XLamp. As of November 2008, some HPLEDs manufactured by Cree, Inc. exceed 95 lm/W[43] (e.g. the XLamp MC-E LED chip emitting cool white light) and are being sold in lamps intended to replace incandescent, halogen, and even fluorescent-style lights as LEDs become more cost-competitive.

Application-specific variations

* Flashing LEDs are used as attention-seeking indicators without requiring external electronics. They resemble standard LEDs but contain an integrated multivibrator circuit that causes the LED to flash with a typical period of one second. In diffused-lens LEDs this is visible as a small black dot. Most flashing LEDs emit light of a single color, but more sophisticated devices can flash between multiple colors and even fade through a color sequence using RGB color mixing.

Old calculator LED display.

* Bi-color LEDs are actually two different LEDs in one case. They consist of two dies connected to the same two leads but in opposite directions. Current flow in one direction produces one color, and current in the opposite direction produces the other color. Alternating the two colors with sufficient frequency causes the appearance of a blended third color. For example, a red/green LED operated in this fashion will color-blend to produce a yellow appearance.

* Tri-color LEDs are two LEDs in one case, but the two LEDs are connected to separate leads so that the two LEDs can be controlled independently and lit simultaneously. A three-lead arrangement is typical with one common lead (anode or cathode).

* RGB LEDs contain red, green and blue emitters, generally using a four-wire connection with one common lead (anode or cathode).

* Alphanumeric LED displays are available in seven-segment and starburst format. Seven-segment displays handle all numbers and a limited set of letters. Starburst displays can display all letters. Seven-segment LED displays were in widespread use in the 1970s and 1980s, but increasing use of liquid crystal displays, with their lower power consumption and greater display flexibility, has reduced the popularity of numeric and alphanumeric LED displays.

Considerations for use

Power sources

Main article: LED power sources

The current/voltage characteristics of an LED are similar to those of other diodes, in that the current depends exponentially on the voltage (see Shockley diode equation). This means that a small change in voltage can cause a large change in current. If the maximum voltage rating is exceeded by a small amount, the current rating may be exceeded by a large amount, potentially damaging or destroying the LED. The typical solution is therefore to use constant-current power supplies, or to drive the LED at a voltage well below the maximum rating. Since few household power sources (batteries, mains) are constant-current sources, most LED fixtures must include a power converter.
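
A common low-cost alternative to a constant-current supply, for small indicator LEDs, is a series resistor sized as R = (V_supply - V_forward) / I_target. The example values below (5 V supply, 2.0 V red-LED forward voltage, 15 mA target current) are typical figures chosen for illustration, not taken from the article.

```python
# Sketch: sizing a current-limiting series resistor for an LED.
# The resistor absorbs the supply/forward-voltage difference, so the
# steep exponential I-V curve of the LED no longer sets the current.
def series_resistor(v_supply: float, v_forward: float, i_target: float) -> float:
    """Resistance (ohms) limiting the LED current to i_target amperes."""
    if v_supply <= v_forward:
        raise ValueError("supply voltage must exceed the LED forward voltage")
    return (v_supply - v_forward) / i_target

r = series_resistor(5.0, 2.0, 0.015)
print(f"{r:.0f} ohm")  # 200 ohm
```

This works because the resistor's linear I-V characteristic dominates the series combination; a small change in supply voltage now produces only a small change in current, unlike the bare-diode case described above.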

Electrical polarity

Main article: Electrical polarity of LEDs


As with all diodes, current flows easily from p-type to n-type material.[44] However, no current flows and no light is produced if a small voltage is applied in the reverse direction. If the reverse voltage becomes large enough to exceed the breakdown voltage, a large current flows and the LED may be damaged.

Advantages

* Efficiency: LEDs produce more light per watt than incandescent bulbs.[45]

* Color: LEDs can emit light of an intended color without the use of color filters that traditional lighting methods require. This is more efficient and can lower initial costs.

* Size: LEDs can be very small (smaller than 2 mm2[46]) and are easily populated onto printed circuit boards.

* On/Off time: LEDs light up very quickly. A typical red indicator LED will achieve full brightness in microseconds.[47] LEDs used in communications devices can have even faster response times.

* Cycling: LEDs are ideal for use in applications that are subject to frequent on-off cycling, unlike fluorescent lamps that burn out more quickly when cycled frequently, or HID lamps that require a long time before restarting.

* Dimming: LEDs can very easily be dimmed, either by pulse-width modulation or by lowering the forward current.

* Cool light: In contrast to most light sources, LEDs radiate very little heat in the form of IR that can cause damage to sensitive objects or fabrics. Wasted energy is dispersed as heat through the base of the LED.

* Slow failure: LEDs mostly fail by dimming over time, rather than the abrupt burn-out of incandescent bulbs.[48]

* Lifetime: LEDs can have a relatively long useful life. One report estimates 35,000 to 50,000 hours of useful life, though time to complete failure may be longer.[49] Fluorescent tubes typically are rated at about 10,000 to 15,000 hours, depending partly on the conditions of use, and incandescent light bulbs at 1,000–2,000 hours.[citation needed]

* Shock resistance: LEDs, being solid state components, are difficult to damage with external shock, unlike fluorescent and incandescent bulbs which are fragile.

* Focus: The solid package of the LED can be designed to focus its light. Incandescent and fluorescent sources often require an external reflector to collect light and direct it in a usable manner.

* Toxicity: LEDs do not contain mercury, unlike fluorescent lamps.
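
The dimming point in the list above can be sketched numerically: under pulse-width modulation, the average current (and, to a first approximation, the perceived brightness) scales with the duty cycle when switching far faster than the eye can follow. The 20 mA peak current is a typical indicator-LED figure used for illustration.

```python
# Sketch of PWM dimming: mean LED current is the peak current scaled
# by the fraction of each period the LED is switched on.
def average_current(peak_ma: float, duty_cycle: float) -> float:
    """Mean LED current (mA) under pulse-width modulation."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return peak_ma * duty_cycle

print(average_current(20.0, 0.25))  # 5.0 mA -> roughly quarter brightness
```

PWM exploits the fast switching noted in the "On/Off time" point: because the LED reaches full brightness in microseconds, it can be pulsed at kilohertz rates without visible flicker.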

Disadvantages

* High price: LEDs are currently more expensive, per lumen, on an initial capital-cost basis than most conventional lighting technologies. The additional expense partially stems from the relatively low lumen output and the drive circuitry and power supplies needed. However, when considering the total cost of ownership (including energy and maintenance costs), LEDs far surpass incandescent or halogen sources and begin to threaten compact fluorescent lamps[citation needed].

* Temperature dependence: LED performance largely depends on the ambient temperature of the operating environment. Over-driving the LED in high ambient temperatures may result in overheating of the LED package, eventually leading to device failure. Adequate heat-sinking is required to maintain long life. This is especially important when considering automotive, medical, and military applications where the device must operate over a large range of temperatures, and is required to have a low failure rate.

* Voltage sensitivity: LEDs must be supplied with a voltage above their threshold voltage and a current below their rating. This can involve series resistors or current-regulated power supplies.[50]

* Light quality: Most cool-white LEDs have spectra that differ significantly from a black body radiator like the sun or an incandescent light. The spike at 460 nm and dip at 500 nm can cause the color of objects to be perceived differently under cool-white LED illumination than under sunlight or incandescent sources, due to metamerism,[51] with red surfaces rendered particularly badly by typical phosphor-based cool-white LEDs. However, the color rendering properties of common fluorescent lamps are often inferior to what is now available in state-of-the-art white LEDs.

* Area light source: LEDs do not approximate a “point source” of light, but rather a Lambertian distribution, so LEDs are difficult to use in applications requiring a spherical light field. LEDs are not capable of providing divergence below a few degrees. This contrasts with lasers, which can produce beams with divergences of 0.2 degrees or less.[52]

* Blue Hazard: There is increasing concern that blue LEDs and cool-white LEDs are now capable of exceeding safe limits of the so-called blue-light hazard as defined in eye safety specifications such as ANSI/IESNA RP-27.1-05: Recommended Practice for Photobiological Safety for Lamp and Lamp Systems.[53][54]

* Blue pollution: Because cool-white LEDs (i.e., LEDs with high color temperature) emit much more blue light than conventional outdoor light sources such as high-pressure sodium lamps, the strong wavelength dependence of Rayleigh scattering means that cool-white LEDs can cause more light pollution than other light sources. It is therefore very important that cool-white LEDs are fully shielded when used outdoors. Compared to low-pressure sodium lamps, which emit at 589.3 nm, the 460 nm emission spike of cool-white and blue LEDs is scattered about 2.7 times more by the Earth's atmosphere. Cool-white LEDs should not be used for outdoor lighting near astronomical observatories.
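The series resistor mentioned under voltage sensitivity is sized with Ohm's law: the resistor must drop the difference between the supply voltage and the LED's forward voltage at the desired current. A sketch with assumed example values (a red LED with a forward voltage around 1.8 V at 20 mA, from a 5 V supply):

```python
def series_resistor_ohms(v_supply, v_forward, i_amps):
    """R = (Vs - Vf) / I: the resistor absorbs the voltage the LED does not drop."""
    if v_supply <= v_forward:
        raise ValueError("supply must exceed the LED forward voltage")
    return (v_supply - v_forward) / i_amps

# Assumed values: red LED, Vf ~1.8 V, 20 mA, 5 V supply.
print(round(series_resistor_ohms(5.0, 1.8, 0.020)))  # 160
```

In practice the next higher standard resistor value would be chosen, trading a little brightness for margin against the LED's current rating.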
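The factor of about 2.7 quoted in the blue-pollution point follows directly from Rayleigh scattering's inverse-fourth-power dependence on wavelength:

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so the relative
# scattering of the 460 nm LED emission spike versus the 589.3 nm
# low-pressure-sodium line is (589.3 / 460)^4.
blue_nm = 460.0
sodium_nm = 589.3
ratio = (sodium_nm / blue_nm) ** 4
print(round(ratio, 1))  # 2.7
```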

[edit] Applications

Applications of LEDs are diverse, but fall into three major categories: visual signals, where the light travels more or less directly from the LED to the human eye to convey a message or meaning; illumination, where LED light is reflected from objects to give a visual response of those objects; and light generation for measuring and interacting with processes that do not involve the human visual system.[55]

[edit] Indicators and signs

LED destination displays on buses, one with a colored route number.

Traffic light using LED

The low energy consumption, low maintenance and small size of modern LEDs have led to applications as status indicators and displays on a variety of equipment and installations. Large-area LED displays are used as stadium displays and as dynamic decorative displays. Thin, lightweight message displays are used at airports and railway stations, and as destination displays for trains, buses, trams, and ferries.

The single-color light is well suited for traffic lights and signals, exit signs, emergency vehicle lighting, ships' lanterns and LED-based Christmas lights. Red or yellow LEDs are used in indicator and alphanumeric displays in environments where night vision must be retained: aircraft cockpits, submarine and ship bridges, astronomy observatories, and in the field, e.g. night-time animal watching and military field use.

Because of their long life and fast switching times, LEDs have been used for automotive high-mounted brake lights and truck and bus brake lights and turn signals for some time, and many high-end vehicles are now starting to use LEDs for their entire rear light clusters. LEDs also have styling advantages because they can form much thinner lights than incandescent lamps with parabolic reflectors. The significant improvement in the time taken to light up (perhaps 0.5 s faster than an incandescent bulb) improves safety by giving drivers more time to react; it has been reported that at normal highway speeds this amounts to about one car length of additional reaction distance for the car behind. White LED headlamps are beginning to make an appearance.

Because low-output LEDs are relatively cheap, they are also used in many temporary applications such as glowsticks, throwies, and Lumalive, a photonic textile. Artists have also used LEDs for light art and, more specifically, LED art.

[edit] Lighting

Dropped ceiling with LED lamps

Flashlights and lanterns that utilize white LEDs are becoming increasingly popular because of their durability and longer battery life.

LED daytime running lights of Audi A4

With the development of high-efficiency and high-power LEDs it has become possible to incorporate LEDs in lighting and illumination. Replacement light bulbs have been made, as well as dedicated fixtures and LED lamps. LEDs are used as street lights and in other architectural lighting where color changing is desired. Their mechanical robustness and long lifetime are exploited in automotive lighting on cars, motorcycles and bicycles.

LEDs are also suitable as backlights for LCD televisions and lightweight laptop displays, and as a light source for DLP projectors. RGB LEDs increase the color gamut by as much as 45%.

The lack of IR/heat radiation makes LEDs ideal for stage lights, where banks of RGB LEDs can easily change color and reduce the heat generated compared with traditional stage lighting, and for medical lighting, where IR radiation can be harmful.

Since LEDs are small, durable and require little power they are used in hand held devices such as flashlights. LED strobe lights or camera flashes operate at a safe, low voltage, as opposed to the 250+ volts commonly found in xenon flashlamp-based lighting. This is particularly applicable to cameras on mobile phones, where space is at a premium and bulky voltage-increasing circuitry is undesirable. LEDs are used for infrared illumination in night vision applications including security cameras. A ring of LEDs around a video camera, aimed forward into a retroreflective background, allows chroma keying in video productions.

[edit] Smart lighting

Light can be used to transmit broadband data, which is already implemented in IrDA standards using infrared LEDs. Because LEDs can cycle on and off millions of times per second, they can, in effect, become wireless routers for data transport.[56] Lasers can also be modulated in this manner.
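The idea of LEDs as data transmitters can be sketched with the simplest modulation scheme, on-off keying, where each bit maps directly to a light state. This is only an illustration; real systems such as IrDA use more elaborate modulation, framing and error handling:

```python
def encode(bits):
    """Transmitter side: map each bit to an LED state (1 -> on, 0 -> off)."""
    return ["on" if b else "off" for b in bits]

def decode(states):
    """Receiver side: recover bits from the observed light states."""
    return [1 if s == "on" else 0 for s in states]

message = [1, 0, 1, 1, 0]
assert decode(encode(message)) == message  # the round trip is lossless
```

Because LEDs can switch millions of times per second, each "on"/"off" slot can be made very short, which is what gives such links their bandwidth.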

[edit] Non-visual applications


LED panel light source used in an experiment on plant growth. The findings of such experiments may be used to grow food in space on long duration missions.

* Grow lights using LEDs to increase photosynthesis in plants[57]

* Remote controls, such as for TVs and VCRs, often use infrared LEDs.

* Movement sensors, for example in optical computer mice. The Nintendo Wii's sensor bar uses infrared LEDs.

* As light sensors

* In optical fiber and Free Space Optics communications.

* In pulse oximeters for measuring oxygen saturation

* Some flatbed scanners use arrays of RGB LEDs rather than the typical cold-cathode fluorescent lamp as the light source. Having independent control of three illuminated colors allows the scanner to calibrate itself for more accurate color balance, and there is no need for warm-up. Furthermore, its sensors need only be monochromatic, since at any one point in time the page being scanned is lit by only a single color of light.

* As UV curing devices for some ink and coating applications as well as LED printers.

* Sterilization of water and other substances using UV light.[31]

* Touch sensing: Since LEDs can also be used as photodiodes, they can serve for both photo emission and detection. This can be used, for example, in a touch-sensing screen that registers reflected light from a finger or stylus.[58]

* Opto-isolators use an LED combined with a photodiode or phototransistor to provide a signal path with electrical isolation between two circuits. This is especially useful in medical equipment where the signals from a low voltage sensor circuit (usually battery powered) in contact with a living organism must be electrically isolated from any possible electrical failure in a recording or monitoring device operating at potentially dangerous voltages. An optoisolator also allows information to be transferred between circuits not sharing a common ground potential.

* LEDs have also been used as a medium quality voltage reference in electronic circuits. The forward voltage drop (e.g., about 1.7 V for a normal red LED) can be used instead of a Zener diode in low-voltage regulators. Although LED forward voltage is much more current-dependent than a good Zener, Zener diodes are not widely available below voltages of about 3 V.

[edit] Light sources for machine vision systems


Machine vision systems often require bright and homogeneous illumination, so features of interest are easier to process.

LEDs are often used for this purpose, and this field of application is likely to remain one of the major application areas until prices drop low enough to make signaling and illumination applications more widespread. Barcode scanners are the most common example of machine vision, and many inexpensive ones use red LEDs instead of lasers.

LEDs constitute a nearly ideal light source for machine vision systems for several main reasons:

* The illuminated field is usually comparatively small, and vision systems or smart cameras are quite expensive, so the cost of LEDs is usually a minor concern, compared to signaling applications.

* LED elements tend to be small and can be placed with high density over flat or even shaped substrates (PCBs, etc.), so that bright and homogeneous sources can be designed which direct light from tightly controlled directions onto inspected parts.

* LEDs often have or can be used with small, inexpensive lenses and diffusers, helping to achieve high light densities and very good lighting control and homogeneity.

* LEDs can be easily strobed (in the microsecond range and below) and synchronized; their power has also reached high enough levels that sufficient intensity can be obtained, allowing well-lit images even with very short light pulses. This is often used to obtain crisp and sharp “still” images of quickly moving parts.

* LEDs come in several different colors and wavelengths, making it easy to use the best color for each application, where a different color may provide better visibility of features of interest. Having a precisely known spectrum allows tightly matched filters to be used to separate the informative bandwidth or to reduce the disturbing effects of ambient light.

* LEDs usually operate at comparatively low working temperatures, simplifying heat management and dissipation, therefore allowing plastic lenses, filters and diffusers to be used. Waterproof units can also easily be designed, allowing for use in harsh or wet environments (food, beverage, oil industries).

* LED sources can be shaped in several main configurations (spot lights for reflective illumination; ring lights for coaxial illumination; back lights for contour illumination; linear assemblies; flat, large format panels; dome sources for diffused, omnidirectional illumination).

* Very compact designs are possible, allowing for small LED illuminators to be integrated within smart cameras and vision sensors.

Seven-segment display


A typical 7-segment LED display component, with decimal point.

A seven-segment display (abbreviation: "7-seg(ment) display"), less commonly known as a seven-segment indicator, is a form of electronic display device for displaying decimal numerals that is an alternative to the more complex dot-matrix displays. Seven-segment displays are widely used in digital clocks, electronic meters, and other electronic devices for displaying numerical information.

Contents

* 1 Concept and visual structure

* 2 Implementations

* 3 Alphabetic display

* 4 References

* 5 External links

* 6 See also

[edit] Concept and visual structure

The individual segments of a seven-segment display.

A seven-segment display, as its name indicates, is composed of seven elements. Individually on or off, they can be combined to produce simplified representations of the Hindu-Arabic numerals. Often the seven segments are given a slight oblique, or italic, slant, which aids readability.

Each of the numbers 0, 6, 7 and 9 may be represented by two or more different glyphs on seven-segment displays.

LED-based 7-segment display showing the 16 hex digits.

The seven segments are arranged as a rectangle of two vertical segments on each side with one horizontal segment on the top and bottom. Additionally, the seventh segment bisects the rectangle horizontally. There are also fourteen-segment displays and sixteen-segment displays (for full alphanumerics); however, these have mostly been replaced by dot-matrix displays.

The segments of a 7-segment display are referred to by the letters A to G, as shown to the right, where the optional DP decimal point (an "eighth segment") is used for the display of non-integer numbers.

The animation to the left cycles through the common glyphs of the ten decimal numerals and the six hexadecimal "letter digits" (A–F). It is an image sequence of an LED display, whose technology is described in the following section. Notice the variation between uppercase and lowercase letters for A–F; this is done to obtain a unique, unambiguous shape for each letter.
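The common glyphs can be written down as sets of lit segments, using the segment letters A–G described below. A sketch of one widespread convention (glyph choices vary between displays, e.g. 6 and 9 are sometimes drawn with or without a "tail"):

```python
# One common convention for the sixteen hexadecimal digits, expressed as
# the set of lit segments (a = top, b = top-right, c = bottom-right,
# d = bottom, e = bottom-left, f = top-left, g = middle).
SEGMENTS = {
    "0": set("abcdef"), "1": set("bc"),     "2": set("abdeg"),
    "3": set("abcdg"),  "4": set("bcfg"),   "5": set("acdfg"),
    "6": set("acdefg"), "7": set("abc"),    "8": set("abcdefg"),
    "9": set("abcdfg"), "A": set("abcefg"), "b": set("cdefg"),
    "C": set("adef"),   "d": set("bcdeg"),  "E": set("adefg"),
    "F": set("aefg"),
}

# '8' lights all seven segments; '1' lights only the two on the right.
print(len(SEGMENTS["8"]), len(SEGMENTS["1"]))  # 7 2
```

Note how B and D use lowercase glyphs: drawn uppercase, they would be indistinguishable from 8 and 0.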

Seven segments are, effectively, the fewest required to represent each of the ten Hindu-Arabic numerals with a distinct and recognizable glyph. Bloggers have experimented with six-segment and even five-segment displays with such novel shapes as curves, angular blocks and serifs for segments; however, these often require complicated and/or non-uniform shapes and sometimes create unrecognizable glyphs.[1]

[edit] Implementations

An early seven-segment display using incandescent lamps.

A mechanical seven-segment display for displaying automotive fuel prices.

Seven-segment displays may use liquid crystal display (LCD), arrays of light-emitting diodes (LEDs), and other light-generating or controlling techniques such as cold cathode gas discharge, vacuum fluorescent, incandescent filaments, and others. For gasoline price totems and other large signs, vane displays made up of electromagnetically flipped light-reflecting segments (or "vanes") are still commonly used. An alternative to the 7-segment display in the 1950s through the 1970s was the cold-cathode, neon-lamp-like nixie tube. Starting in 1970, RCA sold a display device known as the Numitron that used incandescent filaments arranged into a seven-segment display.[2]

In a simple LED package, each LED is typically connected with one terminal to its own pin on the outside of the package and the other LED terminal connected in common with all other LEDs in the device and brought out to a shared pin. This shared pin will then make up all of the cathodes (negative terminals) OR all of the anodes (positive terminals) of the LEDs in the device, and so the package will be either a "Common Cathode" or "Common Anode" device depending on how it is constructed. Hence a 7-segment-plus-DP package requires only nine pins to be present and connected.

Integrated displays also exist, with single or multiple digits. Some of these integrated displays incorporate their own internal decoder, though most do not – each individual LED is brought out to a connecting pin as described. Multiple-digit LED displays as used in pocket calculators and similar devices used multiplexed displays to reduce the number of IC pins required to control the display. For example, all the anodes of the A segments of each digit position would be connected together and to a driver pin, while all of the segment cathodes within each digit position would be connected together. To operate any particular segment of any digit, the controlling integrated circuit would turn on the cathode driver for the selected digit, and the anode drivers for the desired segments; then after a short blanking interval the next digit would be selected and new segments lit, in a sequential fashion. In this manner an eight-digit display with seven segments and a decimal point would require only 8 cathode drivers and 8 anode drivers, instead of sixty-four drivers and IC pins. Often in pocket calculators the digit drive lines would be used to scan the keyboard as well, providing further savings; however, pressing multiple keys at once would produce odd results on the multiplexed display.
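The pin-count arithmetic above can be checked with a small sketch: direct drive needs one driver per LED, while a multiplexed display needs one driver per digit plus one per segment (eight, counting the decimal point):

```python
def driver_count(n_digits, multiplexed=True):
    """Drivers/pins needed for an n-digit display with 7 segments + DP."""
    segments = 8  # seven segments plus the decimal point
    if multiplexed:
        return n_digits + segments  # one driver per digit + shared segment drivers
    return n_digits * segments      # every LED driven individually

print(driver_count(8, multiplexed=False))  # 64 (direct drive)
print(driver_count(8))                     # 16 (8 digit + 8 segment drivers)
```

The saving grows with digit count, which is why multiplexing was universal in calculator-class displays.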

Seven-segment displays can be found in patents as early as 1908 (in U.S. Patent 974,943, F. W. Wood invented an 8-segment display, which displayed the number 4 using a diagonal bar), but they did not achieve widespread use until the advent of LEDs in the 1970s. They are sometimes even used in unsophisticated displays like cardboard "For sale" signs, where the user either applies color to pre-printed segments, or (spray)paints color through a seven-segment digit template, to compose figures such as product prices or telephone numbers.

For many applications, dot-matrix LCDs have largely superseded LED displays, though even in LCDs 7-segment displays are very common. Unlike LEDs, the shapes of elements in an LCD panel are arbitrary since they are formed on the display by a kind of printing process. In contrast, the shapes of LED segments tend to be simple rectangles, reflecting the fact that they have to be physically moulded to shape, which makes it difficult to form more complex shapes than the segments of 7-segment displays. However, the high common recognition factor of 7-segment displays, and the comparatively high visual contrast obtained by such displays relative to dot-matrix digits, makes seven-segment multiple-digit LCD screens very common on basic calculators.

[edit] Alphabetic display

Main article: Seven-segment display character representations

In addition to the ten numerals, seven-segment displays can be used to show letters of the Latin, Cyrillic and Greek alphabets including punctuation, but only a few representations are unambiguous and intuitive at the same time: uppercase A, B, C, E, F, G, H, I, J, L, O, P, S, U, Y, Z, and lowercase a, b, c, d, g, h, i, n, o, q, r, t, u. Thus, ad hoc and corporate solutions dominate the field of alphabetics on seven-segment displays, which is usually not considered essential and is only used for basic notifications, such as internal test messages on equipment under development.

Using a restricted range of digits that resemble (upside-down) letters, seven-segment displays are commonly used by schoolchildren to form words and phrases, a technique known as "calculator spelling".
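Calculator spelling can be illustrated with the commonly cited digit-to-letter mapping; the table below is the conventional one, not taken from the source:

```python
# Each digit, viewed upside-down on a seven-segment display, resembles a
# letter. Reading the display upside-down also reverses the digit order.
UPSIDE_DOWN = {"0": "O", "1": "I", "2": "Z", "3": "E", "4": "h",
               "5": "S", "6": "g", "7": "L", "8": "B", "9": "G"}

def calculator_spell(digits):
    """Read a digit string upside-down, ignoring the decimal point."""
    return "".join(UPSIDE_DOWN[d] for d in reversed(digits) if d != ".")

print(calculator_spell("0.7734"))  # hELLO
```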

Client-server

The client-server software architecture model distinguishes client systems from server systems, which communicate over a computer network. A client-server application is a distributed system comprising both client and server software. A client software process may initiate a communication session, while the server waits for requests from any client.

Contents

* 1 Description

* 2 Comparison to Peer-to-Peer architecture

* 3 Comparison to Client-Queue-Client architecture

* 4 Advantages

* 5 Disadvantages

* 6 See also

[edit] Description

Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access and database access, are based on the client-server model. For example, a web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.
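The request/response pattern just described can be sketched with a minimal TCP echo service on localhost. The port is chosen by the OS, and the message contents are an arbitrary illustration, not a real banking protocol:

```python
import socket
import threading

def serve_once(server_sock):
    """Server role: wait for one client, read its request, send a response."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server,))
t.start()

# Client role: initiate the session and send a request.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"balance?")
    reply = client.recv(1024)
t.join()
server.close()
print(reply.decode())  # echo: balance?
```

Note the asymmetry that defines the model: the server passively waits (`listen`/`accept`) while the client actively initiates (`create_connection`).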

The client-server model has become one of the central ideas of network computing. Most business applications being written today use the client-server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, DNS, etc. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.

Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.

The most basic type of client-server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. In a two-tier architecture, the client acts as one tier and the application in combination with the server acts as the other tier.

These days, clients are most often web browsers, although that has not always been the case. Servers typically include web servers, database servers and mail servers. Online gaming is usually client-server too. In the specific case of MMORPGs, the servers are typically operated by the company selling the game; for other games one of the players acts as the host by running the game in server mode.

The interaction between client and server is often described using sequence diagrams. Sequence diagrams are standardized in the Unified Modeling Language.

When both the client- and server-software are running on the same computer, this is called a single seat setup.

Specific types of clients include web browsers, email clients, and online chat clients.

Specific types of servers include web servers, ftp servers, application servers, database servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

[edit] Comparison to Peer-to-Peer architecture

Another type of network architecture is known as peer-to-peer, because each host or instance of the program can simultaneously act as both a client and a server, and because each has equivalent responsibilities and status. Peer-to-peer architectures are often abbreviated using the acronym P2P.

Both client-server and P2P architectures are in wide usage today. More details can be found in Comparison of Centralized (Client-Server) and Decentralized (Peer-to-Peer) Networking.

[edit] Comparison to Client-Queue-Client architecture

While the classic client-server architecture requires one of the communication endpoints to act as a server, which is much harder to implement,[citation needed] client-queue-client allows all endpoints to be simple clients, while the server role is taken by external software acting as a passive queue: one software instance passes its query to the queue (e.g. a database), then another instance pulls the query from the queue, produces a response, and passes it back through the queue. This architecture allows greatly simplified software implementation. Peer-to-peer architecture was originally based on the client-queue-client concept.

[edit] Advantages

* In most cases, a client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage to this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change. This independence from change is also referred to as encapsulation.

* All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.

* Since data storage is centralized, updates to that data are far easier to administer than what would be possible under a P2P paradigm. Under a P2P architecture, data updates may need to be distributed and applied to each "peer" in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.

* Many mature client-server technologies are already available which were designed to ensure security, 'friendliness' of the user interface, and ease of use.

* The architecture works with multiple clients of differing capabilities.

* Information-systems professionals can use client-server computing to make their jobs easier.

* Reduces the total cost of ownership.

* Increases productivity, for both end users and developers.

[edit] Disadvantages

* Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become severely overloaded. Contrast that to a P2P network, where its bandwidth actually increases as more nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.

* The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients' requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a file being downloaded, for example, the remaining nodes should still have the data needed to complete the download.

Transistor–transistor logic


A Motorola 68000-based computer with various TTL chips mounted on protoboards.

Transistor–transistor logic (TTL) is a class of digital circuits built from bipolar junction transistors (BJT) and resistors. It is called transistor–transistor logic because both the logic gating function (e.g., AND) and the amplifying function are performed by transistors (contrast this with RTL and DTL).

TTL is notable for being a widespread integrated circuit (IC) family used in many applications such as computers, industrial controls, test equipment and instrumentation, consumer electronics, synthesizers, etc. The designation TTL is sometimes used to mean TTL-compatible logic levels, even when not associated directly with TTL integrated circuits, for example as a label on the inputs and outputs of electronic instruments.[1]

Contents

* 1 History

* 2 Theory

* 3 Packaging

* 4 Comparison with other logic families

* 5 Sub-types

* 6 Inverters as analog amplifiers

* 7 Applications

* 8 Captive manufacture

* 9 See also

* 10 Notes

* 11 References

* 12 External links

[edit] History

A real-time clock built of TTL chips designed about 1979.

TTL was invented in 1961 by James L. Buie of TRW, who described it as "particularly suited to the newly developing integrated circuit design technology."[2]

The first commercial integrated-circuit TTL devices were manufactured by Sylvania in 1963, called the Sylvania Universal High-Level Logic family (SUHL).[3] The Sylvania parts were used in the controls of the Phoenix missile.[4] TTL became popular with electronic systems designers after Texas Instruments introduced the 5400 series of ICs, with military temperature range, in 1964 and the later 7400 series, specified over a lower range, in 1966.

The Texas Instruments 7400 family became an industry standard. Compatible parts were made by Motorola, AMD, Fairchild, Intel, Intersil, Signetics, Mullard, Siemens, SGS-Thomson and National Semiconductor,[5][6] and many other companies, even in the former Soviet Union.[citation needed] Not only did others make compatible TTL parts, but compatible parts were made using many other circuit technologies as well.

The term "TTL" is applied to many successive generations of bipolar logic, with gradual improvements in speed and power consumption over about two decades. The last widely available family, 74AS/ALS Advanced Schottky, was introduced in 1985.[7] As of 2008, Texas Instruments continues to supply the more general-purpose chips in numerous obsolete technology families, albeit at increased prices. Typically, TTL chips integrate no more than a few hundred transistors each. Functions within a single package generally range from a few logic gates to a microprocessor bit-slice. TTL also became important because its low cost made digital techniques economically practical for tasks previously done by analog methods.[8]

The Kenbak-1, one of the first personal computers, used TTL for its CPU instead of a microprocessor chip, which was not available in 1971.[9] The 1973 Xerox Alto and 1981 Star workstations, which introduced the graphical user interface, used TTL circuits integrated at the level of ALUs and bitslices, respectively. Most computers used TTL-compatible logic between larger chips well into the 1990s. Until the advent of programmable logic, discrete bipolar logic was used to prototype and emulate microarchitectures under development.

[edit] Theory

Simplified schematic of a two-input TTL NAND gate.

Standard TTL NAND, one of four in 7400

TTL contrasts with the preceding resistor–transistor logic (RTL) and diode–transistor logic (DTL) generations by using transistors not only to amplify the output but also to isolate the inputs. The p-n junction of a diode has considerable capacitance, so changing the logic level of an input connected to a diode, as in DTL, requires considerable time and energy.

As shown in the top schematic at right, the fundamental concept of TTL is to isolate the inputs using a common-base connection, and to amplify the function using a common-emitter connection. Note that the base of the output transistor is driven high only by the forward-biased base–collector junction of the input transistor. The second schematic adds a "totem-pole output". When V2 is off (output equals 1), the resistors turn V3 on and V4 off, resulting in a stronger 1 output. When V2 is on, it activates V4, driving 0 to the output. The diode forces the emitter of V3 to ~0.7 V, while R2 and R4 are chosen to pull its base to a lower voltage, turning it off. Removing the pull-up and pull-down resistors from the output stage allows the strength of the gate to be increased without proportionally affecting power consumption.[10][11]

TTL is particularly well suited to integrated circuits because the inputs of a gate may all be integrated into a single base region to form a multiple-emitter transistor. Such a highly customized part would increase the cost of a circuit built from individually packaged transistors, but, by combining several small components into one larger on-chip device, it reduces the cost of implementation on an IC.

As with all bipolar logic, a small current must be drawn from a TTL input to ensure proper logic levels. The total current drawn must be within the capacities of the preceding stage, which limits the number of nodes that can be connected (the fanout).
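The fanout limit follows directly from this current budget. As a sketch, using the commonly quoted figures for standard TTL (a 16 mA output sink capacity and 1.6 mA drawn per input, assumptions here rather than values from this article):

```python
import math

def fanout(i_out_max_ma, i_in_max_ma):
    """Number of inputs one output can drive: output current capacity
    divided by the current each input draws (a small epsilon guards
    against floating-point rounding just below an integer)."""
    return math.floor(i_out_max_ma / i_in_max_ma + 1e-9)

# Standard TTL (assumed typical figures): an output sinks up to 16 mA
# in the low state, and each input needs 1.6 mA sunk.
print(fanout(16.0, 1.6))  # 10
```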

All standardized common TTL circuits operate from a 5-volt power supply. A TTL input signal is defined as "low" when between 0 V and 0.8 V with respect to the ground terminal, and "high" when between 2.2 V and 5 V[12] (precise logic levels vary slightly between sub-types). TTL outputs are typically restricted to narrower limits of between 0 V and 0.4 V for a "low" and between 2.6 V and 5 V for a "high", providing 0.4 V of noise immunity. Standardization of the TTL levels was so ubiquitous that complex circuit boards often contained TTL chips made by many different manufacturers, selected for availability and cost, with compatibility assured; two circuit-board units off the same assembly line on successive days or weeks might carry a different mix of chip brands in the same positions on the board. Within usefully broad limits, logic gates could be treated as ideal Boolean devices without concern for electrical limitations.
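These level definitions, and the 0.4 V noise margin they imply, can be captured in a few lines; the thresholds are the ones quoted above, used as a sketch rather than figures from any particular datasheet:

```python
# TTL level limits quoted above (volts)
V_IL, V_IH = 0.8, 2.2   # input "low" maximum, input "high" minimum
V_OL, V_OH = 0.4, 2.6   # output "low" maximum, output "high" minimum

def classify_input(v):
    """Classify a voltage as seen by a TTL input."""
    if 0.0 <= v <= V_IL:
        return "low"
    if V_IH <= v <= 5.0:
        return "high"
    return "undefined"

# Noise margin: how much noise a worst-case valid output can pick up
# before the receiving input may misread it.
nm_low = V_IL - V_OL    # 0.4 V
nm_high = V_OH - V_IH   # 0.4 V

print(classify_input(0.3), classify_input(3.6), round(nm_low, 2))
```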

Packaging

Like most integrated circuits of the period 1965–1990, TTL devices were usually packaged in through-hole, dual in-line packages with between 14 and 24 lead wires, usually made of epoxy plastic (PDIP) or sometimes of ceramic (CDIP). Standard DIP packages have pins positioned on a rectangular grid with 0.1 inch spacing, and most or all TTL ICs in DIP packages used this spacing (though some other ASIC chips have been packaged in through-hole DIP packages with finer pin spacing); 14- and 16-pin DIP packages (with the two rows of pins spaced 0.3 inches apart) were most common for TTL ICs. Beam-lead chip dice without packages were made for assembly into larger arrays as hybrid integrated circuits. Parts for military and aerospace applications were packaged in flat packs, a form of surface-mount package, with leads suitable for welding or soldering to printed circuit boards. Today, many TTL-compatible devices are available in surface-mount packages, which are available in a wider array of types than through-hole packages.

Comparison with other logic families

Main article: logic family

TTL devices consume substantially more power than equivalent CMOS devices at rest, but power consumption does not increase with clock speed as rapidly as for CMOS devices. Compared to contemporary ECL circuits, TTL uses less power and has easier design rules but is substantially slower. Designers can combine ECL and TTL devices in the same system to achieve best overall performance and economy, but level-shifting devices are required between the two logic families. TTL is less sensitive to damage from electrostatic discharge than early CMOS devices.

Due to the output structure of TTL devices, the output impedance is asymmetrical between the high and low state, making them unsuitable for driving transmission lines. This drawback is usually overcome by buffering the outputs with special line-driver devices where signals need to be sent through cables. ECL, by virtue of its symmetric low-impedance output structure, does not have this drawback.

The TTL "totem-pole" output structure often has a momentary overlap when both the upper and lower transistors are conducting, resulting in a substantial pulse of current drawn from the supply. These pulses can couple in unexpected ways between multiple integrated circuit packages, resulting in reduced remaining noise margin and lower performance. TTL systems usually have a decoupling capacitor for every one or two IC packages, so that a current pulse from one chip does not momentarily reduce the supply voltage to the others.
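A rough sizing rule for those decoupling capacitors follows from Q = CV; the pulse figures below are illustrative assumptions, not measured TTL values:

```python
def min_decoupling_farads(pulse_current_a, pulse_width_s, allowed_droop_v):
    """Smallest bypass capacitor that can supply a current pulse of the
    given size and width while letting the local supply rail droop by
    no more than allowed_droop_v (from Q = C*V, so C >= I*t/V)."""
    return pulse_current_a * pulse_width_s / allowed_droop_v

# Assumed example: a 50 mA totem-pole overlap pulse lasting 10 ns,
# with 50 mV of allowed supply droop.
c = min_decoupling_farads(0.050, 10e-9, 0.050)
print(f"{c:.1e}")  # 1.0e-08 farads, i.e. 10 nF
```

In practice a larger value (commonly tens of nanofarads per package) is used to cover parasitics and simultaneous switching; the calculation only bounds the minimum.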

Several manufacturers now supply CMOS logic equivalents with TTL-compatible input and output levels, usually bearing part numbers similar to the equivalent TTL component and with the same pinouts. For example, the 74HCT00 series provides many drop-in replacements for bipolar 7400 series parts, but uses CMOS technology.

Sub-types

Successive generations of technology produced compatible parts with improved power consumption or switching speed, or both. Although vendors uniformly marketed these various product lines as TTL with Schottky diodes, some of the underlying circuits, such as used in the LS family, could rather be considered DTL.[13]

Variations of and successors to the basic TTL family, which has a typical gate propagation delay of 10 ns and a power dissipation of 10 mW per gate, for a power-delay product (PDP) or switching energy of about 100 pJ, include:

* Low-power TTL (L), which traded switching speed (33 ns) for a reduction in power consumption (1 mW) (now essentially replaced by CMOS logic)

* High-speed TTL (H), with faster switching than standard TTL (6 ns) but significantly higher power dissipation (22 mW)

* Schottky TTL (S), introduced in 1969, which used Schottky diode clamps at gate inputs to prevent charge storage and improve switching time. These gates operated more quickly (3 ns) but had higher power dissipation (19 mW)

* Low-power Schottky TTL (LS) — used the higher resistance values of low-power TTL and the Schottky diodes to provide a good combination of speed (9.5 ns) and reduced power consumption (2 mW), and a PDP of about 20 pJ. Probably the most common type of TTL, these were used as glue logic in microcomputers, essentially replacing the former H, L, and S sub-families.

* Fast (F) and Advanced-Schottky (AS) variants of LS from Fairchild and TI, respectively, circa 1985, with "Miller-killer" circuits to speed up the low-to-high transition. These families achieved PDPs of 10 pJ and 4 pJ, respectively, the lowest of all the TTL families.

* Most manufacturers offer commercial and extended temperature ranges: for example, Texas Instruments 7400 series parts are rated from 0 to 70 °C, and 5400 series devices over the military-specification temperature range of −55 to +125 °C.

* Radiation-hardened devices are offered for space applications.

* Special quality levels and high-reliability parts are available for military and aerospace applications.

* Low-voltage TTL (LVTTL) for 3.3-volt power supplies and memory interfacing.
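The speed and power figures above multiply directly into the power-delay products cited. A quick check, with the per-family figures copied from the list:

```python
# (propagation delay in ns, power per gate in mW) from the list above
families = {
    "TTL": (10.0, 10.0),
    "L":   (33.0, 1.0),
    "H":   (6.0, 22.0),
    "S":   (3.0, 19.0),
    "LS":  (9.5, 2.0),
}

def pdp_pj(delay_ns, power_mw):
    # 1 ns x 1 mW = 1 pJ, so the product reads directly in picojoules.
    return delay_ns * power_mw

for name, (delay, power) in families.items():
    print(f"{name:>3}: {pdp_pj(delay, power):5.1f} pJ")
# TTL comes out at 100 pJ and LS at 19 pJ, matching the ~100 pJ and
# ~20 pJ figures quoted above.
```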

Inverters as analog amplifiers

While designed for use with logic-level digital signals, a TTL inverter can be biased to act as an analog amplifier. Such amplifiers may be useful in instruments that must convert analog signals to the digital domain, but would not ordinarily be used where analog amplification is the primary purpose.[14] TTL inverters can also be used in crystal oscillators, where their analog amplification ability is significant in the analysis of oscillator performance.

Applications

Before the advent of VLSI devices, TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers, such as the DEC VAX and Data General Eclipse, and for equipment such as machine tool numerical controls, printers, and video display terminals. As microprocessors became more functional, TTL devices became important for "glue logic" applications, such as fast bus drivers on a motherboard, which tie together the function blocks realized in VLSI elements.

Captive manufacture

At least one manufacturer, IBM, produced non-compatible TTL circuits for its own use; IBM used the technology in the IBM System/38, IBM 4300, and IBM 3081.[15]

See also

* Resistor–transistor logic (RTL)

* Diode–transistor logic (DTL)

* Emitter-coupled logic (ECL)

* Positive emitter-coupled logic (PECL)

* Complementary metal–oxide–semiconductor (CMOS)

* Integrated injection logic (I2L)

* Digital circuit

* Logic family

Notes

1. ^ Eren, H., 2003.

2. ^ Buie, J., 1966.

3. ^ The Computer History Museum, 2007.

4. ^ The Computer History Museum, 2007.

5. ^ Engineering Staff, 1973.

6. ^ L. W. Turner (ed.), Electronics Engineer's Reference Book, 4th ed. London: Newnes-Butterworth, 1976. ISBN 0 408 00168

7. ^ Texas Instruments, 1985

8. ^ Lancaster, 1975, preface.

9. ^ Klein, 2008.

10. ^ Transistor-Transistor Logic (TTL), 2005, p. 1.

11. ^ Tala, 2006.

12. ^ Standard TTL logic levels, n.d.

13. ^ Ayers, n.d.

14. ^ Wobschall, 1987, pp. 209-211.

15. ^ Pittler, Powers, and Schnabel, 1982, p. 5.

References

* Ayers, J. UConn EE 215 notes for lecture 4. Archived web page from the University of Connecticut. n.d. Retrieved 17 September 2008.

* Buie, J. Coupling Transistor Logic and Other Circuits. U.S. Patent 3,283,170. United States Patent and Trademark Office. 1 November 1966.

* The Computer History Museum. 1963 - Standard Logic Families Introduced. 2007. Retrieved 16 April 2008.

* Engineering Staff. The TTL Data Book for Design Engineers. 1st Ed. Dallas: Texas Instruments. 1973.

* Eren, H. Electronic Portable Instruments: Design and Applications. CRC Press. 2003. ISBN 0849319986. Google preview available.

* Fairchild Semiconductor. An Introduction to and Comparison of 74HCT TTL Compatible CMOS Logic (Application Note 368). 1984. (for relative ESD sensitivity of TTL and CMOS.)

* Horowitz, P. and Hill, W. The Art of Electronics. 2nd Ed. Cambridge University Press. 1989. ISBN 0-521-37095-7.

* Klein, E. Kenbak-1. Vintage-. 2008.

* Lancaster, D. TTL Cookbook. Indianapolis: Howard W. Sams and Co. 1975. ISBN 0-672-21035-5.

* Millman, J. Microelectronics Digital and Analog Circuits and Systems. New York:McGraw-Hill Book Company. 1979. ISBN 0-07-042327-X

* Pittler, M.S., Powers, D.M., and Schnabel, D.L. System development and technology aspects of the IBM 3081 Processor Complex. IBM Journal of Research and Development. 26 (1982), no. 1:2–11.

* Standard TTL logic levels. n.d. Twisted Pair Software.

* Tala, D. K. Digital Logic Gates Part-V. asic-. 2006.

* Texas Instruments. Advanced Schottky Family. 1985. Retrieved 17 September 2008.

* Transistor-Transistor Logic (TTL). 2005. Retrieved 17 September 2008.

* Wobschall, D. Circuit Design for Electronic Instrumentation: Analog and Digital Devices from Sensor to Display. 2d edition. New York: McGraw Hill 1987. ISBN 0-07-071232-8

Integrated circuit


Integrated circuit of Atmel Diopsis 740 System on Chip showing memory blocks, logic and input/output pads around the periphery

Microchips (EPROM memory) with a transparent window, showing the integrated circuit inside. Note the fine silver-colored wires that connect the integrated circuit to the pins of the package. The window allows the memory contents of the chip to be erased, by exposure to strong ultraviolet light in an eraser device.

In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured in the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.

A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.

This article is about monolithic integrated circuits.

Introduction

Synthetic detail of an integrated circuit through four layers of planarized copper interconnect, down to the polysilicon (pink), wells (greyish), and substrate (green).

Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Furthermore, much less material is used to construct a circuit as a packaged IC die than as a discrete circuit. Performance is high since the components switch quickly and consume little power (compared to their discrete counterparts), because the components are small and close together. As of 2006, chip areas range from a few square mm to around 350 mm², with up to 1 million transistors per mm².

Invention

Jack Kilby's original integrated circuit

The integrated circuit was conceived by a radar scientist, Geoffrey W. A. Dummer (1909–2002), working for the Royal Radar Establishment of the British Ministry of Defence, and published at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[1] He promoted his ideas at many public symposia.

Dummer unsuccessfully attempted to build such a circuit in 1956.

The integrated circuit can be credited as being invented by both Jack Kilby of Texas Instruments[2] and Robert Noyce of Fairchild Semiconductor,[3] working independently of each other. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958.[2] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[4] Noyce developed his own idea of an integrated circuit about half a year after Kilby, but his chip solved many practical problems that Kilby's had not: Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.

Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device [5] showing five transistors on a common substrate arranged in a 2-stage amplifier arrangement. Jacobi discloses small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.

A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a compact two- or three-dimensional grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[6] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation, achieved by the action of a biased p–n junction (the diode), as a key concept behind the IC.[7]

See: Other variations of vacuum tubes for precursor concepts such as the Loewe 3NF.

Generations

SSI, MSI and LSI

The first integrated circuits contained only a few transistors. Such devices, called "Small-Scale Integration" (SSI), used circuits containing tens of transistors.

SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology[citation needed], while the Minuteman missile forced it into mass-production.

These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements to get the production costs from $1000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars).[citation needed] They began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).

They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.

Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.

VLSI

Main article: Very-large-scale integration

Upper interconnect layers on an Intel 80486DX2 microprocessor die.

The final step in the development process, starting in the 1980s and continuing through the present, was "Very Large-Scale Integration" (VLSI). This could be said to start with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2007.

There was no single breakthrough that allowed this increase in complexity, though many factors helped. Manufacturing moved to smaller design rules and cleaner fabs, allowing chips with more transistors to be produced at adequate yield, as summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Better texts, such as the landmark textbook by Mead and Conway, helped schools educate more designers.

In 1986 the first one megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million transistor mark in 1989 and the billion transistor mark in 2005.[8] The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.[9]

ULSI, WSI, SOC and 3D-IC

To reflect further growth in complexity, the term ULSI, standing for "Ultra-Large-Scale Integration", was proposed for chips of more than 1 million transistors.

Wafer-scale integration (WSI) is a system of building very-large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.

System-on-a-Chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging, below).

Three Dimensional Integrated Circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.

Advances in integrated circuits

The integrated circuit from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.

Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality—see Moore's law which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves—the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).
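The two-year doubling in that modern reading of Moore's law compounds quickly, as a small sketch shows:

```python
def projected_transistors(initial_count, years, doubling_period_years=2.0):
    """Idealized Moore's-law projection: the transistor count doubles
    once every doubling_period_years."""
    return initial_count * 2 ** (years / doubling_period_years)

# From 1 million transistors in 1989, a strict two-year doubling
# predicts about 256 million by 2005 (16 years = 8 doublings); actual
# processors did better, passing a billion that year as noted above.
print(int(projected_transistors(1_000_000, 16)))  # 256000000
```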

Popularity of ICs

Main article: Microchip revolution

Only a half century after their development was initiated, integrated circuits have become ubiquitous. Computers, cellular phones, and other digital appliances are now inextricable parts of the structure of modern societies. That is, modern computing, communications, manufacturing and transport systems, including the Internet, all depend on the existence of integrated circuits. Indeed, many scholars believe that the digital revolution—brought about by the microchip revolution—was one of the most significant occurrences in the history of humankind.

Classification

A CMOS 4000 IC in a DIP

Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).

Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available, instead of requiring each difficult analog circuit to be designed from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

Manufacture

Fabrication

Main article: Semiconductor fabrication

Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.

The semiconductors of the periodic table were identified as the most likely materials for a solid-state replacement of the vacuum tube by researchers such as William Shockley at Bell Laboratories, starting in the 1930s. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for integrated circuits (ICs), although some III–V compounds such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells, and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:

* Imaging

* Deposition

* Etching

The main process steps are supplemented by doping and cleaning.

Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminum) tracks deposited on them.

* Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (polysilicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.

* In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or metal) crosses a diffusion layer.

* Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates. Capacitors of a wide range of sizes are common on ICs.

* Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance.

* More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.
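The resistor rule above (length-to-width ratio times sheet resistivity) is simple arithmetic; the 20 Ω/square figure below is an assumed illustrative value, not a process constant:

```python
def strip_resistance_ohms(length_um, width_um, sheet_res_ohms_per_square):
    """On-chip resistor value: the number of 'squares' along the strip
    (length/width) multiplied by the material's sheet resistivity."""
    squares = length_um / width_um
    return squares * sheet_res_ohms_per_square

# A 50 um x 2 um strip at an assumed 20 ohms/square: 25 squares.
print(strip_resistance_ohms(50.0, 2.0, 20.0))  # 500.0
```

Because the resistance depends only on the ratio of length to width, the same value can be laid out at many physical sizes; the meandering shapes mentioned above simply pack a long strip into a small area.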

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random-access memory is the most regular type of integrated circuit; the highest-density devices are thus memories, but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) bond wires which are welded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Test cost can account for over 25% of the cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or higher cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over a billion US dollars to construct[10], because much of the operation is automated. The most advanced processes employ the following techniques:

* The wafers are up to 300 mm in diameter (wider than a common dinner plate).

* Use of a 65 nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD are using 45 nanometer processes for their CPU chips. IBM and AMD are developing a 45 nm process using immersion lithography.

* Copper interconnects where copper wiring replaces aluminum for interconnects.

* Low-K dielectric insulators.

* Silicon on insulator (SOI)

* Strained silicon in a process used by IBM known as strained silicon directly on insulator (SSDOI)

Packaging

Main article: Integrated circuit packaging

Early USSR-made integrated circuit

The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit, a carrier which occupies an area about 30–50% less than an equivalent DIP, with a typical thickness that is 70% less. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.

Small-outline integrated circuit (SOIC) and PLCC packages. In the late 1990s, PQFP and TSOP packages became the most common for high pin count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, the result is called a SiP, for system in package. When multiple dies are combined on a small substrate, often ceramic, it is called an MCM, or multi-chip module. The boundary between a large MCM and a small printed circuit board is sometimes fuzzy.

Legal protection of semiconductor chip layouts

Main article: Semiconductor Chip Protection Act of 1984

Prior to 1984, it was not necessarily illegal to produce a competing chip with an identical layout. As the legislative history for the Semiconductor Chip Protection Act of 1984, or SCPA, explained, patent and copyright protection for chip layouts, or topographies, were largely unavailable. This led to considerable complaint by U.S. chip manufacturers--notably, Intel, which took the lead in seeking legislation, along with the Semiconductor Industry Association (SIA)--against what they termed "chip piracy."

A 1984 addition to US law, the SCPA, made all so-called mask works (i.e., chip topographies) protectable if registered with the U.S. Copyright Office. Similar rules apply in most other countries that manufacture ICs. (This is a simplified explanation; see the SCPA article for legal details.)

Other developments

In the 1980s programmable integrated circuits were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices named FPGAs (Field Programmable Gate Arrays) can now implement tens of thousands of LSI circuits in parallel and operate up to 550 MHz.

The techniques perfected by the integrated circuits industry over the last three decades have been used to create microscopic machines, known as MEMS. These devices are used in a variety of commercial and military applications. Example commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone chip and Atheros's 802.11 card.

Future developments seem to follow the multi-core paradigm, already used by the Intel and AMD dual-core processors. Intel recently unveiled a prototype chip, not intended for commercial sale, that carries 80 processor cores, each capable of handling its own task independently of the others. This is a response to the heat-versus-speed limit that existing transistor technology is about to reach. The design poses a new challenge for chip programming; X10 is an open-source programming language designed to assist with this task.[11]

Silicon graffiti

Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as Chip Art, Silicon Art, Silicon Graffiti or Silicon Doodling. For an overview of this practice, see the article The Secret Art of Chip Graffiti, from the IEEE magazine Spectrum and the Silicon Zoo.

Key industrial and academic data


Notable ICs

* The 555 timer, a multivibrator sub-circuit common in electronic timing circuits

* The 741 operational amplifier

* 7400 series TTL logic building blocks

* 4000 series, the CMOS counterpart to the 7400 series

* Intel 4004, the world's first microprocessor

* The MOS Technology 6502 and Zilog Z80 microprocessors, used in many home computers of the early 1980s

Programming language



A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that specify the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

Many programming languages have some form of written specification of their syntax and semantics, since computers require precisely defined instructions. Some (such as C) are defined by a specification document (for example, an ISO Standard), while others (such as Perl) have a dominant implementation.

The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as automated looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field,[1] with many more being created every year.

Contents

* 1 Definitions

* 2 Usage

* 3 Elements

o 3.1 Syntax

o 3.2 Static semantics

o 3.3 Type system

+ 3.3.1 Typed versus untyped languages

+ 3.3.2 Static versus dynamic typing

+ 3.3.3 Weak and strong typing

o 3.4 Execution semantics

o 3.5 Core library

* 4 Practice

o 4.1 Specification

o 4.2 Implementation

* 5 History

o 5.1 Early developments

o 5.2 Refinement

o 5.3 Consolidation and growth

o 5.4 Measuring language usage

* 6 Taxonomies

* 7 See also

* 8 References

* 9 Further reading

* 10 External links

Definitions

Traits often considered important for constituting a programming language:

* Function: A programming language is a language used to write computer programs, which involve a computer performing some kind of computation[2] or algorithm and possibly control external devices such as printers, robots,[3] and so on.

* Target: Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines. Some programming languages are used by one device to control another. For example, PostScript programs are frequently created by another program to control a computer printer or display.

* Constructs: Programming languages may contain constructs for defining and manipulating data structures or controlling the flow of execution.

* Expressive power: The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL and Charity are examples of languages that are not Turing complete, yet often called programming languages.[4][5]

Some authors restrict the term "programming language" to those languages that can express all possible algorithms;[6] sometimes the term "computer language" is used for more limited artificial languages.

Non-computational languages, such as markup languages like HTML or formal grammars like BNF, are usually not considered programming languages. A programming language (which may or may not be Turing complete) may be embedded in these non-computational (host) languages.

Usage

A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives). [7]

Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program.

Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language is used to give commands to a software application (such as a shell) it is called a scripting language[citation needed].

Many languages have been designed from scratch, altered to meet new needs, combined with other languages, and eventually fallen into disuse. Although there have been attempts to design one "universal" computer language that serves all purposes, all of them have failed to be generally accepted as filling this role.[8] The need for diverse computer languages arises from the diversity of contexts in which languages are used:

* Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.

* Programmers range in expertise from novices who need simplicity above all else, to experts who may be comfortable with considerable complexity.

* Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.

* Programs may be written once and not change for generations, or they may undergo nearly constant modification.

* Finally, programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.

One common trend in the development of programming languages has been to add greater problem-solving ability at a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer, letting them write more functionality per unit time.[9]

Natural language processors have been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as "foolish".[10] Alan Perlis was similarly dismissive of the idea.[11]

Elements

All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.

Syntax

Parse tree of Python code with inset tokenization

Syntax highlighting is often used to aid programmers in recognizing elements of source code; the language shown here is Python.

For more details on this topic, see Syntax (programming languages).

A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.

Programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus-Naur Form (for grammatical structure). Below is a simple grammar, based on Lisp:

expression ::= atom | list

atom ::= number | symbol

number ::= [+-]?['0'-'9']+

symbol ::= ['A'-'Z''a'-'z'].*

list ::= '(' expression* ')'

This grammar specifies the following:

* an expression is either an atom or a list;

* an atom is either a number or a symbol;

* a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;

* a symbol is a letter followed by zero or more of any characters (excluding whitespace); and

* a list is a matched pair of parentheses, with zero or more expressions inside it.

The following are examples of well-formed token sequences in this grammar: '12345', '()', '(a b c232 (1))'
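The grammar above is small enough to implement directly. The following is a sketch (all function names are my own) of a tokenizer and recursive-descent parser for it in Python, taking one liberty: parentheses are excluded from symbols, so that "(a)" tokenizes as three tokens rather than as "(" followed by the symbol "a)".

```python
import re

# One named group per lexical rule of the toy grammar; re.VERBOSE lets the
# pattern be laid out rule by rule.
TOKEN_RE = re.compile(r"""\s*(?:
      (?P<lparen>\() |
      (?P<rparen>\)) |
      (?P<number>[+-]?[0-9]+) |
      (?P<symbol>[A-Za-z][^\s()]*)
    )""", re.VERBOSE)

def tokenize(text):
    text = text.rstrip()
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN_RE.match(text, pos)
        if not m:
            raise SyntaxError(f"bad token at position {pos}")
        pos = m.end()
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
    return tokens

def parse(tokens):
    def expression(i):                 # expression ::= atom | list
        kind, value = tokens[i]
        if kind == "number":           # number ::= [+-]?['0'-'9']+
            return int(value), i + 1
        if kind == "symbol":           # symbol ::= a letter, then more chars
            return value, i + 1
        if kind == "lparen":           # list ::= '(' expression* ')'
            items, i = [], i + 1
            while tokens[i][0] != "rparen":
                item, i = expression(i)
                items.append(item)
            return items, i + 1
        raise SyntaxError(f"unexpected {kind}")
    result, i = expression(0)
    if i != len(tokens):
        raise SyntaxError("trailing tokens")
    return result

print(parse(tokenize("12345")))            # 12345
print(parse(tokenize("()")))               # []
print(parse(tokenize("(a b c232 (1))")))   # ['a', 'b', 'c232', [1]]
```

The three well-formed token sequences from the text parse to an integer, an empty list, and a nested list respectively.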

Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:

* "Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning.

* "John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.

The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (assuming complex is a structure with members real and im: because p is a null pointer, the operations p->real and p->im have no meaning):

complex *p = NULL;

complex abs_p = sqrt (p->real * p->real + p->im * p->im);

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[12]

Static semantics

The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[13] The most important of these restrictions are covered by type systems.

Type system

Main articles: Type system and Type safety

A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. This generally includes a description of the data structures that can be constructed in the language. The design and study of type systems using formal mathematics is known as type theory.

Typed versus untyped languages

A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types.[14] For example, "this text between the quotes" is a string. In most programming languages, dividing a number by a string has no meaning. Most modern programming languages will therefore reject any program attempting to perform such an operation. In some languages, the meaningless operation will be detected when the program is compiled ("static" type checking), and rejected by the compiler, while in others, it will be detected when the program is run ("dynamic" type checking), resulting in a runtime exception.
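Dynamic type checking can be seen in a few lines of Python (the function name here is illustrative): a meaningless operation passes translation without complaint and is rejected only at the moment it executes.

```python
def halve(x):
    return x / 2          # meaningful for numbers only

print(halve(10))          # 5.0 -- the operation checks out at run time

try:
    halve("ten")          # nothing complains until this call executes
except TypeError as exc:
    print("run-time type error:", exc)
```

In a statically checked language the equivalent misuse would instead be rejected by the compiler before the program ever ran.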

A special case of typed languages are the single-type languages. These are often scripting or markup languages, such as Rexx or SGML, and have only one data type—most commonly character strings which are used for both symbolic and numeric data.

In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths.[14] High-level languages which are untyped include BCPL and some varieties of Forth.

In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[14] Many production languages provide means to bypass or subvert the type system.

Static versus dynamic typing

In static typing all expressions have their types determined prior to the program being run (typically at compile-time). For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.[14]

Statically-typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically-typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.[15]

Dynamic typing, also called latent typing, determines the type-safety of operations at runtime; in other words, types are associated with runtime values rather than textual expressions.[14] As with type-inferred languages, dynamically typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, making debugging more difficult. Ruby, Lisp, JavaScript, and Python are dynamically typed.
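Both properties mentioned above can be illustrated with a minimal Python sketch (the names are my own): a single variable refers to values of different types over time, and a latent type error stays hidden until the faulty branch actually executes.

```python
value = 42            # value currently refers to an int
value = "forty-two"   # ...now to a str: types attach to values, not names
value = [4, 2]        # ...now to a list
print(type(value).__name__)        # list

def buggy(flag):
    if flag:
        return len(42)             # a type error, but only on this branch
    return "safe"

print(buggy(False))                # "safe" -- the error goes undetected
```

Only a call with flag set to True would surface the TypeError, which is exactly why debugging dynamically typed code can be harder.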

Weak and strong typing

Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[14] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run time.

Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.[14] Strongly-typed languages are often termed type-safe or safe.

An alternative definition for "weakly typed" refers to languages, such as Perl and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors.
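Python, by comparison, is relatively strongly typed in this sense: the analogous expression fails rather than converting implicitly, and the conversion must be written out (a small illustrative sketch):

```python
x = "3"                    # a string of digits, as might come from input

try:
    print(x + 1)           # no implicit string-to-number conversion...
except TypeError as exc:
    print("TypeError:", exc)

print(int(x) + 1)          # ...the programmer must convert explicitly: 4
```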

Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[16][17]

Execution semantics

Once data has been specified, the machine must be instructed to perform operations on the data. The execution semantics of a language defines how and when the various constructs of a language should produce a program behavior.

For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements.

Core library

For more details on this topic, see Standard library.

Most programming languages have an associated core library (sometimes known as the 'Standard library', especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.

A language's core library is often treated as part of the language by its users, although the designers may have treated it as a separate entity. Many language specifications define a core that must be made available in all implementations, and in the case of standardized languages this core library may be required. The line between a language and its core library therefore differs from language to language. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a "block") constructs an instance of the library's BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.
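The same blurring is visible in Python, where a literal constructs an instance of a built-in library type and the for statement is defined in terms of the iterator protocol supplied by the core library:

```python
s = "hello"
print(type(s) is str)          # True: a string literal is a str instance

nums = iter([1, 2, 3])         # the protocol a for loop uses internally
print(next(nums), next(nums))  # 1 2
```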

Practice

A language's designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.

Specification

For more details on this topic, see Programming language specification.

The specification of a programming language is intended to provide a definition that the language users and the implementors can use to determine whether the behavior of a program is correct, given its source code.

A programming language specification can take several forms, including the following:

* An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., the C language), or a formal semantics (e.g., the Standard ML[18] and Scheme[19] specifications).

* A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in natural or a formal language.

* A reference or model implementation, sometimes written in the language being specified (e.g., Prolog or ANSI REXX[20]). The syntax and semantics of the language are explicit in the behavior of the reference implementation.

Implementation

For more details on this topic, see Programming language implementation.

An implementation of a programming language provides a way to execute programs written in that language on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.

The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of the BASIC programming language compile and then execute the source a line at a time.
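That blended scheme can be sketched in a few lines of Python, whose compile and exec built-ins make the line-at-a-time approach explicit (the program text is illustrative):

```python
# Compile each source line to bytecode, then execute it immediately --
# roughly the scheme of the BASIC implementations described above.
program = """\
x = 6
y = 7
result = x * y
"""

env = {}                                    # shared variable environment
for line in program.splitlines():
    code = compile(line, "<line>", "exec")  # translate this line only...
    exec(code, env)                         # ...and run it at once

print(env["result"])                        # 42
```

Each line is fully translated before it runs, yet no complete compiled program ever exists, which is why the compile/interpret boundary is blurry here.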

Programs that are executed directly on the hardware usually run several orders of magnitude faster than those that are interpreted in software.[citation needed]

One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode that are about to be used into machine code, for direct execution on the hardware.

History

A selection of textbooks that teach programming, in languages both popular and obscure. These are only a few of the thousands of programming languages and dialects that have been designed in history.

For more details on this topic, see History of programming languages.

Early developments

The first programming languages predate the modern computer. The 19th century had "programmable" looms and player piano scrolls which implemented what are today recognized as examples of domain-specific programming languages. By the beginning of the twentieth century, punch cards encoded data and directed mechanical processing. In the 1930s and 1940s, the formalisms of Alonzo Church's lambda calculus and Alan Turing's Turing machines provided mathematical abstractions for expressing algorithms; the lambda calculus remains influential in language design.[21]

In the 1940s, the first electrically powered digital computers were created. The first high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945.

Programmers of early 1950s computers, notably UNIVAC I and IBM 701, used machine language programs, that is, the first generation language (1GL). 1GL programming was quickly superseded by similarly machine-specific, but mnemonic, second generation languages (2GL) known as assembly languages or "assembler". Later in the 1950s, assembly language programming, which had evolved to include the use of macro instructions, was followed by the development of "third generation" programming languages (3GL), such as FORTRAN, LISP, and COBOL. 3GLs are more abstract and are "portable", or at least implemented similarly on computers that do not support the same native machine code. Updated versions of all of these 3GLs are still in general use, and each has strongly influenced the development of later languages.[22] At the end of the 1950s, the language formalized as Algol 60 was introduced, and most later programming languages are, in many respects, descendants of Algol.[22] The format and use of the early programming languages was heavily influenced by the constraints of the interface.[23]

Refinement

The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use, though many aspects were refinements of ideas in the very first Third-generation programming languages:

* APL introduced array programming and influenced functional programming.[24]

* PL/I (NPL) was designed in the early 1960s to incorporate the best ideas from FORTRAN and COBOL.

* In the 1960s, Simula was the first language designed to support object-oriented programming; in the mid-1970s, Smalltalk followed with the first "purely" object-oriented language.

* C was developed between 1969 and 1973 as a systems programming language, and remains popular.[25]

* Prolog, designed in 1972, was the first logic programming language.

* In 1978, ML built a polymorphic type system on top of Lisp, pioneering statically typed functional programming languages.

Each of these languages spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry.

The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether programming languages should be designed to support it.[26] Edsger Dijkstra, in a famous 1968 letter published in the Communications of the ACM, argued that GOTO statements should be eliminated from all "higher level" programming languages.[27]

The 1960s and 1970s also saw the expansion of techniques that reduced the footprint of a program and improved the productivity of the programmer and user: the card deck for an early 4GL was much smaller than a deck expressing the same functionality in a 3GL.

Consolidation and growth

The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation" languages that incorporated logic programming constructs.[28] The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decade.

One important trend in language design during the 1980s was an increased focus on programming for large-scale systems through the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, although other languages, such as PL/I, already had extensive support for modular programming. Module systems were often wedded to generic programming constructs.[29]

The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic Web sites. Java came to be used for server-side programming. These developments were not fundamentally novel, rather they were refinements to existing languages and paradigms, and largely based on the C family of programming languages.

Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration.[citation needed]

The 4GLs are examples of languages which are domain-specific, such as SQL, which manipulates and returns sets of data rather than the scalar values canonical to most programming languages. Perl, for example, can hold multiple 4GL programs, as well as multiple JavaScript programs, in 'here documents' within its own Perl code, and can use variable interpolation in the 'here document' to support multi-language programming.[30]

Measuring language usage

Main article: Measuring programming language popularity

It is difficult to determine which programming languages are most widely used, and what usage means varies by context. One language may occupy the greater number of programmer hours, another may have more lines of code, and a third may utilize the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; C in embedded applications and operating systems; and other languages are regularly used to write many different kinds of applications.

Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:

* counting the number of job advertisements that mention the language[31]

* the number of books sold that teach or describe the language[32]

* estimates of the number of existing lines of code written in the language—which may underestimate languages not often found in public searches[33]

* counts of language references (i.e., to the name of the language) found using a web search engine.

Combining and averaging information from various internet sites, one study claims that in 2008 the 10 most cited programming languages were (in alphabetical order): C, C++, C#, Java, JavaScript, Perl, PHP, Python, Ruby, and SQL.[34]

Taxonomies

For more details on this topic, see Categorical list of programming languages.

There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.

The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.
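The same two axes can be seen in a few lines of Python (a minimal sketch; the `Counter` class is invented for illustration): the class exhibits the object-oriented axis, while the threads and lock exhibit the concurrent one.

```python
import threading

# Object-oriented axis: state and behaviour bundled into a class.
class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Concurrent axis: a lock guards the shared state when several
        # threads call this method in parallel.
        with self._lock:
            self.value += 1

counter = Counter()
threads = [threading.Thread(target=counter.increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 10
```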

In broad strokes, programming languages divide into programming paradigms and a classification by intended domain of use. Paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these).[35] Some general purpose languages were designed largely with educational goals in mind.[36]

A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English-language keywords, while a minority do not. Others may be classified by whether or not they are esoteric.

[edit] See also


* Computer programming

* Lists of programming languages

* Comparison of programming languages

* Comparison of basic instructions of programming languages

* Educational programming language

* Invariant based programming

* Literate programming

* Programming language dialect

* Programming language theory

* Pseudocode

* Computer science and List of basic computer science topics

* Software engineering and List of software engineering topics


This article is about the device used in electronics prototyping. For the device used in optics labs, see optical breadboards. For the food preparation utensil, see Cutting board.

A breadboard with a completed circuit

This 1920s TRF radio manufactured by Signal is constructed on a breadboard

A breadboard (solderless breadboard, protoboard, plugboard) is a reusable, sometimes solderless,[1] device used to build a (generally temporary) prototype of an electronic circuit and to experiment with circuit designs. This is in contrast to stripboard (Veroboard) and similar prototyping printed circuit boards, which are used to build more permanent soldered prototypes or one-offs and cannot easily be reused. A variety of electronic systems may be prototyped on breadboards, from small analog and digital circuits to complete central processing units (CPUs).

The term breadboard derives from an early form of point-to-point construction: the practice of building simple circuits (usually using valves/tubes) on a convenient wooden base, much like the cutting board used for slicing bread. The modern board can also be likened to bread itself: it is perforated with a large number of pores (holes for connections) and, like the bread most commonly eaten in America and Europe, is typically white or off-white.

A binary counter wired up on a large breadboard

The hole pattern for a typical etched prototyping PCB (printed circuit board) is similar to the node pattern of the breadboards shown above.

Contents


* 1 Evolution

* 2 Typical specifications

* 3 Bus and terminal strips

o 3.1 Diagram

* 4 Jump wires

* 5 Advanced breadboards

* 6 Limitations

* 7 Alternatives

* 8 See also

* 9 References

* 10 External links

[edit] Evolution

Over time, breadboards have evolved greatly, with the term being used for all kinds of prototype electronic devices. For example, US Patent 3,145,483,[2] filed in 1961 and granted in 1964, describes a wooden-plate breadboard with mounted springs and other facilities. Six years later, US Patent 3,496,419,[3] granted in 1970 after a 1967 filing, refers to a particular printed circuit board layout as a Printed Circuit Breadboard. Both examples also refer to and describe other types of breadboards as prior art. The classic, usually white, plastic pluggable breadboard, illustrated in this article, was designed by Ronald J. Portugal of EI Instruments Inc. in 1971.[4]

In the early days of radio, amateurs would nail bare copper wires or terminal strips to a wooden board (often literally a board for cutting bread) and solder electronic components to them.[5] Sometimes a paper schematic diagram was first glued to the board as a guide to placing terminals; then components and wires were installed over their symbols on the schematic.

The integrated circuit for the Polaroid SX-70 camera was breadboarded before Texas Instruments fabricated the custom chip. It was rumored to have been built from discrete components on a 4 ft × 8 ft piece of plywood, and was fully functional.[citation needed]

[edit] Typical specifications

A modern solderless breadboard consists of a perforated block of plastic with numerous tin plated phosphor bronze spring clips under the perforations. The spacing between the clips (lead pitch) is typically 0.1" (2.54 mm). Integrated circuits (ICs) in dual in-line packages (DIPs) can be inserted to straddle the centerline of the block. Interconnecting wires and the leads of discrete components (such as capacitors, resistors, inductors, etc.) can be inserted into the remaining free holes to complete the circuit. Where ICs are not used, discrete components and connecting wires may use any of the holes. Typically the spring clips are rated for 1 Ampere at 5 Volts and 0.333 Amperes at 15 Volts (5 Watts).
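The two current ratings quoted above are consistent with a single per-clip power limit, as a quick arithmetic check (in Python, purely for illustration) shows: power is current times voltage, and both pairs multiply out to roughly 5 W.

```python
# Each (current, voltage) pair quoted for a typical spring clip
# multiplies out to the same ~5 W power limit: P = I * V.
ratings = [(1.0, 5.0), (0.333, 15.0)]  # (amperes, volts)
powers = [current * voltage for current, voltage in ratings]
print([round(p, 1) for p in powers])  # [5.0, 5.0]
```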

[edit] Bus and terminal strips

A logical 4-bit adder with its sum outputs connected to LEDs, on a typical breadboard.

Example breadboard drawing. Two bus strips and one terminal strip form one block. Groups of 25 consecutive terminals in each bus strip are connected, as indicated by gaps in the red and blue lines. Four binding posts are depicted at the top.

Close-up of a solderless breadboard. An IC straddling the centerline is probed with an oscilloscope.

Solderless breadboards are available from several different manufacturers, but most share a similar layout. The layout of a typical solderless breadboard is made up from two types of areas, called strips. Strips consist of interconnected electrical terminals.

terminal strips

The main area, to hold most of the electronic components.

In the middle of a terminal strip of a breadboard, one typically finds a notch running in parallel to the long side. The notch is to mark the centerline of the terminal strip and provides limited airflow (cooling) to DIP ICs straddling the centerline. The clips on the right and left of the notch are each connected in a radial way; typically five clips (i.e., beneath five holes) in a row on each side of the notch are electrically connected. The five clip columns on the left of the notch are often marked as A, B, C, D, and E, while the ones on the right are marked F, G, H, I and J. When a "skinny" Dual Inline Pin package (DIP) integrated circuit (such as a typical DIP-14 or DIP-16, which have a 0.3 inch separation between the pin rows) is plugged into a breadboard, the pins of one side of the chip are supposed to go into column E while the pins of the other side go into column F on the other side of the notch.

bus strips

To provide power to the electronic components.

A bus strip usually contains two columns: one for ground and one for a supply voltage. However, some breadboards only provide a single-column power distribution bus strip on each long side. Typically the column intended for a supply voltage is marked in red, while the column for ground is marked in blue or black. Some manufacturers connect all terminals in a column. Others connect only groups of, for example, 25 consecutive terminals in a column. The latter design gives a circuit designer somewhat more control over crosstalk (inductively coupled noise) on the power supply bus. Often the groups in a bus strip are indicated by gaps in the color marking.

Bus strips typically run down one or both sides of a terminal strip or between terminal strips. On large breadboards additional bus strips can often be found on the top and bottom of terminal strips.

Some manufacturers provide separate bus and terminal strips. Others just provide breadboard blocks which contain both in one block. Often breadboard strips or blocks of one brand can be clipped together to make a larger breadboard.

In a more robust and slightly easier to handle variant, one or more breadboard strips are mounted on a sheet of metal. Typically, that backing sheet also holds a number of binding posts. These posts provide a clean way to connect an external power supply. Several images in this article show such solderless breadboards.
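The connectivity rules described above can be modelled in a few lines of Python (an illustrative sketch, not any vendor's numbering scheme; the `node` function is invented for the example): in each row of a terminal strip, holes A–E share one electrical node and holes F–J share another, with the notch isolating the two sides.

```python
# Minimal model of terminal-strip connectivity: a hole label like 'C7'
# maps to an electrical node identified by (row, side-of-notch).
LEFT, RIGHT = "ABCDE", "FGHIJ"

def node(hole):
    """Map a hole label like 'C7' to its electrical node."""
    column, row = hole[0], int(hole[1:])
    side = "L" if column in LEFT else "R"
    return (row, side)

# Holes B3 and E3 are connected; E3 and F3 are separated by the notch.
print(node("B3") == node("E3"))  # True
print(node("E3") == node("F3"))  # False
```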

[edit] Diagram

A "full size" terminal breadboard strip typically consists of around 56 to 65 rows of connectors, each row containing the above mentioned two sets of connected clips (A to E and F to J). "Small size" strips typically come with around 30 rows.

Terminal Strip:

         A B C D E   F G H I J
     1   o-o-o-o-o v o-o-o-o-o
     2   o-o-o-o-o   o-o-o-o-o
     3   o-o-o-o-o   o-o-o-o-o
         ~
         ~
    61   o-o-o-o-o   o-o-o-o-o
    62   o-o-o-o-o   o-o-o-o-o
    63   o-o-o-o-o ^ o-o-o-o-o

Bus Strip:

[edit] Jump wires

The jump wires for breadboarding can be obtained in ready-to-use jump wire sets or can be manually manufactured. The latter can become tedious work for larger circuits. Ready-to-use jump wires come in different qualities, some even with tiny plugs attached to the wire ends. Jump wire material for ready-made or home-made wires should usually be 22 AWG (0.33 mm²) solid copper, tin-plated wire - assuming no tiny plugs are to be attached to the wire ends. The wire ends should be stripped 3/16" to 5/16" (approx. 5 mm to 8 mm). Shorter stripped wires might result in bad contact with the board's spring clips (insulation being caught in the springs). Longer stripped wires increase the likelihood of short-circuits on the board. Needle-nose pliers and tweezers are helpful when inserting or removing wires, particularly on crowded boards.

Differently colored wires and color-coding discipline are often adhered to for consistency. However, the number of available colors is typically far smaller than the number of signal types or paths. Typically, a few wire colors are reserved for the supply voltages and ground (e.g., red, blue, black), some more for main signals, while the rest are assigned at random. Ready-to-use jump wire sets exist in which the color indicates the length of the wires, but these sets do not allow a meaningful color-coding scheme to be applied.

[edit] Advanced breadboards

Some manufacturers provide high-end versions of solderless breadboards. These are typically high-quality breadboard modules mounted in a flat casing that contains useful equipment for breadboarding, such as one or more power supplies, signal generators, serial interfaces, LED or LCD display modules, and logic probes.

Breadboard modules can also be found mounted on devices like microcontroller evaluation boards. They provide an easy way to add peripheral circuits to the evaluation board.

[edit] Limitations

An example of a complex circuit built on a breadboard. The circuit is an Intel 8088 single board computer.

Due to large stray capacitance (from 2–25 pF per contact point), the high inductance of some connections, and a relatively high and not very reproducible contact resistance, solderless breadboards are limited to operating at relatively low frequencies, usually below 10 MHz, depending on the nature of the circuit. The relatively high contact resistance can already be a problem for DC and very low frequency circuits. Solderless breadboards are further limited by their voltage and current ratings.
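A back-of-the-envelope calculation shows why a few tens of picofarads matter at radio frequencies but hardly at all at audio frequencies. Treating a contact's stray capacitance as a simple shunt (a simplifying assumption for illustration), its reactance is Xc = 1/(2πfC):

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """Reactance of a capacitor: Xc = 1 / (2 * pi * f * C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# 25 pF of stray capacitance is nearly invisible at audio frequencies
# but presents only a few hundred ohms at 10 MHz, enough to couple
# adjacent rows and detune high-frequency circuits.
print(round(capacitive_reactance(1e3, 25e-12)))   # ~6.4 MOhm at 1 kHz
print(round(capacitive_reactance(10e6, 25e-12)))  # ~637 Ohm at 10 MHz
```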

Breadboards usually cannot accommodate surface-mount technology (SMD) devices or components with a grid spacing other than 0.1" (2.54 mm), such as those with 2 mm spacing. Further, they cannot accommodate components with multiple rows of connectors if these connectors do not match the DIL layout, as correct electrical connectivity cannot be provided. Sometimes small PCB adapters (breakout adapters) can be used to make such components fit. The adapter carries one or more of the non-fitting components plus 0.1" (2.54 mm) connectors in the DIL layout. Larger components are usually plugged into a socket soldered onto the adapter, while smaller components (e.g., SMD resistors) are soldered directly onto it. The adapter is then plugged into the breadboard via the 0.1" connectors. However, the need to solder the component or socket onto the adapter contradicts the idea of using a solderless breadboard for prototyping in the first place.

Complex circuits can become unmanageable on a breadboard due to the large amount of wiring necessary.

[edit] Alternatives

Alternative methods to create prototypes are point-to-point construction, reminiscent of the original breadboards, wire wrap, wiring pencil, and boards like stripboard. Complicated systems, such as modern computers comprising millions of transistors, diodes and resistors, do not lend themselves to prototyping using breadboards, as sprawling designs on breadboards can be difficult to lay out and debug. Modern circuit designs are generally developed using a schematic capture and simulation system, and tested in software simulation before the first prototype circuits are built on a printed circuit board. Integrated circuit designs are a more extreme version of the same process: since producing prototype silicon is expensive, extensive software simulations are performed before fabricating the first prototypes. However, prototyping techniques are still used for some applications such as RF circuits, or where software models of components are inexact or incomplete.
