Secondary Memory
Secondary memory is where programs and data are kept on a long-term basis. Common secondary storage devices are the hard disk and floppy disks.
• The hard disk has enormous storage capacity compared to main memory.
• The hard disk is usually contained in the systems unit of a computer.
• The hard disk is used for long-term storage of programs and data.
• Data and programs on the hard disk are organized into files--named sections of the disk.
A hard disk might have a storage capacity of 40 gigabytes. This is about 300 times the amount of storage in main memory (assuming 128 megabytes of main memory). However, a hard disk is very slow compared to main memory. The reason for having two types of storage is this contrast:
|Primary memory |Secondary memory |
|Fast |Slow |
|Expensive |Cheap |
|Low capacity |Large capacity |
|Connects directly to the processor |Not connected directly to the processor |
Floppy disks are mostly used for transferring software between computer systems and for casual backup of software. They have low capacity and are very slow compared to other storage devices.
Secondary memory is not directly accessible to the CPU. Input/output channels are used to access this non-volatile memory, which does not lose its data when the system is powered off. The most familiar and widely used form of secondary memory is the hard disk. Other examples of secondary memory are USB sticks, floppy drives and Zip drives.
1. Purpose of storage
Many different forms of storage, based on various natural phenomena, have been invented. So far, no practical universal storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains several kinds of storage, each with an individual purpose.
A digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, using eight million bits, or about one megabyte, a typical computer could store a short novel.
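As a toy illustration of the point above, the following Python sketch (illustrative only; the sample text and counts are made up) encodes a string to bytes and counts the bits, then scales the estimate up to a short novel of about a million characters.

```python
# Minimal sketch: how text maps onto bits and bytes (illustrative only).
text = "A short novel is roughly a million characters."
data = text.encode("utf-8")              # text -> bytes, 8 bits per byte

print(len(data), "bytes =", len(data) * 8, "bits")

# Scaling up: about 1,000,000 characters of plain ASCII text occupy about
# one megabyte, i.e. roughly eight million bits, as noted above.
novel_chars = 1_000_000
print(novel_chars, "bytes =", novel_chars * 8, "bits")
```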
Traditionally the most important part of every computer is the central processing unit (CPU, or simply a processor), because it actually operates on data, performs any calculations, and controls all the other components.
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators or simple digital signal processors. Von Neumann machines differ in that they have a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
In practice, almost all computers use a variety of memory types, organized in a storage hierarchy around the CPU, as a trade-off between performance and cost. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage to primary, secondary, tertiary and off-line storage is also guided by cost per bit.
2. Hierarchy of storage
Various forms of storage, divided according to their distance from the central processing unit. The fundamental components of a general-purpose computer are the arithmetic and logic unit, control circuitry, storage space, and input/output devices. Technology and capacity as in common home computers around 2005.
2. 1. Primary storage
Primary storage (or main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. A revolution came with the invention of the transistor, which soon enabled then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology.
This led to modern random-access memory (RAM). It is small and light, but also quite expensive. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.)
As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:
• Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic and logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are technically among the fastest of all forms of computer data storage.
• Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It is introduced solely to increase the performance of the computer. The most actively used information in main memory is duplicated in the cache, which is faster but of much smaller capacity; conversely, the cache is much slower but much larger than the processor registers. A multi-level hierarchical cache setup is also commonly used: the primary cache is smallest, fastest and located inside the processor, and the secondary cache is somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU first sends a number, called the memory address, over the address bus to indicate the desired location of data; it then reads or writes the data itself over the data bus. Additionally, a memory management unit (MMU), a small device between the CPU and RAM, recalculates the actual memory address, for example to provide an abstraction of virtual memory or to perform other tasks.
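A toy Python model of this read/write cycle may help; it is not a hardware description, and the page table, sizes and function names below are invented purely for illustration.

```python
# Illustrative sketch: the CPU presents an address (address bus), an MMU
# remaps it, and the data travels over the data bus. All values are made up.

RAM = bytearray(1024)                     # pretend physical memory

PAGE_SIZE = 256
page_table = {0: 2, 1: 0, 2: 3, 3: 1}     # virtual page -> physical frame

def mmu_translate(virtual_addr: int) -> int:
    """Recalculate the address the CPU asked for into a physical address."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

def write(virtual_addr: int, value: int) -> None:
    RAM[mmu_translate(virtual_addr)] = value      # address bus, then data bus

def read(virtual_addr: int) -> int:
    return RAM[mmu_translate(virtual_addr)]

write(5, 0x42)
print(hex(read(5)))   # 0x42, although it physically lives in frame 2
```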
As the RAM types used for primary storage are volatile (cleared at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates are possible; however, writing is slow and the memory must be erased in large portions before it can be rewritten. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is also non-volatile and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage. [1]
2. 2. Secondary storage
A hard disk drive with protective cover removed.
Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down—it is non-volatile. Per unit, it is typically also two orders of magnitude less expensive than primary storage. Consequently, modern computer systems typically have two orders of magnitude more secondary storage than primary storage and data is kept for a longer time there.
In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. This illustrates the very significant access-time difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.
When data reside on disk, block access that hides latency offers a ray of hope for designing efficient external memory algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory. [2]
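To make the block-access trade-off concrete, here is a rough Python estimate; the seek latency and transfer rate below are assumptions for illustration, not measurements of any particular drive, and the model charges one seek per block (as for scattered block accesses).

```python
# Sketch of the block-access idea: amortize one seek + rotational latency
# over a large contiguous transfer instead of paying it per small access.
# The numbers below are illustrative assumptions, not measurements.

SEEK_MS = 8.0            # assumed average seek + rotational latency
TRANSFER_MB_PER_S = 100  # assumed sustained transfer rate

def time_ms(total_bytes: int, block_bytes: int) -> float:
    """Estimated time to read total_bytes using blocks of block_bytes."""
    blocks = -(-total_bytes // block_bytes)          # ceiling division
    transfer_ms = total_bytes / (TRANSFER_MB_PER_S * 1_000_000) * 1000
    return blocks * SEEK_MS + transfer_ms

size = 64 * 1024 * 1024                              # 64 MB of data
for block in (4 * 1024, 64 * 1024, 4 * 1024 * 1024):
    print(f"{block // 1024:>5} KB blocks: {time_ms(size, block):,.0f} ms")
# Larger blocks -> far fewer seeks -> far less total time.
```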
Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also recording additional information (called metadata) describing the owner of a file, the access time, the access permissions, and other details.
Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As primary memory fills up, the system moves the least-used chunks (pages) to secondary storage devices (a swap file or page file), retrieving them later when they are needed. The more of these retrievals from slower secondary storage are necessary, the more overall system performance is degraded.
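The paging idea can be sketched as a toy least-recently-used simulation in Python; it is purely illustrative and uses no real operating-system interfaces.

```python
# Toy sketch of paging: when primary memory fills up, the least-recently-used
# page is moved out to a "swap" area and brought back on demand.
from collections import OrderedDict

RAM_FRAMES = 3
ram = OrderedDict()        # page number -> contents (most recent last)
swap = {}                  # pages evicted to secondary storage

def touch(page: int) -> str:
    if page in ram:                          # hit: just mark as recently used
        ram.move_to_end(page)
        return "hit"
    if page in swap:                         # page fault: bring it back in
        contents = swap.pop(page)
    else:
        contents = f"data-{page}"
    if len(ram) >= RAM_FRAMES:               # evict the least-recently-used page
        victim, victim_data = ram.popitem(last=False)
        swap[victim] = victim_data
    ram[page] = contents
    return "fault"

for p in [1, 2, 3, 1, 4, 2]:
    print(p, touch(p), "ram:", list(ram), "swap:", sorted(swap))
```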
2. 3. Tertiary storage
Large tape library. Tape cartridges placed on shelves in the front, robotic arm moving in the back. Visible height of the library is about 180 cm.
Tertiary storage, or tertiary memory, [3] provides a third level of storage. Typically it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; this data is often copied to secondary storage before use. It is primarily used for archiving rarely accessed information, since it is much slower than secondary storage (e.g. 5-60 seconds vs. 1-10 milliseconds). This is primarily useful for extraordinarily large data stores accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
2. 4. Off-line storage
Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. [4] The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, if a disaster such as a fire destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely or never accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and, to a much lesser extent, removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards.
3. Characteristics of storage
A 1GB DDR RAM memory module (detail)
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
3. 1. Volatility
Non-volatile memory
Will retain the stored information even if it is not constantly supplied with electric power. It is suitable for long-term storage of information and is now used for most secondary, tertiary, and off-line storage. In the 1950s and 1960s it was also used for primary storage, in the form of magnetic core memory.
Volatile memory
Requires constant power to maintain the stored information. The fastest memory technologies of today are volatile ones (not a universal rule). Since primary storage is required to be very fast, it predominantly uses volatile memory.
3. 2. Differentiation
Dynamic random access memory
A form of volatile memory which also requires the stored information to be periodically re-read and re-written, or refreshed, otherwise it would vanish.
Static memory
A form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied. (It loses its content if power is removed).
3. 3. Mutability
Read/write storage or mutable storage
Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
Read only storage
Retains the information stored at the time of manufacture, and write once storage (Write Once Read Many) allows the information to be written only once at some point after manufacture. These are called immutable storage. Immutable storage is used for tertiary and off-line storage. Examples include CD-ROM and CD-R.
Slow write, fast read storage
Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and flash memory.
3. 4. Accessibility
Random access
Any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited to primary and secondary storage.
Sequential access
The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. This characteristic is typical of off-line storage.
3. 5. Addressability
Location-addressable
Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient but burdensome for humans.
File addressable
Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
Content-addressable
Each individually accessible unit of information is selected on the basis of (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content-addressable memory is often used in a computer's CPU cache.
3. 6. Capacity
Raw capacity
The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Memory storage density
The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).
3. 7. Performance
Latency
The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanosecond for primary storage, millisecond for secondary storage, and second for tertiary storage. It may make sense to separate read latency and write latency, and in case of sequential access storage, minimum, maximum and average latency.
Throughput
The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second or MB/s, though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also accessing media sequentially, as opposed to randomly, typically yields maximum throughput.
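A rough way to combine the two measures is total time ≈ latency + size / throughput. The sketch below applies this to assumed, order-of-magnitude figures for each level of the hierarchy; the numbers are illustrative, not vendor specifications.

```python
# Back-of-the-envelope sketch: total time = latency + size / throughput.
# All latency and throughput values below are assumed for illustration.

def transfer_time_s(size_bytes: int, latency_s: float, mb_per_s: float) -> float:
    return latency_s + size_bytes / (mb_per_s * 1_000_000)

size = 10 * 1_000_000  # a 10 MB file

for name, latency_s, mb_per_s in [
    ("primary storage (RAM)", 100e-9, 10_000),   # ~100 ns, ~10 GB/s
    ("secondary storage (HDD)", 10e-3, 100),     # ~10 ms,  ~100 MB/s
    ("tertiary storage (tape)", 30.0, 120),      # ~30 s mount and seek
]:
    t = transfer_time_s(size, latency_s, mb_per_s)
    print(f"{name:26s} {t:10.4f} s")
# For small transfers latency dominates; for large ones throughput does.
```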
3. 8. Environmental Impact
The impact of a storage device on the environment.
Energy
• Energy Star certified power adapters for storage devices reduce power consumption by 30 percent on average [5]
• Storage devices that reduce fan usage or automatically shut down during inactivity, as well as low-power hard drives, can reduce energy consumption by 90 percent. [6]
• 2.5 inch hard disk drives often consume less power than larger ones. [7] [8] Low capacity solid-state drives have no moving parts and consume less power than hard disks. [9] [10] [11] Also, memory may use more power than hard disks. [11]
Recycling
• Some devices are made of recyclable materials like aluminum, bamboo, or plastics
• Easily disassembled devices are easier to recycle if only certain parts are recyclable
• Packaging may be recyclable and some companies print instructions on the box or use recyclable paper for the instructions instead of waxed paper
Manufacturing
• The amount of raw materials (metals, aluminum, plastics, lead) used to manufacture the device
• Excess waste materials and if they are recycled
• Chemicals used in manufacturing
• Shipping distance for the device itself and parts
• Amount of packaging materials and if they are recyclable
4. Fundamental storage technologies
As of 2008, the most commonly used data storage technologies are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies have also been used in the past or are proposed for development.
4. 1. Semiconductor
Semiconductor memory uses semiconductor-based integrated circuits to store information. A semiconductor memory chip may contain millions of tiny transistors or capacitors. Both volatile and non-volatile forms of semiconductor memory exist. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor memory or dynamic random access memory. Since the turn of the century, a type of non-volatile semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers.
4. 2. Magnetic
Magnetic storage media (timeline): Wire (1898) · Tape (1928) · Drum (1932) · Ferrite core (1949) · Hard disk (1956) · Stripe card (1956) · MICR (1956) · Thin film (1962) · CRAM (1962) · Twistor (~1968) · Floppy disk (1969) · Bubble (~1970) · MRAM (1995) · Racetrack (2008)
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface, so the head or the medium or both must be moved relative to one another in order to access data. In modern computers, magnetic storage takes these forms:
• Magnetic disk
o Floppy disk, used for off-line storage
o Hard disk drive, used for secondary storage
• Magnetic tape data storage, used for tertiary and off-line storage
In early computers, magnetic storage was also used for primary storage, in the form of magnetic drum memory, core memory, core rope memory, thin-film memory, twistor memory or bubble memory. Also, unlike today, magnetic tape was often used for secondary storage.
4. 3. Optical
Optical storage media (timeline):
• Compact Disc (1982): CD-R (1988) · CD-RW (1997)
• DVD (1995): DVD-RAM (1996) · DVD-R (1997) · DVD-RW (1999) · DVD+RW (2001) · DVD+R (2002) · DVD+R DL (2004) · DVD-R DL (2005)
• Other: Microform (1870) · Optical tape (20th century) · Laserdisc (1958) · UDO (2003) · ProData (2003) · UMD (2004) · Blu-ray Disc (2006) · HD DVD (2006)
• Magneto-optic Kerr effect (1877): MO disc (1980s) · MiniDisc (1991)
• Optical assist: Laser turntable (1986) · Floptical (1991) · Super DLT (1998)
Optical storage, typically in the form of an optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read-only media), formed once (write-once media) or reversible (recordable or read/write media). The following forms are currently in common use: [12]
• CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs)
• CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage
• CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage
• Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
4. 4. Paper
Paper data storage media (timeline): Writing on papyrus (c. 3000 BCE) · Paper (105 CE) · Punched tape (1846) · Book music (1863) · Ticker tape (1867) · Piano roll (1880s) · Punched card (1890) · Edge-notched card (1896) · Optical mark recognition · Optical character recognition (1929) · Barcode (1948) · Paper disc (2004)
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. A few technologies allow people to make marks on paper that are easily read by machine—these are widely used for tabulating votes and grading standardized tests. Barcodes made it possible for any object that was to be sold or transported to have some computer readable information securely attached to it.
4. 5. Uncommon
Vacuum tube memory
A Williams tube used a cathode ray tube, and a Selectron tube used a large vacuum tube, to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable and the Selectron tube was expensive.
Electro-acoustic memory
Delay line memory used sound waves in a substance such as mercury to store information. Delay line memory was dynamic volatile, cycle sequential read/write storage, and was used for primary storage.
Optical tape
is a medium for optical storage generally consisting of a long and narrow strip of plastic onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs.
Phase-change memory
uses different phases of a phase-change material to store information in an X-Y addressable matrix, and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical discs already use phase-change material to store information.
Holographic data storage
stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential access, and either write once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
Molecular memory
stores information in polymers that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch. [13]
5. Related technologies
5. 1. Network connectivity
Secondary or tertiary storage may connect to a computer over a computer network. This concept does not pertain to primary storage, which is shared between multiple processors to a much lesser degree.
• Direct-attached storage (DAS) is traditional mass storage that does not use any network. It is still the most popular approach. The term was coined retroactively, together with NAS and SAN.
• Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
• Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that the former presents and manages file systems for client computers, whilst the latter provides access at the block-addressing (raw) level, leaving it to the attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.
5. 2. Robotic storage
Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries; in the optical storage field, as optical jukeboxes or, by analogy, optical disk libraries. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy: long-unused files are automatically migrated from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
Primary storage topics
Aperture (computer memory)
In computing, an aperture is a portion of the address space which is persistently associated with a particular peripheral device or a memory unit. Apertures may reach external devices such as ROM or RAM chips, or internal memory on the CPU itself.
Typically a memory device attached to a computer accepts addresses starting at zero, so a system with more than one such device would have ambiguous addressing. To resolve this, the memory logic contains several aperture selectors, each containing a range selector and an interface to one of the memory devices. The selector address ranges of the apertures are disjoint. When the CPU presents a physical address within the range recognized by an aperture, the aperture unit routes the request (with the address remapped to a zero base) to the attached device. Thus apertures form a layer of address translation below the level of the usual virtual-to-physical mapping.
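As a rough illustration of this decoding scheme, an aperture can be modeled as an address range plus a zero-based remapping. The devices, ranges and class names below are hypothetical.

```python
# Rough sketch of aperture-style address decoding: each aperture owns a
# physical address range and forwards accesses to its device with the
# address remapped to a zero base. All names and ranges are made up.

class Aperture:
    def __init__(self, base: int, size: int, device: bytearray):
        self.base, self.size, self.device = base, size, device

    def claims(self, addr: int) -> bool:
        return self.base <= addr < self.base + self.size

    def read(self, addr: int) -> int:
        return self.device[addr - self.base]     # remap to zero base

rom = bytearray(b"\xea" * 0x100)                 # pretend ROM chip
ram = bytearray(0x400)                           # pretend RAM chip

apertures = [
    Aperture(0xF000, 0x100, rom),                # disjoint ranges
    Aperture(0x0000, 0x400, ram),
]

def bus_read(addr: int) -> int:
    for ap in apertures:
        if ap.claims(addr):
            return ap.read(addr)
    raise ValueError(f"no aperture claims address {addr:#x}")

print(hex(bus_read(0xF005)))                     # routed to the ROM, offset 5
```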
Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
The main memory (the "RAM") in personal computers is dynamic RAM (DRAM), as is the "RAM" of home game consoles (PlayStation, Xbox 360 and Wii) and of laptop, notebook and workstation computers.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to six transistors in SRAM. This allows DRAM to reach very high density. Unlike flash memory, it is volatile memory (cf. non-volatile memory), since it loses its data when the power supply is removed. The transistors and capacitors used are extremely small—millions can fit on a single memory chip.
CAS latency
Column Address Strobe (CAS) latency, or CL, is the delay time between the moment a memory controller tells the memory module to access a particular memory column on a RAM memory module, and the moment the data from given array location is available on the module's output pins. In general, the lower the CAS latency, the better.
In asynchronous DRAM, the interval is specified in nanoseconds. In synchronous DRAM, the interval is specified in clock cycles. Because the latency is dependent upon a number of clock ticks instead of an arbitrary time, the actual time for an SDRAM module to respond to a CAS event might vary between uses of the same module if the clock rate differs.
Contents:
1. RAM operation background
2. Effect on memory access speed
3. See also
4. External links
1. RAM operation background
For more details on this topic, see DRAM—Operation principle.
Dynamic RAM is arranged in a rectangular array. Each row is selected by a horizontal word line. Sending a logical high signal along a given row enables the MOSFETs present in that row, connecting each storage capacitor to its corresponding vertical bit line. Each bit line is connected to a sense amplifier which amplifies the small voltage change produced by the storage capacitor. This amplified signal is then output from the DRAM chip as well as driven back up the bit line to refresh the row.
When no word line is active, the array is idle and the bit lines are held in a precharged state, with a voltage halfway between high and low. This indeterminate signal is deflected towards high or low by the storage capacitor when a row is made active.
When a row address is requested via the word line and the row address strobe (RAS) signal is logical high, the corresponding row is activated and copied into the sense amplifiers within the RAS latency time. After the row is ready, the row is "open," allowing individual columns to be read without need to reselect the row until it needs to be changed. The CAS latency is the delay between the time at which the column address and the column address strobe signal are presented to the memory module and the time at which the corresponding data is made available by the memory module.
As an example, a typical 1 GiB SDRAM memory module might contain eight separate one-gibibit DRAM chips, each offering 128 MiB of storage space. Each chip is divided internally into 8 banks of 2^27 = 128 Mibits, each of which comprises a separate DRAM array. Each array contains 2^14 = 16,384 rows of 2^13 = 8,192 bits each. One byte of memory (from each chip; 64 bits total from the whole DIMM) is accessed by supplying a 3-bit bank number, a 14-bit row address, and a 10-bit column address.
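The address split can be illustrated with a short Python sketch. The exact assignment of bits to the bank, row and column fields varies between memory controllers, so the packing below (column in the low bits) is an assumption made for illustration.

```python
# Sketch of the address split described above for the hypothetical 1 GiB
# module: a 3-bit bank, 14-bit row and 10-bit column select one byte per chip.

BANK_BITS, ROW_BITS, COL_BITS = 3, 14, 10

def split_address(addr: int):
    """Decompose a packed (bank, row, column) address, column in the low bits."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (addr >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, col

total_addresses = 1 << (BANK_BITS + ROW_BITS + COL_BITS)
print(total_addresses, "byte addresses per chip =", total_addresses // 2**20, "MiB")

print(split_address(0b101_00000000000011_0000000101))   # bank 5, row 3, column 5
```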
2. Effect on memory access speed
With asynchronous DRAM, the time delay between presenting a column address and receiving the data on the output pins is constant. Synchronous DRAM, however, has a CAS latency which is dependent upon the clock rate. Accordingly, the CAS latency of an SDRAM memory module is specified in clock ticks instead of real time.
Because memory modules have multiple internal banks, and data can be output from one during access latency for another, the output pins can be kept 100% busy regardless of the CAS latency through pipelining; the maximum attainable bandwidth is determined solely by the clock speed. Unfortunately, this maximum bandwidth can only be attained if the data to be read is known long enough in advance; if the data being accessed is not predictable, pipeline stalls can occur, resulting in a loss of bandwidth. For a completely unknown memory access, the relevant latency is the time to close any open row, plus the time to open the desired row, followed by the CAS latency to read data from it. Due to spatial locality, however, it is common to access several words in the same row. In this case, the CAS latency alone determines the elapsed time.
In general, the lower the CAS latency, the better. Because modern DRAM modules' CAS latencies are specified in clock ticks instead of time, when comparing latencies at different clock speeds, latencies must be translated into actual times to make a fair comparison; a higher numerical CAS latency may still be a shorter real-time latency if the clock is faster. However, it is important to note that the manufacturer-specified CAS latency typically assumes the specified clock rate, so underclocking a memory module may also allow for a lower CAS latency to be set.
Double data rate RAM operates using two transfers per clock cycle. The transfer rate is typically quoted by manufacturers, instead of the clock rate, which is half of the transfer rate for DDR modules. Because the CAS latency is specified in clock cycles, and not transfer ticks (which occur on both the positive and negative edge of the clock), it is important to ensure it is the clock rate which is being used to compute CAS latency times, and not the doubled transfer rate.
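The conversion can be sketched as a small Python helper; the example values reproduce entries from the timing table below, and the clock-halving rule applies only to DDR-style modules.

```python
# Sketch of the conversion discussed above: CAS latency in clock cycles to
# real time, remembering that for DDR the clock is half the transfer rate.

def cas_latency_ns(cl_cycles: float, transfer_mt_s: float, ddr: bool = True) -> float:
    clock_mhz = transfer_mt_s / 2 if ddr else transfer_mt_s
    cycle_ns = 1000.0 / clock_mhz
    return cl_cycles * cycle_ns

print(cas_latency_ns(3, 400))     # DDR-400 CL3   -> 15.0 ns
print(cas_latency_ns(5, 800))     # DDR2-800 CL5  -> 12.5 ns
print(cas_latency_ns(9, 1600))    # DDR3-1600 CL9 -> 11.25 ns
# A numerically higher CL at a faster clock can still be less real time.
```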
Another complicating factor is the use of burst transfers. A modern microprocessor might have a cache line size of 64 bytes, requiring 8 transfers from a 64-bit (8 byte) wide memory to fill. The CAS latency can only accurately measure the time to transfer the first word of memory; the time to transfer all 8 words depends on the RAS latency. Fortunately, the processor typically does not need to wait for all 8 words; the burst is usually sent in critical word first order, and the first critical word can be used by the microprocessor immediately.
In the table below, data rates are given in million transfers—also known as Megatransfers—per second (MT/s), while clock rates are given in MHz, cycles per second.
Memory timing examples (CAS latency only)
|Type |Data rate |Bit time |Command rate |Cycle time |CL |First word |Fourth word |Eighth word |
|PC100 |100 MT/s |10 ns |100 MHz |10 ns |2 |20 ns |50 ns |90 ns |
|PC133 |133 MT/s |7.5 ns |133 MHz |7.5 ns |3 |22.5 ns |45 ns |75 ns |
|DDR-333 |333 MT/s |3 ns |166 MHz |6 ns |2.5 |15 ns |24 ns |36 ns |
|DDR-400 |400 MT/s |2.5 ns |200 MHz |5 ns |3 |15 ns |22.5 ns |32.5 ns |
| | | | | |2.5 |12.5 ns |20 ns |30 ns |
| | | | | |2 |10 ns |17.5 ns |27.5 ns |
|DDR2-667 |667 MT/s |1.5 ns |333 MHz |3 ns |5 |15 ns |19.5 ns |25.5 ns |
|DDR2-800 |800 MT/s |1.25 ns |400 MHz |2.5 ns |6 |15 ns |18.75 ns |23.75 ns |
| | | | | |5 |12.5 ns |16.25 ns |21.25 ns |
|DDR3-1066 |1066 MT/s |0.9375 ns |533 MHz |1.875 ns |7 |13.13 ns |15.95 ns |19.7 ns |
|DDR3-1333 |1333 MT/s |0.75 ns |666 MHz |1.5 ns |9 |13.5 ns |15.75 ns |18.75 ns |
| | | | | |6 |9 ns |11.25 ns |14.25 ns |
|DDR3-1375 |1375 MT/s |0.73 ns |687 MHz |1.5 ns |5 |7.27 ns |9.45 ns |12.36 ns |
|DDR3-1600 |1600 MT/s |0.625 ns |800 MHz |1.25 ns |9 |11.25 ns |13.125 ns |15.625 ns |
| | | | | |8 |10 ns |11.875 ns |14.375 ns |
| | | | | |7 |8.75 ns |10.625 ns |13.125 ns |
Mass storage
This article describes mass storage in general. For the USB protocol, see USB mass storage device class.
In computing, mass storage refers to the storage of large amounts of data in a persisting and machine-readable fashion. Storage media for mass storage include hard disks, floppy disks, flash memory, optical discs, magneto-optical discs, magnetic tape, drum memory, punched tape (mostly historic) and holographic memory (experimental). Mass storage includes devices with removable and non-removable media. It does not include random access memory (RAM), which is volatile in that it loses its contents after power loss. The word "mass" is largely semantic; however, the term is used to refer to storage devices of any size (such as USB drives, which tend to have smaller capacities compared to hard disk). [1]
Mass storage devices are characterized by:
• Sustainable transfer speed
• Seek time
• Cost
• Capacity
Today, magnetic disks are the predominant storage media in personal computers. Optical discs, however, are almost exclusively used in the large-scale distribution of retail software, music and movies because of the cost and manufacturing efficiency of the moulding process used to produce DVD and compact discs and the nearly-universal presence of reader drives in personal computers and consumer appliances. [2] Flash memory (in particular, NAND flash) has an established and growing niche in high performance enterprise computing installations, removable storage and on portable devices such as notebook computers and cell phones because of its portability and low power consumption. [3] [4]
The design of computer architectures and operating systems is often dictated by the mass storage and bus technology of their time. [5] Desktop operating systems such as Windows are now so closely tied to the performance characteristics of magnetic disks that it is difficult to deploy them on other media like flash memory without running into space constraints, suffering serious performance problems or breaking applications.
Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system.
Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using disk swapping. The quality of the virtual memory manager can have a big impact on overall system performance.
Garbage collection is the automated reclamation (deallocation) of computer memory resources for a program. This is generally implemented at the programming-language level and stands in opposition to manual memory management, the explicit allocation and deallocation of computer memory resources.
Contents:
1. Features
2. DOS memory managers
3. See also
4. External links
1. Features
Memory management systems on multi-tasking operating systems usually deal with the following issues.
1. 1. Relocation
In systems with virtual memory, programs in memory must be able to reside in different parts of memory at different times. This is because when a program is swapped back into memory after being swapped out for a while, it cannot always be placed in the same location. The virtual memory management unit must also deal with concurrency. Memory management in the operating system should therefore be able to relocate programs in memory and handle memory references and addresses in the code of the program so that they always point to the right location in memory.
1. 2. Protection
Main article: Memory protection
Processes should not be able to reference the memory for another process without permission. This is called memory protection, and prevents malicious or malfunctioning code in one program from interfering with the operation of other running programs.
1. 3. Sharing
Main article: Shared memory
Even though the memory for different processes is normally protected from each other, different processes sometimes need to be able to share information and therefore access the same part of memory. Shared memory is one of the fastest techniques for Inter-process communication.
1. 4. Logical organization
Programs are often organized in modules. Some of these modules could be shared between different programs, some are read only and some contain data that can be modified. The memory management is responsible for handling this logical organization that is different from the physical linear address space. One way to arrange this organization is segmentation.
1. 5. Physical organization
Memory is usually divided into fast primary storage and slow secondary storage. Memory management in the operating system handles moving information between these two levels of memory.
2. DOS memory managers
Main article: Memory manager
In addition to standard memory management, the 640 KB barrier of MS-DOS and compatible systems led to the development of programs known as memory managers when PC main memories started to be routinely larger than 640 KB in the late 1980s (see conventional memory). These move portions of the operating system outside their normal locations in order to increase the amount of conventional or quasi-conventional memory available to other applications. Examples are EMM386, which was part of the standard installation in DOS's later versions, and QEMM. These allowed use of memory above the 640 KB barrier, in the upper memory area normally reserved for hardware, and in high memory.
A page address register (PAR) contains the physical addresses of pages currently held in the main memory of a computer system. PARs are used in order to avoid excessive use of an address table in some operating systems. A PAR may check a page's number against all entries in the PAR simultaneously, allowing it to retrieve the page's physical address quickly. A PAR is used by a single process and is only used for pages which are frequently referenced (though these pages may change as the process's behaviour changes, in accordance with the principle of locality). An example of a computer which made use of PARs is the Atlas.
Secondary, tertiary and off-line storage topics
Data proliferation
Data proliferation refers to the prodigious amount of data, structured and unstructured, that businesses and governments continue to generate at an unprecedented rate and the usability problems that result from attempting to store and manage that data. While originally pertaining to problems associated with paper documentation, data proliferation has become a major problem in primary and secondary data storage on computers.
While digital storage has become cheaper, the associated costs, from raw power to maintenance and from metadata to search engines, have not kept up with the proliferation of data. Although the power required to maintain a unit of data has fallen, the cost of facilities which house the digital storage has tended to rise. [1]
Data proliferation has been documented as a problem for the U.S. military since August of 1971, in particular regarding the excessive documentation submitted during the acquisition of major weapon systems. [3] Efforts to mitigate data proliferation and the problems associated with it are ongoing. [4]
Contents:
1. Problems caused by data proliferation
2. Proposed solutions
3. See also
4. References
"At the simplest level, company e-mail systems spawn large amounts of data. Business e-mail - some of it important to the enterprise, some much less so - is estimated to be growing at a rate of 25-30% annually. And whether it's relevant or not, the load on the system is being magnified by practices such as multiple addressing and the attaching of large text, audio and even video files."
—IBM Global Technology Services [2]
1. Problems caused by data proliferation
The problem of data proliferation is affecting all areas of commerce as the result of the availability of relatively inexpensive data storage devices. This has made it very easy to dump data into secondary storage immediately after its window of usability has passed. This masks problems that could gravely affect the profitability of businesses and the efficient functioning of health services, police and security forces, local and national governments, and many other types of organization. [2] Data proliferation is problematic for several reasons:
• Difficulty when trying to find and retrieve information. At Xerox, on average it takes employees more than one hour per week to find hard-copy documents, costing $2,152 a year to manage and store them. For businesses with more than 10 employees, this increases to almost two hours per week at $5,760 per year. [5] In large networks of primary and secondary data storage, problems finding electronic data are analogous to problems finding hard copy data.
• Data loss and legal liability when data is disorganized, not properly replicated, or cannot be found in a timely manner. In April 2005, Ameritrade Holding Corporation told 200,000 current and past customers that a tape containing confidential information had been lost or destroyed in transit. In May of the same year, Time Warner Incorporated reported that 40 tapes containing personal data on 600,000 current and former employees had been lost en route to a storage facility. In March 2005, a Florida judge hearing a $2.7 billion lawsuit against Morgan Stanley issued an "adverse inference order" against the company for "willful and gross abuse of its discovery obligations." The judge cited Morgan Stanley for repeatedly finding misplaced tapes of e-mail messages long after the company had claimed that it had turned over all such tapes to the court. [6]
• Increased manpower requirements to manage increasingly chaotic data storage resources.
• Slower networks and application performance due to excess traffic as users search and search again for the material they need. [2]
• High cost in terms of the energy resources required to operate storage hardware. A 100 terabyte system will cost up to $35,040 a year to run—not counting cooling costs. [7]
2. Proposed solutions
• Applications that better utilize modern technology
• Reductions in duplicate data (especially as caused by data movement)
• Improvement of metadata structures
• Improvement of file and storage transfer structures
• User education and discipline [3]
• The implementation of Information Lifecycle Management solutions to eliminate low-value information as early as possible before putting the rest into actively managed long-term storage in which it can be quickly and cheaply accessed. [2]
Data deduplication
In computing, data deduplication is a specific form of compression where redundant data is eliminated, typically to improve storage utilization. In the deduplication process, duplicate data is deleted, leaving only one copy of the data to be stored. However, indexing of all data is still retained should that data ever be required. Deduplication is able to reduce the required storage capacity since only the unique data is stored. For example, a typical email system might contain 100 instances of the same one megabyte (MB) file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy. In this example, a 100 MB storage demand could be reduced to only 1 MB. Different applications have different levels of data redundancy. Backup applications generally benefit the most from de-duplication due to the nature of repeated full backups of an existing file system.
Contents:
1. Benefits
2. Deduplication Overview
3. Drawbacks and concerns
4. Major commercial players and technology
5. See also
6. References
7. External links
1. Benefits
In general, data deduplication improves data protection, increases the speed of service, and reduces costs. The benefits from data de-duplication start with increasing overall data integrity and end with reducing overall data protection costs. Data de-duplication lets users reduce the amount of disk they need for backup by 90 percent or more, and with reduced acquisition costs—and reduced power, space, and cooling requirements—disk becomes suitable for first stage backup and restore and for retention that can easily extend to months. With data on disk, restore service levels are higher, media handling errors are reduced, and more recovery points are available on fast recovery media. It also reduces the data that must be sent across a WAN for remote backups, replication, and disaster recovery.
Data deduplication is a valuable tool when used with virtual servers: it can deduplicate the VMDK files needed for deployment of virtual environments, and it can also deduplicate snapshot files (e.g. VMSN and VMSD files in VMware). This gives considerable cost savings compared to a conventional disk backup environment while still providing more recovery points for disaster recovery.
Deduplication can also provide significant energy, space, cooling and cost savings by reducing the amount of data stored. It contributes significantly to data center transformation by reducing the carbon footprint through savings on storage space, and it reduces the recurring cost of the human resources needed for management and administration. It also reduces hardware recycling and the budget for data management, backup and retrieval by lowering fixed and recurring costs.
2. Deduplication Overview
2. 1. When Deduplication May Occur
Deduplication may occur "in-line", as data is flowing, or "post-process" after it has been written.
2. 1. 1. Post-process deduplication
With post-process deduplication, new data is first stored on the storage device, and a process at a later time analyzes the data looking for duplication. The benefit is that there is no need to wait for the hash calculations and lookup to be completed before storing the data, thereby ensuring that store performance is not degraded. Implementations offering policy-based operation can give users the ability to defer optimization on "active" files, or to process files based on type and location. One potential drawback is that duplicate data may be stored unnecessarily for a short time, which is an issue if the storage system is near full capacity. Probably the biggest real-world issue is the unpredictability of knowing when the process will be completed.
2. 1. 2. In-line deduplication
This is the process where the deduplication hash calculations are created on the target device as the data enters the device in real time. If the device spots a block that it has already stored on the system, it does not store the new block, but merely references the existing block. The benefit of in-line deduplication over post-process deduplication is that it requires less storage, as data is not duplicated. On the negative side, it is frequently argued that because hash calculations and lookups take so long, data ingestion can be slower, thereby reducing the backup throughput of the device. However, certain vendors with in-line deduplication have demonstrated equipment with similar performance to their post-process deduplication counterparts.
Post-process and in-line deduplication methods are often heavily debated. [1] [2]
2. 2. Where Deduplication May Occur
Deduplication can occur close to where data is created, which is often referred to as "source deduplication." It can occur close to where the data is stored, which is commonly called "target deduplication."
2. 2. 1. Source versus target deduplication
When describing deduplication for backup architectures, it is common to hear two terms: source deduplication and target deduplication.
Source deduplication ensures that data on the data source is deduplicated. This generally takes place directly within a file system. [3] [4] The file system periodically scans new files, creating hashes, and compares them to the hashes of existing files. When files with the same hashes are found, the duplicate copy is removed and the new file points to the old one. Unlike hard links, however, duplicated files are considered to be separate entities, and if one of the duplicated files is later modified, a copy of that file or changed block is created using copy-on-write. The deduplication process is transparent to users and backup applications. Backing up a deduplicated file system will often cause duplication to reappear, resulting in backups that are bigger than the source data.
Target deduplication is the process of removing duplicates of data in the secondary store. Generally this will be a backup store such as a data repository or a virtual tape library. There are three different ways of performing the deduplication process.
2. 3. How Deduplication Occurs
There are many variations employed.
2. 3. 1. Chunking and deduplication overview
Deduplication implementations work by comparing chunks of data to detect duplicates. For that to happen, each chunk of data is assigned a presumably unique identification, calculated by the software, typically using cryptographic hash functions. A requirement of these functions is that if the identification is identical, the data is identical. Therefore, if the software sees that a given identification already exists in the deduplication namespace, then it will replace that duplicate chunk with a link. Upon read back of the file, wherever a link is found, the system simply replaces that link with the referenced data chunk. The de-duplication process is intended to be transparent to end users and applications.
2. 3. 1. 1. Chunking methods
Between commercial deduplication implementations, technology varies primarily in chunking method and in architecture. In some systems, chunks are defined by physical-layer constraints (e.g. the 4 KB block size in WAFL). In some systems only complete files are compared, which is called single-instance storage, or SIS. The most intelligent (but CPU-intensive) method of chunking is generally considered to be sliding-block chunking, in which a window is passed along the file stream to seek out more naturally occurring internal file boundaries.
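The following minimal Python sketch illustrates the chunk-hash-link idea with fixed-size chunks; real products use content-defined (sliding-block) chunking and far more careful engineering, and the chunk size, store layout and names below are assumptions for illustration.

```python
# Minimal sketch of fixed-size chunking with a hash index: each unique chunk
# is stored once, and files keep only a "recipe" of links (hashes).
import hashlib

CHUNK_SIZE = 4096          # assumed fixed chunk size (cf. 4 KB blocks)
store = {}                 # hash -> chunk bytes (the single stored copy)

def dedup_write(data: bytes) -> list[str]:
    """Split data into chunks, store unique chunks, return the recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)       # keep only one copy per hash
        recipe.append(digest)                 # the "link" kept in place of data
    return recipe

def dedup_read(recipe: list[str]) -> bytes:
    """Rehydrate a file by following its links back to stored chunks."""
    return b"".join(store[d] for d in recipe)

attachment = b"x" * 1_000_000                              # ~1 MB attachment
recipes = [dedup_write(attachment) for _ in range(100)]    # 100 identical copies
print("logical copies:", 100, "unique chunks stored:", len(store))
assert dedup_read(recipes[0]) == attachment
```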
2. 3. 2. Client backup deduplication
This is the process where the deduplication hash calculations are initially created on the source (client) machines. Files that have hashes identical to those of files already in the target device are not sent; the target device just creates appropriate internal links to reference the duplicated data. The benefit of this is that it avoids data being unnecessarily sent across the network, thereby reducing the traffic load. Backup deduplication needs to be implemented as part of the backup product. [5]
2. 4. Primary storage vs. secondary storage deduplication
Since most data deduplication implementations are slow[citation needed], they are only suitable for secondary storage in offline mode. This typically includes the backup process, which can be done in batch offline mode. Most of the post-processing systems fall into this category. Data deduplication for primary storage is the main unsolved problem of deduplication,[citation needed] where the implementation needs to be fast enough to be used in general-purpose file systems. Typically this requires support for in-line processing at a minimum, support for random I/O access, and the ability to reclaim deleted data. A primary storage deduplication system might not achieve as high a compression ratio as the secondary storage approach, since speed is the primary concern.
3. Drawbacks and concerns
Whenever data is transformed, concerns arise about potential loss of data. By definition, data deduplication systems store data differently from how it was written. As a result, users are concerned with the integrity of their data. The various methods of deduplicating data all employ slightly different techniques; however, the integrity of the data ultimately depends upon the design of the deduplicating system and the quality of the implementation of its algorithms. As the technology has matured over the past decade, the integrity of most of the major products has been well proven.
One method for deduplicating data relies on the use of cryptographic hash functions to identify duplicate segments of data. If two different pieces of information generate the same hash value, this is known as a collision. The probability of a collision depends upon the hash function used, and although the probabilities are small, they are always non-zero.
Thus the concern arises that data corruption can occur if a hash collision occurs and additional means of verification are not used to check whether the data actually differs. Currently, some vendors provide additional verification, while others do not. [6]
The hash functions used include standards such as SHA-1, SHA-256 and others. These provide a far lower probability of data loss than the risk of a hardware error in most cases. For most hash functions, there is statistically a far greater chance of hardware failure than of a hash collision [7] . Both in-line and post-process architectures may offer bit-for-bit validation of original data for guaranteed data integrity.
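For intuition about the scale involved, a standard birthday-bound estimate (offered here purely as a back-of-the-envelope illustration, not as any vendor's published analysis) of the probability of at least one collision among n distinct chunks under a b-bit hash is:

    P(\text{collision}) \approx \frac{n(n-1)}{2^{\,b+1}}

With, say, n = 10^12 chunks and a 256-bit hash, this works out to roughly 10^-54, far below typical rates of undetected hardware errors.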
Some cite the computational resource intensity of the process as a drawback of data deduplication. However, this is rarely an issue for stand-alone devices or appliances, as the computation is completely offloaded from other systems. This can be an issue when the deduplication is embedded within devices providing other services.
To improve performance, many systems use both weak and strong hashes. Weak hashes are much faster to calculate, but carry a greater chance of a hash collision. Systems that use weak hashes subsequently calculate a strong hash and use it as the determining factor of whether the data is actually the same. Note that the system overhead associated with calculating and looking up hash values is primarily a function of the deduplication workflow. The "rehydration" of files does not require this processing, and any incremental performance penalty associated with the re-assembly of data chunks is unlikely to impact application performance.
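A minimal sketch of the weak-then-strong scheme, with CRC32 standing in for the weak hash and SHA-256 for the strong one (real systems choose their own functions and persist the index):

    import hashlib
    import zlib

    def is_duplicate(chunk: bytes, index: dict) -> bool:
        """index maps weak hash -> set of strong hashes already stored."""
        weak = zlib.crc32(chunk)             # cheap filter: rules out most chunks
        strong_set = index.setdefault(weak, set())
        if not strong_set:
            # Nothing shares this weak hash: definitely new data.
            strong_set.add(hashlib.sha256(chunk).hexdigest())
            return False
        strong = hashlib.sha256(chunk).hexdigest()
        duplicate = strong in strong_set     # the strong hash is the final arbiter
        strong_set.add(strong)
        return duplicate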
Another area of concern with deduplication is the related effect on snapshots, backup, and archival, especially where deduplication is applied against primary storage (for example inside a NAS filer). Reading files out of a storage device causes full rehydration of the files, so any secondary copy of the data set is likely to be larger than the primary copy. In terms of snapshots, if a file is snapshotted prior to de-duplication, the post-deduplication snapshot will preserve the entire original file. This means that although storage capacity for primary file copies will shrink, capacity required for snapshots may expand dramatically.
Another concern is the effect of compression and encryption. Although deduplication is a version of compression, it works in tension with traditional compression. Deduplication achieves better efficiency against smaller data chunks, whereas compression achieves better efficiency against larger chunks. The goal of encryption is to eliminate any discernible patterns in the data. Thus encrypted data will have 0% gain from deduplication, even though the underlying data may be redundant.
Scaling has also been a challenge for dedupe systems because the hash table or dedupe namespace needs to be shared across storage devices. If there are multiple disk backup devices in an infrastructure with discrete dedupe namespaces, then space efficiency is adversely affected. A namespace shared across devices - called Global Dedupe - preserves space efficiency, but is technically challenging from a reliability and performance perspective.
Deduplication ultimately reduces redundancy. If this is not expected and planned for, it may undermine the underlying reliability of the system. (Compare this, for example, to the LOCKSS storage architecture, which achieves reliability through multiple copies of data.)
4. Major commercial players and technology
Some notable names include: NetApp deduplication for primary-class storage (internal, system-level in Data ONTAP/WAFL); NetApp deduplication for VTL-class storage; ExaGrid's patented byte-level (content-aware) deduplication; NEC's HydraStor (Content Aware Deduplication Technology); IBM's ProtecTier and IBM Tivoli Storage Manager 6.1; Quantum; EMC/Data Domain; Symantec NetBackup and Symantec Backup Exec 2010 via PureDisk; CommVault; EMC Avamar; Sepaton; Ocarina Networks ECOsystem; FalconStor VTL; CA ARCServe and XOsoft; SIR; and FDS (Virtual Tape Library, Single Instance Repository, and File Deduplication System).
The FalconStor VTL Enterprise software architecture provides concurrent overlap backups with data deduplication. [8]
Quantum holds a patent for variable-length block data deduplication.
The Ocarina Networks ECOsystem provides deduplication and compression for primary NAS storage including solutions for application specific datasets.
The ExaGrid architecture provides grid scalability with data deduplication. [9]
Atempo provides Hyperstream, deduplication software that is seamlessly integrated into Time Navigator. Its main features are source deduplication, optional replication and/or mirroring, and high availability.
Microsoft Windows Storage Server 2008 includes Single Instance Storage capabilities.
According to an OpenSolaris forum posting by Sun Fellow Jeff Bonwick, Sun Microsystems was scheduled to incorporate deduplication features into ZFS sometime in the summer of 2009. [10] Deduplication was added to ZFS as of early November 2009 and is available in OpenSolaris snv128a and later. [11]
Opendedup is an open-source GPLv2, userspace deduplication project which currently runs on Linux only (although there are plans to port it to MS-Windows as well). [12]
BackupPC is a free, open-source de-duplicating system which probably pre-dates all of the above. It uses hard links in any file system that supports them (e.g. ext3) to store the physical files in a pool (optionally compressed as well), where their names consist of a hash of their size and checksums, while the logical files reside in directory trees of hard links to those pool files. The software is a set of Perl scripts, specifically designed to back up multiple desktop PCs and servers into this system. It typically achieves reduction factors of 12x, holding 12 historical copies each of 24 typical Windows desktop PCs in 500 GB.
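The pooling idea can be sketched in a few lines of Python, loosely in the spirit of this hard-link scheme (the real tool names pool files by a hash of size and checksums, compresses them, and handles hash collisions, none of which is shown here):

    import hashlib
    import os

    def pool_file(path: str, pool_dir: str) -> str:
        """Place `path` in a content-addressed pool, turning duplicates into hard links."""
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        pooled = os.path.join(pool_dir, digest)
        if not os.path.exists(pooled):
            os.link(path, pooled)        # first copy: add its inode to the pool
        else:
            os.remove(path)              # duplicate: re-point the name at the pool file
            os.link(pooled, path)
        return pooled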
Data storage tag
A data storage tag (DST), also sometimes known as an archival tag, is a data logger that uses sensors to record data at predetermined intervals. Data storage tags usually have a large memory size and a long battery life.
Contents:
1. Operation
2. See also
3. Notes
1. Operation
Data storage tags can have a variety of sensors; temperature, depth, light, salinity, pressure, pitch and roll, GPS, magnetic and compass. [1] They can be used internally or externally in fish, marine animals [2] or research animals. They are also used in other industries such as the food and beverage industry. [3]
At the end of the monitoring period, the loggers can be connected to a computer and the data uploaded for analysis.
Information repository
For other uses, see Knowledge base.
An information repository is an easy way to deploy a secondary tier of data storage that can comprise multiple, networked data storage technologies running on diverse operating systems, where data that no longer needs to be in primary storage is protected, classified according to captured metadata, processed, de-duplicated, and then purged automatically, based on data service-level objectives and requirements. In information repositories, data storage resources are virtualized as composite storage sets and operate as a federated environment.
Information repositories were developed to mitigate problems arising from data proliferation and to eliminate the need for separately deployed data storage solutions running diverse storage technologies and operating systems. They feature centralized management for all deployed data storage resources. They are self-contained, support heterogeneous storage resources, support resource management to add, maintain, recycle, and terminate media, keep track of off-line media, and operate autonomously.
Contents:
1. Automated Data Management
2. Data Recovery
3. See also
4. References
1. Automated Data Management
Since one of the main reasons for implementing an information repository is to reduce the maintenance workload placed on IT staff by traditional data storage systems, information repositories are automated. Automation is accomplished via policies that can process data based on time, events, data age, and data content. Policies manage the following:
• File system space management
• Irrelevant data elimination (mp3, games, etc.)
• Secondary storage resource management
Data is processed according to media type, storage pool, and storage technology.
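As a purely hypothetical illustration of how such policies might be expressed and evaluated (the rule names, metadata fields, and actions below are invented for this sketch, not drawn from any real product):

    from datetime import datetime, timedelta

    POLICIES = [
        {"name": "eliminate irrelevant data",
         "match": lambda f: f["type"] in {"mp3", "game"},
         "action": "delete"},
        {"name": "age out cold data to secondary storage",
         "match": lambda f: datetime.now() - f["last_access"] > timedelta(days=180),
         "action": "migrate"},
    ]

    def apply_policies(file_meta: dict) -> list:
        """Return the actions triggered for one file's captured metadata."""
        return [p["action"] for p in POLICIES if p["match"](file_meta)]

    # Example: an old mp3 matches both rules.
    print(apply_policies({"type": "mp3", "last_access": datetime(2008, 1, 1)}))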
Because information repositories are intended to reduce IT staff workload, they are designed to be easy to deploy and offer configuration flexibility, virtually limitless extensibility, redundancy, and reliable failover.
2. Data Recovery
Information repositories feature robust, client-based data search and recovery capabilities that, based on permissions, enable end users to search the information repository, view its contents (including data on off-line media), and recover individual files or multiple files to either their original network computer or another network computer.
File system (1/3)
For library and office filing systems, see Library classification.
A file system (often also written as filesystem) is a method of storing and organizing computer files and their data. Essentially, it organizes these files into a database-like structure for storage, organization, manipulation, and retrieval by the computer's operating system.
File systems are used on data storage devices, such as hard disks or CD-ROMs, to maintain the physical location of the files. Beyond this, they might provide access to data on a file server by acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients), or they may be virtual and exist only as an access method for virtual data (e.g., procfs). A file system is distinct from a directory service and from a registry.
Contents:
1. Aspects of file systems
2. Types of file systems
3. File systems and operating systems
4. See also
5. References
6. Further reading
7. External links
1. Aspects of file systems
Most file systems make use of an underlying data storage device that offers access to an array of fixed-size physical sectors, generally a power of 2 in size (512 bytes or 1, 2, or 4 KiB are most common). The file system is responsible for organizing these sectors into files and directories, and keeping track of which sectors belong to which file and which are not being used. Most file systems address data in fixed-sized units called "clusters" or "blocks" which contain a certain number of disk sectors (usually 1-64). This is the smallest amount of disk space that can be allocated to hold a file.
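The allocation arithmetic is straightforward; the following sketch (with illustrative parameter values only) shows how many clusters a file occupies and how much slack space is left over when a cluster is 8 sectors of 512 bytes:

    import math

    def allocation(file_size: int, sector_size: int = 512, sectors_per_cluster: int = 8):
        """Return (clusters used, slack bytes wasted) for one file."""
        cluster_size = sector_size * sectors_per_cluster      # 4 KiB here
        clusters = max(1, math.ceil(file_size / cluster_size))
        slack = clusters * cluster_size - file_size
        return clusters, slack

    # A 10,000-byte file on 4 KiB clusters occupies 3 clusters and wastes 2,288 bytes.
    print(allocation(10_000))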
However, file systems need not make use of a storage device at all. A file system can be used to organize and represent access to any data, whether it's stored or dynamically generated (e.g., procfs).
1. 1. File names
A file name is the name assigned to a file so that a storage location can be reserved for it and the file can be accessed again by that name. Whether or not the file system has an underlying storage device, file systems typically have directories which associate file names with files, usually by connecting the file name to an index in a file allocation table of some sort, such as the FAT in a DOS file system or an inode in a Unix-like file system. Directory structures may be flat, or may allow hierarchies in which directories contain subdirectories. In some file systems, file names are structured, with special syntax for filename extensions and version numbers. In others, file names are simple strings, and per-file metadata is stored elsewhere.
1. 2. Metadata
Other bookkeeping information is typically associated with each file within a file system. The length of the data contained in a file may be stored as the number of blocks allocated for the file or as an exact byte count. The time that the file was last modified may be stored as the file's timestamp. Some file systems also store the file creation time, the time it was last accessed, and the time that the file's meta-data was changed. (Note that many early PC operating systems did not keep track of file times.) Other information can include the file's device type (e.g., block, character, socket, subdirectory, etc.), its owner user-ID and group-ID, and its access permission settings (e.g., whether the file is read-only, executable, etc.).
Arbitrary attributes can be associated on advanced file systems, such as NTFS, XFS, ext2/ext3, some versions of UFS, and HFS+, using extended file attributes. This feature is implemented in the kernels of Linux, FreeBSD and Mac OS X operating systems, and allows metadata to be associated with the file at the file system level. This, for example, could be the author of a document, the character encoding of a plain-text document, or a checksum.
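As a small, Linux-specific illustration (Python 3.3+ exposes the Linux xattr calls directly; other platforms use different APIs, and the file name and attribute names here are arbitrary examples):

    import os

    path = "document.txt"                       # hypothetical example file
    with open(path, "w") as f:
        f.write("hello")

    # Attach metadata to the file at the file-system level.
    # On Linux, user-defined attributes live in the "user." namespace.
    os.setxattr(path, "user.author", b"A. Writer")
    os.setxattr(path, "user.charset", b"utf-8")

    print(os.listxattr(path))                   # ['user.author', 'user.charset']
    print(os.getxattr(path, "user.author"))     # b'A. Writer'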
1. 3. Hierarchical file systems
The hierarchical file system was an early research interest of Dennis Ritchie of Unix fame; previous implementations were restricted to only a few levels, notably the IBM implementations, even of their early databases like IMS. After the success of Unix, Ritchie extended the file system concept to every object in his later operating system developments, such as Plan 9 and Inferno.
1. 4. Facilities
Traditional file systems offer facilities to create, move and delete both files and directories. They lack facilities to create additional links to a directory (hard links in Unix), rename parent links (".." in Unix-like OS), and create bidirectional links to files.
Traditional file systems also offer facilities to truncate, append to, create, move, delete and in-place modify files. They do not offer facilities to prepend to or truncate from the beginning of a file, let alone arbitrary insertion into or deletion from a file. The operations provided are highly asymmetric and lack the generality to be useful in unexpected contexts. For example, interprocess pipes in Unix have to be implemented outside of the file system because the pipes concept does not offer truncation from the beginning of files.
1. 5. Secure access
See also: Secure computing
Secure access to basic file system operations can be based on a scheme of access control lists or capabilities. Research has shown access control lists to be difficult to secure properly, which is why research operating systems tend to use capabilities.[citation needed] Commercial file systems still use access control lists.
2. Types of file systems
File system types can be classified into disk file systems, network file systems and special purpose file systems.
2. 1. Disk file systems
A disk file system is a file system designed for the storage of files on a data storage device, most commonly a disk drive, which might be directly or indirectly connected to the computer. Examples of disk file systems include FAT (FAT12, FAT16, FAT32, exFAT), NTFS, HFS and HFS+, HPFS, UFS, ext2, ext3, ext4, btrfs, ISO 9660, ODS-5, Veritas File System, ZFS and UDF. Some disk file systems are journaling file systems or versioning file systems.
ISO 9660 and Universal Disk Format are the two most common formats that target Compact Discs and DVDs. Mount Rainier is a newer extension to UDF supported by Linux 2.6 series and Windows Vista that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.
2. 2. Flash file systems
Main article: Flash file system
A flash file system is a file system designed for storing files on flash memory devices. These are becoming more prevalent as the number of mobile devices increases and the capacity of flash memories grows.
While a disk file system can be used on a flash device, this is suboptimal for several reasons:
• Erasing blocks: Flash memory blocks have to be explicitly erased before they can be rewritten. The time taken to erase blocks can be significant, thus it is beneficial to erase unused blocks while the device is idle.
• Random access: Disk file systems are optimized to avoid disk seeks whenever possible, due to the high cost of seeking. Flash memory devices impose no seek latency.
• Wear levelling: Flash memory devices tend to wear out when a single block is repeatedly overwritten; flash file systems are designed to spread out writes evenly.
Log-structured file systems have many of the desirable properties for a flash file system. Such file systems include JFFS2 and YAFFS.
2. 3. Tape file systems
A tape file system is a file system and tape format designed to store files on tape in a self-describing form. Magnetic tapes are sequential storage media, posing challenges to the creation and efficient management of a general-purpose file system. IBM has recently announced, and made available as open source, a new file system for tape called LTFS ("Linear Tape File System" or "Long Term File System"). LTFS allows files to be created directly on tape and worked with much as one would work with files on a disk drive.
2. 4. Database file systems
A new concept for file management is the concept of a database-based file system. Instead of, or in addition to, hierarchical structured management, files are identified by their characteristics, like type of file, topic, author, or similar metadata.
2. 5. Transactional file systems
Each disk operation may involve changes to a number of different files and disk structures. In many cases, these changes are related, meaning that it is important that they all be executed at the same time. For example, in the case of a bank sending money to another bank electronically, the bank's computer will "send" the transfer instruction to the other bank and also update its own records to indicate the transfer has occurred. If for some reason the computer crashes before it has had a chance to update its own records, then on reset, there will be no record of the transfer but the bank will be missing some money.
Transaction processing introduces the guarantee that at any point while it is running, a transaction can either be completed entirely or rolled back entirely (though not necessarily both options at any given point). This means that if there is a crash or power failure, the stored state will be consistent after recovery. (Either the money will be transferred or it will not be, but it won't ever go missing "in transit".)
This type of file system is designed to be fault tolerant, but may incur additional overhead to do so.
Journaling file systems are one technique used to introduce transaction-level consistency to file system structures.
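A toy write-ahead journal in Python makes the idea concrete (this is a sketch of the principle, not the on-disk format of any real journaling file system; real journals also checksum their records so a torn write can be detected, and the file names below are invented for the example):

    import json
    import os

    LOG, DATA = "journal.log", "accounts.json"   # hypothetical file names

    def apply_changes(changes: dict):
        state = json.load(open(DATA)) if os.path.exists(DATA) else {}
        state.update(changes)
        with open(DATA, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())

    def commit(changes: dict):
        """Record the intent durably in the journal before touching the data."""
        with open(LOG, "w") as log:
            json.dump(changes, log)
            log.flush()
            os.fsync(log.fileno())
        apply_changes(changes)
        os.remove(LOG)                           # transaction complete; discard the journal

    def recover():
        """After a crash, replay any journal that was left behind."""
        if os.path.exists(LOG):
            apply_changes(json.load(open(LOG)))
            os.remove(LOG)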
2. 6. Network file systems
Main article: Network file system
A network file system is a file system that acts as a client for a remote file access protocol, providing access to files on a server. Examples of network file systems include clients for the NFS, AFS, SMB protocols, and file-system-like clients for FTP and WebDAV.
2. 7. Shared disk file systems
Main article: Shared disk file system
A shared disk file system is one in which a number of machines (usually servers) all have access to the same external disk subsystem (usually a SAN). The file system arbitrates access to that subsystem, preventing write collisions. Examples include GFS from Red Hat, GPFS from IBM, and SFS from DataPlow.
2. 8. Special purpose file systems
Main article: Special file system
A special purpose file system is basically any file system that is not a disk file system or network file system. This includes systems where the files are arranged dynamically by software, intended for such purposes as communication between computer processes or temporary file space.
Special purpose file systems are most commonly used by file-centric operating systems such as Unix. Examples include the procfs (/proc) file system used by some Unix variants, which grants access to information about processes and other operating system features.
Deep space exploration craft, like Voyager I and II, used digital tape-based special file systems. Most modern space exploration craft, like Cassini-Huygens, use real-time operating system file systems or RTOS-influenced file systems. The Mars Rovers are one such example of an RTOS file system, important in this case because the file systems are implemented in flash memory.
3. File systems and operating systems
Most operating systems provide a file system, as a file system is an integral part of any modern operating system. Early microcomputer operating systems' only real task was file management — a fact reflected in their names (see DOS). Some early operating systems had a separate component for handling file systems which was called a disk operating system. On some microcomputers, the disk operating system was loaded separately from the rest of the operating system. On early operating systems, there was usually support for only one, native, unnamed file system; for example, CP/M supports only its own file system, which might be called "CP/M file system" if needed, but which didn't bear any official name at all.
Because of this, there needs to be an interface provided by the operating system software between the user and the file system. This interface can be textual (such as provided by a command line interface, such as the Unix shell, or OpenVMS DCL) or graphical (such as provided by a graphical user interface, such as file browsers). If graphical, the metaphor of the folder, containing documents, other files, and nested folders is often used (see also: directory and folder).
3. 1. Flat file systems
In a flat file system, there are no subdirectories—everything is stored at the same (root) level on the media, be it a hard disk, floppy disk, etc. While simple, this system rapidly becomes inefficient as the number of files grows, and makes it difficult for users to organize data into related groups.
Like many small systems before it, the original Apple Macintosh featured a flat file system, called Macintosh File System. Its version of Mac OS was unusual in that the file management software (Macintosh Finder) created the illusion of a partially hierarchical filing system on top of MFS. This structure meant that every file on a disk had to have a unique name, even if it appeared to be in a separate folder. MFS was quickly replaced with Hierarchical File System, which supported real directories.
A recent addition to the flat file system family is Amazon's S3, a remote storage service, which is intentionally simplistic to allow users to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical, to the standard concept of a file). Advanced file management is made possible by allowing nearly any character (including '/') in an object's name, and by the ability to select subsets of a bucket's contents that share a common prefix.
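A short sketch (plain Python over a list of keys, deliberately not using the real S3 API) of how a flat namespace plus prefix queries can imitate folders:

    # Flat object keys in one "bucket"; '/' is just an ordinary character here.
    keys = ["photos/2009/beach.jpg", "photos/2009/city.jpg",
            "photos/2010/snow.jpg", "notes.txt"]

    def list_prefix(keys: list, prefix: str, delimiter: str = "/") -> list:
        """Return the immediate 'children' of prefix, folding anything past the
        next delimiter into a single folder-like entry."""
        children = set()
        for k in keys:
            if k.startswith(prefix):
                head, sep, _ = k[len(prefix):].partition(delimiter)
                children.add(head + sep)      # entries ending in '/' act as folders
        return sorted(children)

    print(list_prefix(keys, ""))              # ['notes.txt', 'photos/']
    print(list_prefix(keys, "photos/"))       # ['2009/', '2010/']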
3. 2. File systems under Unix-like operating systems
Unix-like operating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means that, in those systems, there is one root directory, and every file existing on the system is located under it somewhere. Unix-like systems can use a RAM disk or a network shared resource as their root directory.
Unix-like systems assign a device name to each device, but this is not how the files on that device are accessed. Instead, to gain access to files on another device, the operating system must first be informed where in the directory tree those files should appear. This process is called mounting a file system. For example, to access the files on a CD-ROM, one must tell the operating system "Take the file system from this CD-ROM and make it appear under such-and-such directory". The directory given to the operating system is called the mount point - it might, for example, be /media. The /media directory exists on many Unix systems (as specified in the Filesystem Hierarchy Standard) and is intended specifically for use as a mount point for removable media such as CDs, DVDs, USB drives or floppy disks. It may be empty, or it may contain subdirectories for mounting individual devices. Generally, only the administrator (i.e. root user) may authorize the mounting of file systems.
Unix-like operating systems often include software and tools that assist in the mounting process and provide it new functionality. Some of these strategies have been coined "auto-mounting" as a reflection of their purpose.
1. In many situations, file systems other than the root need to be available as soon as the operating system has booted. All Unix-like systems therefore provide a facility for mounting file systems at boot time. System administrators define these file systems in the configuration file fstab or vfstab in Solaris Operating Environment, which also indicates options and mount points.
2. In some situations, there is no need to mount certain file systems at boot time, although their use may be desired thereafter. There are some utilities for Unix-like systems that allow the mounting of predefined file systems upon demand.
3. Removable media have become very common with microcomputer platforms. They allow programs and data to be transferred between machines without a physical connection. Common examples include USB flash drives, CD-ROMs, and DVDs. Utilities have therefore been developed to detect the presence and availability of a medium and then mount that medium without any user intervention.
4. Progressive Unix-like systems have also introduced a concept called supermounting; see, for example, the Linux supermount-ng project. For example, a floppy disk that has been supermounted can be physically removed from the system. Under normal circumstances, the disk should have been synchronized and then unmounted before its removal. Provided synchronization has occurred, a different disk can be inserted into the drive. The system automatically notices that the disk has changed and updates the mount point contents to reflect the new medium. Similar functionality is found on Windows machines.
5. A similar innovation preferred by some users is the use of autofs, a system that, like supermounting, eliminates the need for manual mounting commands. The difference from supermount (apart from apparently supporting a greater range of uses, such as access to file systems on network servers) is that devices are mounted transparently when requests to their file systems are made, rather than in response to events such as the insertion of media; on-demand mounting suits file systems on network servers, while event-driven mounting suits removable media.
3. 2. 1. File systems under Linux
Linux supports many different file systems, but common choices for the system disk include the ext* family (such as ext2, ext3 and ext4), XFS, JFS, ReiserFS and btrfs.
3. 2. 2. File systems under Solaris
The Sun Microsystems Solaris operating system in earlier releases defaulted to (non-journaled or non-logging) UFS for bootable and supplementary file systems. Solaris defaulted to, supported, and extended UFS.
Support for other file systems and significant enhancements were added over time, including Veritas Software Corp. (Journaling) VxFS, Sun Microsystems (Clustering) QFS, Sun Microsystems (Journaling) UFS, and Sun Microsystems (open source, poolable, 128 bit compressible, and error-correcting) ZFS.
Kernel extensions were added to Solaris to allow for bootable Veritas VxFS operation. Logging or Journaling was added to UFS in Sun's Solaris 7. Releases of Solaris 10, Solaris Express, OpenSolaris, and other open source variants of the Solaris operating system later supported bootable ZFS.
Logical Volume Management allows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Legacy environments in Solaris may use Solaris Volume Manager (formerly known as Solstice DiskSuite). Multiple operating systems (including Solaris) may use Veritas Volume Manager. Modern Solaris-based operating systems remove the need for separate volume management by leveraging virtual storage pools in ZFS.
3. 2. 3. File systems under Mac OS X
Mac OS X uses a file system that it inherited from classic Mac OS called HFS Plus, sometimes called Mac OS Extended. HFS Plus is a metadata-rich and case preserving file system. Due to the Unix roots of Mac OS X, Unix permissions were added to HFS Plus. Later versions of HFS Plus added journaling to prevent corruption of the file system structure and introduced a number of optimizations to the allocation algorithms in an attempt to defragment files automatically without requiring an external defragmenter.
Filenames can be up to 255 characters long. HFS Plus uses Unicode to store filenames. On Mac OS X, the file type can come from the type code stored in the file's metadata, or from the filename.
HFS Plus has three kinds of links: Unix-style hard links, Unix-style symbolic links and aliases. Aliases are designed to maintain a link to their original file even if they are moved or renamed; they are not interpreted by the file system itself, but by the File Manager code in userland.
Mac OS X also supports the UFS file system, derived from the BSD Unix Fast File System via NeXTSTEP. However, as of Mac OS X 10.5 (Leopard), Mac OS X can no longer be installed on a UFS volume, nor can a pre-Leopard system installed on a UFS volume be upgraded to Leopard. [1]
3. 3. File systems under Plan 9 from Bell Labs
Plan 9 from Bell Labs was originally designed to extend some of Unix's good points, and to introduce some new ideas of its own while fixing the shortcomings of Unix.
With respect to file systems, the Unix system of treating things as files was continued, but in Plan 9, everything is treated as a file, and accessed as a file would be (i.e., no ioctl or mmap). Perhaps surprisingly, while the file interface is made universal it is also simplified considerably: symlinks, hard links and suid are made obsolete, and an atomic create/open operation is introduced. More importantly the set of file operations becomes well defined and subversions of this like ioctl are eliminated.
Secondly, the underlying 9P protocol was used to remove the difference between local and remote files (except for a possible difference in latency or in throughput). This has the advantage that a device or devices, represented by files, on a remote computer could be used as though it were the local computer's own device(s). This means that under Plan 9, multiple file servers provide access to devices, classing them as file systems. Servers for "synthetic" file systems can also run in user space bringing many of the advantages of micro kernel systems while maintaining the simplicity of the system.
Everything on a Plan 9 system has an abstraction as a file; networking, graphics, debugging, authentication, capabilities, encryption, and other services are accessed via I/O operations on file descriptors. For example, this allows the use of the IP stack of a gateway machine without the need for NAT, or provides a network-transparent window system without the need for any extra code.
Another example: a Plan-9 application receives FTP service by opening an FTP site. The ftpfs server handles the open by essentially mounting the remote FTP site as part of the local file system. With ftpfs as an intermediary, the application can now use the usual file-system operations to access the FTP site as if it were part of the local file system. A further example is the mail system which uses file servers that synthesize virtual files and directories to represent a user mailbox as /mail/fs/mbox. The wikifs provides a file system interface to a wiki.
These file systems are organized with the help of private, per-process namespaces, allowing each process to have a different view of the many file systems that provide resources in a distributed system.
The Inferno operating system shares these concepts with Plan 9.
3. 4. File systems under Microsoft Windows
[Image: Directory listing in a Windows command shell]
Windows makes use of the FAT and NTFS file systems.
3. 4. 1. FAT
The File Allocation Table (FAT) filing system, supported by all versions of Microsoft Windows, was an evolution of that used in Microsoft's earlier operating system (MS-DOS which in turn was based on 86-DOS). FAT ultimately traces its roots back to the short-lived M-DOS project and Standalone disk BASIC before it. Over the years various features have been added to it, inspired by similar features found on file systems used by operating systems such as Unix.
Older versions of the FAT file system (FAT12 and FAT16) had file name length limits, a limit on the number of entries in the root directory of the file system and had restrictions on the maximum size of FAT-formatted disks or partitions. Specifically, FAT12 and FAT16 had a limit of 8 characters for the file name, and 3 characters for the extension (such as .exe). This is commonly referred to as the 8.3 filename limit. VFAT, which was an extension to FAT12 and FAT16 introduced in Windows NT 3.5 and subsequently included in Windows 95, allowed long file names (LFN).
FAT32 also addressed many of the limits in FAT12 and FAT16, but remains limited compared to NTFS.
exFAT (also known as FAT64) is the newest iteration of FAT, with certain advantages over NTFS with regards to file system overhead. But unlike prior versions of FAT, exFAT is only compatible with newer Windows systems, such as Windows 2003 and Windows 7.
3. 4. 2. NTFS
NTFS, introduced with the Windows NT operating system, allowed ACL-based permission control. Hard links, multiple file streams, attribute indexing, quota tracking, sparse files, encryption, compression, reparse points (directories working as mount-points for other file systems, symlinks, junctions, remote storage links) are also supported, though not all these features are well-documented.
Unlike many other operating systems, Windows uses a drive letter abstraction at the user level to distinguish one disk or partition from another. For example, the path C:\WINDOWS represents a directory WINDOWS on the partition represented by the letter C. The C drive is most commonly used for the primary hard disk partition, on which Windows is usually installed and from which it boots. This "tradition" has become so firmly ingrained that bugs came about in older applications which made assumptions that the drive that the operating system was installed on was C. The tradition of using "C" for the drive letter can be traced to MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in turn derived from CP/M in the 1970s, which however used A: and B: for hard drives, and C: for floppy disks, and ultimately from IBM's CP/CMS of 1967.
Flash memory (1/3)
Not to be confused with USB flash drive or Memory card.
Flash memory is a non-volatile computer storage technology that can be electrically erased and reprogrammed. It is primarily used in memory cards and USB flash drives for general storage and transfer of data between computers and other digital products. It is a specific type of EEPROM (electrically-erasable programmable read-only memory) that is erased and programmed in large blocks; in early flash the entire chip had to be erased at once. Flash memory costs far less than byte-programmable EEPROM and therefore has become the dominant technology wherever a significant amount of non-volatile, solid state storage is needed. Example applications include PDAs (personal digital assistants), laptop computers, digital audio players, digital cameras and mobile phones. It has also gained popularity in console video game hardware, where it is often used instead of EEPROMs or battery-powered static RAM (SRAM) for game save data.
[Image: A USB flash drive. The chip on the left is the flash memory. The microcontroller is on the right.]
Since flash memory is non-volatile, no power is needed to maintain the information stored in the chip. In addition, flash memory offers fast read access times (although not as fast as volatile DRAM memory used for main memory in PCs) and better kinetic shock resistance than hard disks. These characteristics explain the popularity of flash memory in portable devices. Another feature of flash memory is that when packaged in a "memory card," it is extremely durable, being able to withstand intense pressure, extremes of temperature, and even immersion in water.
Although technically a type of EEPROM, the term "EEPROM" is generally used to refer specifically to non-flash EEPROM which is erasable in small blocks, typically bytes. Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over old-style EEPROM when writing large amounts of data.[citation needed]
Contents:
1. History
2. Principles of operation
3. Limitations
4. Low-level access
5. Distinction between NOR and NAND flash
6. Flash file systems
7. Capacity
8. Transfer rates
9. Applications
10. Industry
11. Flash scalability
12. See also
13. References
14. External links
|Computer memory types | |
|Volatile |DRAM (e.g. DDR SDRAM), SRAM |
| |Upcoming: T-RAM, Z-RAM, TTRAM |
| |Historical: Delay line memory, Selectron tube, Williams tube |
|Non-volatile |ROM, PROM, EPROM, EEPROM, Flash memory |
| |Upcoming: FeRAM, MRAM, CBRAM, PRAM, SONOS, RRAM, Racetrack memory, NRAM, Millipede |
| |Historical: Drum memory, Magnetic core memory, Plated wire memory, Bubble memory, Twistor memory |
1. History
Flash memory (both NOR and NAND types) was invented by Dr. Fujio Masuoka while working for Toshiba circa 1980. [1] [2] According to Toshiba, the name "flash" was suggested by Dr. Masuoka's colleague, Mr. Shoji Ariizumi, because the erasure process of the memory contents reminded him of the flash of a camera. Dr. Masuoka presented the invention at the IEEE 1984 International Electron Devices Meeting (IEDM) held in San Francisco, California.
Intel Corporation saw the massive potential of the invention and introduced the first commercial NOR type flash chip in 1988. [3] NOR-based flash has long erase and write times, but provides full address and data buses, allowing random access to any memory location. This makes it a suitable replacement for older read-only memory (ROM) chips, which are used to store program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance is 10,000 to 1,000,000 erase cycles. [4] NOR-based flash was the basis of early flash-based removable media; CompactFlash was originally based on it, though later cards moved to less expensive NAND flash.
Toshiba announced NAND flash at the 1987 International Electron Devices Meeting. It has reduced erase and write times, and requires less chip area per cell, thus allowing greater storage density and lower cost per bit than NOR flash; it also has up to ten times the endurance of NOR flash. However, the I/O interface of NAND flash does not provide a random-access external address bus. Rather, data must be read on a block-wise basis, with typical block sizes of hundreds to thousands of bits. This made NAND flash unsuitable as a drop-in replacement for program ROM since most microprocessors and microcontrollers required byte-level random access. In this regard NAND flash is similar to other secondary storage devices such as hard disks and optical media, and is thus very suitable for use in mass-storage devices such as memory cards. The first NAND-based removable media format was SmartMedia in 1995, and many others have followed, including MultiMediaCard, Secure Digital, Memory Stick and xD-Picture Card. A new generation of memory card formats, including RS-MMC, miniSD and microSD, and Intelligent Stick, feature extremely small form factors. For example, the microSD card has an area of just over 1.5 cm2, with a thickness of less than 1 mm; microSD capacities range from 64 MB to 32 GB, as of March 2010. [5]
2. Principles of operation
[Image: A flash memory cell.]
Flash memory stores information in an array of memory cells made from floating-gate transistors. In traditional single-level cell (SLC) devices, each cell stores only one bit of information. Some newer flash memory, known as multi-level cell (MLC) devices, can store more than one bit per cell by choosing between multiple levels of electrical charge to apply to the floating gates of its cells.
The floating gate may be conductive (typically polysilicon in most kinds of flash memory) or non-conductive (as in SONOS flash memory). [6]
2. 1. Floating-gate transistor
In flash memory, each memory cell resembles a standard MOSFET, except the transistor has two gates instead of one. On top is the control gate (CG), as in other MOS transistors, but below this there is a floating gate (FG) insulated all around by an oxide layer. The FG is interposed between the CG and the MOSFET channel. Because the FG is electrically isolated by its insulating layer, any electrons placed on it are trapped there and, under normal conditions, will not discharge for many years. When the FG holds a charge, it screens (partially cancels) the electric field from the CG, which modifies the threshold voltage (VT) of the cell. During read-out, a voltage intermediate between the possible threshold voltages is applied to the CG, and the MOSFET channel will become conducting or remain insulating, depending on the VT of the cell, which is in turn controlled by charge on the FG. The current flow through the MOSFET channel is sensed and forms a binary code, reproducing the stored data. In a multi-level cell device, which stores more than one bit per cell, the amount of current flow is sensed (rather than simply its presence or absence), in order to determine more precisely the level of charge on the FG.
2. 2. NOR flash
[Image: NOR flash memory wiring and structure on silicon]
In NOR gate flash, each cell has one end connected directly to ground, and the other end connected directly to a bit line.
This arrangement is called "NOR flash" because it acts like a NOR gate: when one of the word lines is brought high, the corresponding storage transistor acts to pull the output bit line low.
2. 2. 1. Programming
[Image: Programming a NOR memory cell (setting it to logical 0), via hot-electron injection.]
A single-level NOR flash cell in its default state is logically equivalent to a binary "1" value, because current will flow through the channel under application of an appropriate voltage to the control gate. A NOR flash cell can be programmed, or set to a binary "0" value, by the following procedure:
• an elevated on-voltage (typically >5 V) is applied to the CG
• the channel is now turned on, so electrons can flow from the source to the drain (assuming an NMOS transistor)
• the source-drain current is sufficiently high to cause some high energy electrons to jump through the insulating layer onto the FG, via a process called hot-electron injection
2. 2. 2. Erasing
[Image: Erasing a NOR memory cell (setting it to logical 1), via quantum tunneling.]
To erase a NOR flash cell (resetting it to the "1" state), a large voltage of the opposite polarity is applied between the CG and source, pulling the electrons off the FG through quantum tunneling. Modern NOR flash memory chips are divided into erase segments (often called blocks or sectors). The erase operation can only be performed on a block-wise basis; all the cells in an erase segment must be erased together. Programming of NOR cells, however, can generally be performed one byte or word at a time.
2. 2. 3. Internal charge pumps
Despite the need for high programming and erasing voltages, virtually all flash chips today require only a single supply voltage, and produce the high voltages via on-chip charge pumps.
2. 3. NAND flash
[Image: NAND flash memory wiring and structure on silicon]
NAND flash also uses floating-gate transistors, but they are connected in a way that resembles a NAND gate: several transistors are connected in series, and only if all word lines are pulled high (above the transistors' VT) is the bit line pulled low. These groups are then connected via some additional transistors to a NOR-style bit line array.
To read, most of the word lines are pulled up above the VT of a programmed bit, while one of them is pulled up to just over the VT of an erased bit. The series group will conduct (and pull the bit line low) if the selected bit has not been programmed.
Despite the additional transistors, the reduction in ground wires and bit lines allows a denser layout and greater storage capacity per chip. In addition, NAND flash is typically permitted to contain a certain number of faults (NOR flash, as used for a BIOS ROM, is expected to be fault-free). Manufacturers try to maximize the amount of usable storage by shrinking the transistors below the size at which they can be made reliably, down to the point where further reductions would increase the number of faults faster than they would increase the total storage available.
NAND flash uses tunnel injection for writing and tunnel release for erasing. NAND flash memory forms the core of the removable USB storage devices known as USB flash drives and most memory card formats available today.
3. Limitations
3. 1. Block erasure
One limitation of flash memory is that although it can be read or programmed a byte or a word at a time in a random access fashion, it must be erased a "block" at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations, but cannot offer arbitrary random-access rewrite or erase operations. A location can, however, be rewritten as long as the new value's 0 bits are a superset of the over-written value's. For example, a nibble value may be erased to 1111, then written as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. Essentially, erasure sets (all) bits, and programming can only clear bits. Filesystems designed for flash devices can make use of this capability to represent sector metadata.
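The rule can be stated compactly: a new value can be programmed over an old one without an erase only if it merely clears bits, i.e. old AND new equals new. A tiny sketch (purely illustrative; real controllers work at page granularity):

    def program(old: int, new: int) -> int:
        """Programming may only clear bits; anything else needs a block erase."""
        if (old & new) != new:
            raise ValueError("requires a block erase first")
        return old & new

    nibble = 0b1111                             # freshly erased
    for value in (0b1110, 0b1010, 0b0010, 0b0000):
        nibble = program(nibble, value)         # each step only clears bits
    # program(0b0000, 0b0001) would raise: a 0 cannot return to 1 without erasing.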
Although data structures in flash memory cannot be updated in completely general ways, this allows members to be "removed" by marking them as invalid. This technique may need to be modified for Multi-level Cell devices, where one memory cell holds more than one bit.
Unfortunately, common flash devices such as USB keys and memory cards provide only a block-level interface, or flash translation layer (FTL), which writes to a different cell each time in order to wear-level the device. This prevents incremental writing within a block; however, it does help prevent the device from being prematurely worn out by abusive or poorly designed hardware or software. For example, nearly all consumer devices ship formatted with the MS-FAT file system, which predates flash memory, having been designed for DOS and disk media.
3. 2. Memory wear
Another limitation is that flash memory has a finite number of erase-write cycles. Most commercially available flash products are guaranteed to withstand around 100,000 write-erase cycles, before the wear begins to deteriorate the integrity of the storage. [7] Micron Technology and Sun Microsystems announced an SLC flash memory chip rated for 1,000,000 write-erase-cycles on December 17, 2008. [8]
The guaranteed cycle count may apply only to block zero (as is the case with TSOP NAND parts), or to all blocks (as in NOR). This effect is partially offset in some chip firmware or file system drivers by counting the writes and dynamically remapping blocks in order to spread write operations between sectors; this technique is called wear levelling. Another approach is to perform write verification and remapping to spare sectors in case of write failure, a technique called Bad Block Management (BBM). For portable consumer devices, these wearout management techniques typically extend the life of the flash memory beyond the life of the device itself, and some data loss may be acceptable in these applications. For high reliability data storage, however, it is not advisable to use flash memory that would have to go through a large number of programming cycles. This limitation is meaningless for 'read-only' applications such as thin clients and routers, which are only programmed once or at most a few times during their lifetimes.
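A toy model of dynamic wear levelling (illustrative only; real controllers also persist the mapping, track erase counts across power cycles, and combine this with bad block management; the sketch assumes more physical blocks than logical blocks in use):

    class WearLeveler:
        """Map logical blocks to physical blocks, steering each new write to the
        least-worn free physical block so erases spread evenly."""

        def __init__(self, num_physical: int):
            self.erase_count = [0] * num_physical
            self.mapping = {}                    # logical block -> physical block
            self.free = set(range(num_physical))

        def write(self, logical: int, data: bytes) -> int:
            target = min(self.free, key=lambda b: self.erase_count[b])
            self.free.remove(target)
            old = self.mapping.get(logical)
            if old is not None:
                self.erase_count[old] += 1       # stale copy is erased and freed for reuse
                self.free.add(old)
            self.mapping[logical] = target       # (writing `data` to flash omitted)
            return target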
4. Low-level access
The low-level interface to flash memory chips differs from those of other memory types such as DRAM, ROM, and EEPROM, which support bit-alterability (both zero to one and one to zero) and random-access via externally accessible address buses.
While NOR memory provides an external address bus for read and program operations (and thus supports random access), unlocking and erasing NOR memory must proceed on a block-by-block basis. With NAND flash memory, read and programming operations must be performed a page at a time, while unlocking and erasing must happen in block-wise fashion.
4. 1. NOR memories
Reading from NOR flash is similar to reading from random-access memory, provided the address and data bus are mapped correctly. Because of this, most microprocessors can use NOR flash memory as execute in place (XIP) memory, meaning that programs stored in NOR flash can be executed directly from the NOR flash without needing to be copied into RAM first. NOR flash may be programmed in a random-access manner similar to reading. Programming changes bits from a logical one to a zero. Bits that are already zero are left unchanged. Erasure must happen a block at a time, and resets all the bits in the erased block back to one. Typical block sizes are 64, 128, or 256 Kilobytes.
Bad block management is a relatively new feature in NOR chips. In older NOR devices not supporting bad block management, the software or device driver controlling the memory chip must correct for blocks that wear out, or the device will cease to work reliably.
The specific commands used to lock, unlock, program, or erase NOR memories differ for each manufacturer. To avoid needing unique driver software for every device made, a special set of CFI commands allow the device to identify itself and its critical operating parameters.
Apart from being used as random-access ROM, NOR memories can also be used as storage devices by taking advantage of random-access programming. Some devices offer read-while-write functionality so that code continues to execute even while a program or erase operation is occurring in the background. For sequential data writes, NOR flash chips typically have slow write speeds compared with NAND flash.
4. 2. NAND memories
NAND flash architecture was introduced by Toshiba in 1989. These memories are accessed much like block devices such as hard disks or memory cards. Each block consists of a number of pages. The pages are typically 512 [9] or 2,048 or 4,096 bytes in size. Associated with each page are a few bytes (typically 1/32 of the data size) that can be used for storage of an error correcting code (ECC) checksum.
Typical block sizes include the following (a quick arithmetic check appears after the list):
• 32 pages of 512+16 bytes each for a block size of 16 KB
• 64 pages of 2,048+64 bytes each for a block size of 128 KB [10]
• 64 pages of 4,096+128 bytes each for a block size of 256 KB [11]
• 128 pages of 4,096+128 bytes each for a block size of 512 KB.
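The block sizes listed above follow directly from pages × (data + spare) bytes per page; a short check of the arithmetic (illustration only):

    def block_geometry(pages: int, data_bytes: int, spare_bytes: int):
        """Return (data capacity in KB, spare area in bytes) for one NAND block."""
        return pages * data_bytes // 1024, pages * spare_bytes

    print(block_geometry(64, 2048, 64))      # (128, 4096): 128 KB block, 4 KB spare
    print(block_geometry(128, 4096, 128))    # (512, 16384): 512 KB block, 16 KB spare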
While reading and programming are performed on a page basis, erasure can only be performed on a block basis. Another limitation of NAND flash is that data within a block can only be written sequentially. The Number of Operations (NOPs) is the number of times a sector can be programmed; so far this number is always one for MLC flash, whereas for SLC flash it is four.[citation needed]
NAND devices also require bad block management by the device driver software, or by a separate controller chip. SD cards, for example, include controller circuitry to perform bad block management and wear leveling. When a logical block is accessed by high-level software, it is mapped to a physical block by the device driver or controller. A number of blocks on the flash chip may be set aside for storing mapping tables to deal with bad blocks, or the system may simply check each block at power-up to create a bad block map in RAM. The overall memory capacity gradually shrinks as more blocks are marked as bad.
NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation. A typical ECC will correct a one bit error in each 2048 bits (256 bytes) using 22 bits of ECC code, or a one bit error in each 4096 bits (512 bytes) using 24 bits of ECC code. [12] If ECC cannot correct the error during read, it may still detect the error. When doing erase or program operations, the device can detect blocks that fail to program or erase and mark them bad. The data is then written to a different, good block, and the bad block map is updated.
Most NAND devices are shipped from the factory with some bad blocks which are typically identified and marked according to a specified bad block marking strategy. By allowing some bad blocks, the manufacturers achieve far higher yields than would be possible if all blocks had to be verified good. This significantly reduces NAND flash costs and only slightly decreases the storage capacity of the parts.
When executing software from NAND memories, virtual memory strategies are often used: memory contents must first be paged or copied into memory-mapped RAM and executed there (leading to the common combination of NAND + RAM). A memory management unit (MMU) in the system is helpful, but this can also be accomplished with overlays. For this reason, some systems will use a combination of NOR and NAND memories, where a smaller NOR memory is used as software ROM and a larger NAND memory is partitioned with a file system for use as a non-volatile data storage area.
NAND is best suited to systems requiring high capacity data storage. This type of flash architecture offers higher densities and larger capacities at lower cost with faster erase, sequential write, and sequential read speeds, sacrificing the random-access and execute in place advantage of the NOR architecture.
4. 3. Standardization
A group called the Open NAND Flash Interface Working Group (ONFI) has developed a standardized low-level interface for NAND flash chips. This allows interoperability between conforming NAND devices from different vendors. The ONFI specification version 1.0 [13] was released on December 28, 2006. It specifies:
• a standard physical interface (pinout) for NAND flash in TSOP-48, WSOP-48, LGA-52, and BGA-63 packages
• a standard command set for reading, writing, and erasing NAND flash chips
• a mechanism for self-identification (comparable to the Serial Presence Detection feature of SDRAM memory modules)
The ONFI group is supported by major NAND Flash manufacturers, including Hynix, Intel, Micron Technology, and Numonyx, as well as by major manufacturers of devices incorporating NAND flash chips. [14]
A group of vendors, including Intel, Dell, and Microsoft formed a Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group. [15] The goal of the group is to provide standard software and hardware programming interfaces for nonvolatile memory subsystems, including the "flash cache" device connected to the PCI Express bus.
5. Distinction between NOR and NAND flash
NOR and NAND flash differ in two important ways:
• the connections of the individual memory cells are different
• the interface provided for reading and writing the memory is different (NOR allows random-access for reading, NAND allows only page access)
These two are linked by the design choices made in the development of NAND flash. A goal of NAND flash development was to reduce the chip area required to implement a given capacity of flash memory, and thereby to reduce cost per bit and increase maximum chip capacity so that flash memory could compete with magnetic storage devices like hard disks.[citation needed]
NOR and NAND flash get their names from the structure of the interconnections between memory cells. [16] In NOR flash, cells are connected in parallel to the bitlines, allowing cells to be read and programmed individually. The parallel connection of cells resembles the parallel connection of transistors in a CMOS NOR gate. In NAND flash, cells are connected in series, resembling a NAND gate. The series connections consume less space than parallel ones, reducing the cost of NAND flash. It does not, by itself, prevent NAND cells from being read and programmed individually.
When NOR flash was developed, it was envisioned as a more economical and conveniently rewritable ROM than contemporary EPROM, EAROM, and EEPROM memories. Thus random-access reading circuitry was necessary. However, it was expected that NOR flash ROM would be read much more often than written, so the write circuitry included was fairly slow and could only erase in a block-wise fashion. On the other hand, applications that use flash as a replacement for disk drives do not require word-level write addressing, which would only add complexity and cost unnecessarily.[citation needed]
Because of the series connection and removal of wordline contacts, a large grid of NAND flash memory cells will occupy perhaps only 60% of the area of equivalent NOR cells [17] (assuming the same CMOS process resolution, e.g. 130 nm, 90 nm, or 65 nm). NAND flash's designers realized that the area of a NAND chip, and thus the cost, could be further reduced by removing the external address and data bus circuitry. Instead, external devices could communicate with NAND flash via sequentially accessed command and data registers, which would internally retrieve and output the necessary data. This design choice made random access of NAND flash memory impossible, but the goal of NAND flash was to replace hard disks, not to replace ROMs.
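As an illustration of this command/data-register style of access, the sketch below mimics the page-read sequence (command 00h, address cycles, command 30h, then sequential reads of the data register) used by many large-page NAND parts. The bus object and its methods are hypothetical stand-ins for a real driver's register accessors, and the geometry constants are typical values rather than those of any specific chip.

```python
# Sketch of reading one page from a large-page NAND device through its
# command and data registers. The `bus` object is a hypothetical stand-in
# for memory-mapped or GPIO-driven access to the chip's CLE/ALE/IO lines.

PAGE_SIZE = 2048          # data bytes per page (typical, device-specific)
PAGES_PER_BLOCK = 64

def read_page(bus, block, page):
    row = block * PAGES_PER_BLOCK + page
    bus.command(0x00)                     # READ, first cycle
    bus.address(0x00)                     # column address, low byte
    bus.address(0x00)                     # column address, high byte
    bus.address(row & 0xFF)               # row address, 3 cycles
    bus.address((row >> 8) & 0xFF)
    bus.address((row >> 16) & 0xFF)
    bus.command(0x30)                     # READ, confirm cycle
    bus.wait_ready()                      # ready when the page has been
                                          # loaded into the internal register
    # Data can then only be streamed out of the register sequentially.
    return bytes(bus.read_data() for _ in range(PAGE_SIZE))
```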
5. 1. Write Endurance
The write endurance of SLC floating-gate NOR flash is typically equal to or greater than that of NAND flash, while MLC NOR and NAND flash have similar endurance capabilities. Example endurance cycle ratings listed in datasheets for NAND and NOR flash include:
• SLC NAND flash is typically rated at about 100K cycles (Samsung OneNAND KFW4G16Q2M)
• MLC NAND flash is typically rated at about 5K-10K cycles (Samsung K9G8G08U0M)
• SLC floating-gate NOR flash is typically rated at 100K to 1,000K cycles (Numonyx M58BW 100K; Spansion S29CD016J 1000K)
• MLC floating-gate NOR flash is typically rated at 100K cycles (Numonyx J3 Flash)
However, by applying certain algorithms and design paradigms such as wear-levelling and memory over-provisioning, the endurance of a storage system can be tuned to serve specific requirements. [18]
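As a rough illustration of how wear levelling and over-provisioning extend endurance, the following toy model always directs a rewrite to the least-worn free block. It is a sketch of the general idea, not any particular controller's algorithm; all names and sizes are invented.

```python
# Toy model of wear levelling with over-provisioning: writes always go to
# the least-worn free block, and the block previously holding the data is
# erased and returned to the free pool.

class WearLeveller:
    def __init__(self, physical_blocks, logical_blocks):
        assert physical_blocks > logical_blocks          # over-provisioning
        self.erase_count = [0] * physical_blocks
        self.free = set(range(logical_blocks, physical_blocks))
        self.map = {l: l for l in range(logical_blocks)}  # logical -> physical

    def write(self, logical):
        old = self.map[logical]
        new = min(self.free, key=lambda b: self.erase_count[b])
        self.free.remove(new)
        self.map[logical] = new          # new data lands on the least-worn block
        self.erase_count[old] += 1       # old copy is erased and recycled
        self.free.add(old)

lev = WearLeveller(physical_blocks=12, logical_blocks=8)
for _ in range(1000):
    lev.write(0)                         # hammer a single logical block
print(max(lev.erase_count))              # ~200: erases spread over a pool of
                                         # blocks instead of 1000 on one block
```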
6. Flash file systems
Main article: Flash file system
Because of the particular characteristics of flash memory, it is best used with either a controller to perform wear-levelling and error correction or specifically designed flash file systems, which spread writes over the media and deal with the long erase times of NOR flash blocks[citation needed]. The basic concept behind flash file systems is: When the flash store is to be updated, the file system will write a new copy of the changed data to a fresh block, remap the file pointers, then erase the old block later when it has time.
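A minimal sketch of that update-out-of-place idea follows. The structure is far simpler than a real flash file system (there is no journalling, garbage-collection policy, or power-loss handling), and the page counts are arbitrary.

```python
# Sketch of the basic flash-file-system idea: never overwrite in place.
# A changed logical page is written to a fresh location and the pointer is
# remapped; the stale copy is reclaimed (erased) later.

class TinyFlashStore:
    def __init__(self, pages):
        self.media = [None] * pages       # None = erased/blank page
        self.map = {}                     # logical page -> physical page
        self.dirty = []                   # stale pages awaiting erase

    def write(self, logical, data):
        fresh = self.media.index(None)    # pick any blank page
        self.media[fresh] = data
        if logical in self.map:
            self.dirty.append(self.map[logical])   # old copy becomes stale
        self.map[logical] = fresh

    def read(self, logical):
        return self.media[self.map[logical]]

    def garbage_collect(self):
        """Erase stale pages later, when there is time."""
        for p in self.dirty:
            self.media[p] = None
        self.dirty.clear()

store = TinyFlashStore(pages=8)
store.write(0, b"v1")
store.write(0, b"v2")          # lands on a new page; old page marked stale
print(store.read(0))           # b"v2"
store.garbage_collect()        # old page erased in the background
```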
In practice, flash file systems are only used for "Memory Technology Devices" ("MTD")[citation needed], which are embedded flash memories that do not have a controller. Removable flash memory cards and USB flash drives have built-in controllers to perform wear-levelling and error correction[citation needed] so use of a specific flash file system does not add any benefit[citation needed].
7. Capacity
Multiple chips are often arrayed to achieve higher capacities for use in consumer electronic devices such as multimedia players or GPS units. The capacity of flash chips generally follows Moore's Law because they are manufactured with many of the same integrated circuit techniques and equipment.
Consumer flash drives typically have sizes measured in powers of two (e.g. 512 MB, 8 GB). This includes SSDs as hard drive replacements[citation needed], even though traditional hard drives tend to use decimal units. Thus, a 64 GB SSD is actually 64 × 1024³ bytes. In reality, most users will have slightly less capacity than this available, due to the space taken by filesystem metadata.
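The gap between the binary and decimal readings of a figure such as "64 GB" is easy to check; a small sketch of the arithmetic:

```python
# Binary vs. decimal interpretation of a "64 GB" capacity figure.
binary_bytes  = 64 * 1024**3      # 64 GiB = 68,719,476,736 bytes
decimal_bytes = 64 * 1000**3      # 64 GB  = 64,000,000,000 bytes
print(binary_bytes, decimal_bytes)
print(f"difference: {binary_bytes - decimal_bytes:,} bytes "
      f"({binary_bytes / decimal_bytes - 1:.1%})")   # about 7.4% larger
```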
In 2005, Toshiba and SanDisk developed a NAND flash chip capable of storing 1 GB of data using multi-level cell (MLC) technology, which stores 2 bits of data per cell. In September 2005, Samsung Electronics announced that it had developed the world's first 2 GB chip. [19]
In March 2006, Samsung announced flash hard drives with a capacity of 4GB, essentially the same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung announced an 8GB chip produced using a 40 nanometer manufacturing process. [20]
In January 2008 Sandisk announced availability of their 16 GB MicroSDHC and 32 GB SDHC Plus cards. [21] [22]
In 2009, Kingston announced a 256 GB flash drive, initially available only in the UK and other parts of Europe.
There are still flash-chips manufactured with capacities under or around 1MB, e.g., for BIOS-ROMs and embedded applications.
8. Transfer rates
NAND flash memory cards are much faster at reading than writing, so it is the maximum read speed that is commonly advertised. As a chip wears out, its erase/program operations slow down considerably[citation needed], requiring more retries and bad block remapping. Transferring multiple small files, each smaller than the chip-specific block size, can lead to a much lower rate. Access latency also influences performance, but less so than with hard drives.
The speed is sometimes quoted in MB/s (megabytes per second), or as a multiple of that of a legacy single-speed CD-ROM, such as 60x, 100x or 150x. Here 1x is equivalent to 150 kilobytes per second. For example, a 100x memory card gives 150 KB/s × 100 = 15,000 KB/s = 14.65 MB/s.
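That conversion is simple to reproduce; a short sketch using the 150 KB/s base rate quoted above:

```python
# Convert an "Nx" memory card speed rating to KB/s and MB/s,
# using the 150 KB/s single-speed CD-ROM base rate quoted above.
BASE_KB_S = 150

def card_speed(rating_x):
    kb_s = rating_x * BASE_KB_S
    return kb_s, kb_s / 1024          # MB/s here treats 1 MB as 1024 KB

for rating in (60, 100, 150):
    kb_s, mb_s = card_speed(rating)
    print(f"{rating}x = {kb_s:,} KB/s = {mb_s:.2f} MB/s")
# 100x = 15,000 KB/s = 14.65 MB/s, matching the example above.
```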
Performance also depends on the quality of memory controllers. Even when the only change to manufacturing is die-shrink, the absence of an appropriate controller can result in degraded speeds. [23]
9. Applications
9. 1. Serial flash
Serial flash is a small, low-power flash memory that uses a serial interface, typically SPI, for sequential data access. When incorporated into an embedded system, serial flash requires fewer wires on the PCB than parallel flash memories, since it transmits and receives data one bit at a time. This may permit a reduction in board space, power consumption, and total system cost.
There are several reasons why a serial device, with fewer external pins than a parallel device, can significantly reduce overall cost:
• Many ASICs are pad-limited, meaning that the size of the die is constrained by the number of wire bond pads, rather than the complexity and number of gates used for the device logic. Eliminating bond pads thus permits a more compact integrated circuit, on a smaller die; this increases the number of dies that may be fabricated on a wafer, and thus reduces the cost per die.
• Reducing the number of external pins also reduces assembly and packaging costs. A serial device may be packaged in a smaller and simpler package than a parallel device.
• Smaller and lower pin-count packages occupy less PCB area.
• Lower pin-count devices simplify PCB routing.
9. 1. 1. Firmware storage
With the increasing speed of modern CPUs, parallel flash devices are often much slower than the memory bus of the computer they are connected to. By contrast, modern SRAM offers access times below 10 ns, while DDR2 SDRAM offers access times below 20 ns. Because of this, it is often desirable to shadow code stored in flash into RAM; that is, the code is copied from flash into RAM before execution, so that the CPU may access it at full speed. Device firmware may be stored in a serial flash device and then copied into SDRAM or SRAM when the device is powered up. [24] Using an external serial flash device rather than on-chip flash removes the need for significant process compromise (a process that is good for high-speed logic is generally not good for flash, and vice versa). Once it is decided to read the firmware in as one big block, it is common to add compression so that a smaller flash chip can be used. Typical applications for serial flash include storing firmware for hard drives, Ethernet controllers, DSL modems, wireless network devices, etc.
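A sketch of the boot-time shadowing described above: the firmware image is read out of the serial flash part as one block, decompressed, and copied into RAM before the CPU jumps to it. The spi_read helper, the addresses, and the use of zlib compression are illustrative assumptions, not any specific device's interface.

```python
# Sketch of shadowing compressed firmware from serial (SPI) flash into RAM
# at power-up. Everything here is a simplified stand-in for real driver code.
import zlib

FIRMWARE_OFFSET = 0x0000          # where the image starts in the flash part

def spi_read(offset, length):
    """Stand-in for a real SPI driver's sequential read. Here it just
    returns a dummy compressed image so the sketch is runnable."""
    return zlib.compress(b"\x90" * 64 * 1024)   # pretend firmware image

def shadow_firmware(ram):
    compressed = spi_read(FIRMWARE_OFFSET, length=None)   # one big block
    image = zlib.decompress(compressed)   # compression allows a smaller flash chip
    ram[:len(image)] = image              # copy into RAM; the CPU then runs from RAM
    return len(image)

ram = bytearray(1 * 1024 * 1024)          # pretend 1 MB of SDRAM
print(shadow_firmware(ram), "bytes shadowed")
```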
9. 2. Flash memory as a replacement for hard drives
Main article: Solid-state drive
An obvious extension of flash memory would be as a replacement for hard disks. Flash memory does not have the mechanical limitations and latencies of hard drives, so the idea of a solid-state drive, or SSD, is attractive when considering speed, noise, power consumption, and reliability. Flash drives are considered serious candidates for mobile device secondary storage; they are not yet competitors for hard drives in desktop computers or servers with RAID and SAN architectures.
There remain some aspects of flash-based SSDs that make the idea unattractive. Most important, the cost per gigabyte of flash memory remains significantly higher than that of platter-based hard drives. Although this ratio is decreasing rapidly for flash memory, it is not yet clear that flash memory will catch up to the capacities and affordability offered by platter-based storage. Still, research and development is sufficiently vigorous that it is not clear that it will not happen, either.
There is also some concern that the finite number of erase/write cycles of flash memory would render flash memory unable to support an operating system. This seems to be a decreasing issue as warranties on flash-based SSDs are approaching those of current hard drives. [25] [26]
In June 2006, Samsung Electronics released the first flash-memory based PCs, the Q1-SSD and Q30-SSD, both of which used 32 GB SSDs and were at least initially available only in South Korea. [27] Dell Computer introduced a 32 GB SSD option on its Latitude D420 and D620 ATG laptops in April 2007, at $549 more than a hard-drive equipped version. [28]
At the Las Vegas CES 2007 Summit, Taiwanese memory company A-DATA showcased SSDs based on flash technology in capacities of 32 GB, 64 GB and 128 GB. [29] SanDisk announced an OEM 32 GB 1.8" SSD drive at CES 2007. [30] The XO-1, developed by the One Laptop Per Child (OLPC) association, uses flash memory rather than a hard drive. As of March 2009, a Salt Lake City company called Fusion-io claimed the fastest SSD, with sequential read/write speeds of 1500 MB/s and 1400 MB/s. [31]
Rather than entirely replacing the hard drive, hybrid techniques such as hybrid drive and ReadyBoost attempt to combine the advantages of both technologies, using flash as a high-speed cache for files on the disk that are often referenced, but rarely modified, such as application and operating system executable files. Also, Addonics has a PCI adapter for 4 CF cards, [32] creating a RAID-able array of solid-state storage that is much cheaper than the hardwired-chips PCI card kind.
The ASUS Eee PC uses a flash-based SSD of 2 GB to 20 GB, depending on model. The Apple MacBook Air has the option to upgrade the standard hard drive to a 128 GB solid-state drive. The Lenovo ThinkPad X300 also features a built-in 64 GB solid-state drive. The Apple iPad has flash-based SSDs of 16, 32, and 64 GB.
Sharkoon has developed a device that uses six SDHC cards in RAID-0 as an SSD alternative; users can combine affordable high-speed 8 GB SDHC cards to obtain results similar to or better than those of traditional SSDs, at a lower cost.
10. Industry
One source states that in 2008 the flash memory industry included about US$9.1 billion in production and sales. Apple Inc. was the third largest purchaser of flash memory, consuming about 13% of production by itself. [33] Other sources put the flash memory market at a size of more than US$20 billion in 2006, accounting for more than eight percent of the overall semiconductor market and more than 34 percent of the total semiconductor memory market. [34]
11. Flash scalability
[pic]
The aggressive trend of process design rule shrinks in NAND Flash memory technology effectively accelerates Moore's Law.
Due to its relatively simple structure and high demand for higher capacity, NAND flash memory is the most aggressively scaled technology among electronic devices. The heavy competition among the top few manufacturers only adds to the aggressiveness. Current projections show the technology to reach approximately 20 nm by around 2010. While the expected shrink timeline is a factor of two every three years per original version of Moore's law, this has recently been accelerated in the case of NAND flash to a factor of two every two years.
As the feature size of flash memory cells reaches the minimum limit (currently estimated at ~20 nm), further flash density increases will be driven by greater levels of MLC, possibly 3-D stacking of transistors, and process improvements. Even with these advances, it may be impossible to economically scale flash to smaller and smaller dimensions. Many promising new technologies (such as FeRAM, MRAM, PMC, PCM, and others) are under investigation and development as possible more scalable replacements for flash. [35]
Removable media
In computer storage, removable media refers to storage media which is designed to be removed from the computer without powering the computer off.
Some types of removable media are designed to be read by removable readers and drives. Examples include:
• Optical discs (Blu-ray discs, DVDs, CDs)
• Memory cards (CompactFlash card, Secure Digital card, Memory Stick)
• Floppy disks / Zip disks
• Magnetic tapes
• Paper data storage (punched cards, punched tapes)
Some removable media readers and drives are integrated into computers, others are themselves removable.
Removable media may also refer to some removable storage devices, when they are used to transport or store data. Examples include:
• USB flash drives
• External hard disk drives
Optical disc drive
[pic]
In computing, an optical disc drive (ODD) is a disk drive that uses laser light or electromagnetic waves near the light spectrum as part of the process of reading or writing data to or from optical discs. Some drives can only read from discs, but recent drives are commonly both readers and recorders. Recorders are sometimes called burners or writers. Compact discs, DVDs, HD DVDs and Blu-ray discs are common types of optical media which can be read and recorded by such drives.
[pic]
A CD-ROM Drive
Optical disc drives are an integral part of stand-alone consumer appliances such as CD players, DVD players and DVD recorders. They are also very commonly used in computers to read software and consumer media distributed in disc form, and to record discs for archival and data exchange. Optical drives—along with flash memory—have mostly displaced floppy disk drives and magnetic tape drives for this purpose because of the low cost of optical media and the near-ubiquity of optical drives in computers and consumer entertainment hardware.
[pic]
A CD-ROM Drive (without case)
Disc recording is generally restricted to small-scale backup and distribution, being slower and more materially expensive per unit than the moulding process used to mass-manufacture pressed discs.
Contents:
1. Laser and optics
2. Rotational mechanism
3. Loading mechanisms
4. Computer interfaces
5. Compatibility
6. Recording performance
7. Recording schemes
8. See also
9. References
10. External links
1. Laser and optics
The most important part of an optical disc drive is an optical path, placed in a pickup head (PUH), [1] usually consisting of a semiconductor laser, a lens for guiding the laser beam, and photodiodes detecting the light reflected from the disc's surface. [2]
[pic]
The CD/DVD drive lens on an Acer laptop
Initially, CD lasers with a wavelength of 780 nm were used, within the infrared range. For DVDs, the wavelength was reduced to 650 nm (red), and for Blu-ray Disc it was reduced further to 405 nm (violet).
Two main servomechanisms are used: the first maintains the correct distance between lens and disc, ensuring the laser beam is focused to a small spot on the disc; the second moves the head along the disc's radius, keeping the beam on the groove, a continuous spiral data path.
On read-only media (ROM), the groove, made of pits, is pressed onto a flat surface, called land, during manufacturing. Because the depth of the pits is approximately one-quarter to one-sixth of the laser's wavelength, the reflected beam's phase is shifted relative to the incoming reading beam, causing mutually destructive interference and reducing the reflected beam's intensity. This is detected by photodiodes that output electrical signals.
A recorder encodes (or burns) data onto a recordable CD-R, DVD-R, DVD+R, or BD-R disc (called a blank) by selectively heating parts of an organic dye layer with a laser[citation needed]. This changes the reflectivity of the dye, thereby creating marks that can be read like the pits and lands on pressed discs. For recordable discs, the process is permanent and the media can be written to only once. While the reading laser is usually not stronger than 5 mW, the writing laser is considerably more powerful. The higher the writing speed, the less time the laser has to heat a point on the media, so its power has to increase proportionally. A DVD burner's laser often peaks at about 100 mW in continuous wave, and 225 mW pulsed.
For rewritable CD-RW, DVD-RW, DVD+RW, DVD-RAM, or BD-RE media, the laser is used to melt a crystalline metal alloy in the recording layer of the disc. Depending on the amount of power applied, the substance may be allowed to melt back (change the phase back) into crystalline form or left in an amorphous form, enabling marks of varying reflectivity to be created.
Double-sided media may be used, but they are not easily accessed with a standard drive, as they must be physically turned over to access the data on the other side.
Double layer (DL) media have two independent data layers separated by a semi-reflective layer. Both layers are accessible from the same side, but require the optics to change the laser's focus. Traditional single layer (SL) writable media are produced with a spiral groove molded in the protective polycarbonate layer (not in the data recording layer), to lead and synchronize the speed of the recording head. Double-layered writable media have: a first polycarbonate layer with a (shallow) groove, a first data layer, a semi-reflective layer, a second (spacer) polycarbonate layer with another (deep) groove, and a second data layer. The first groove spiral usually starts on the inner edge and extends outwards, while the second groove starts on the outer edge and extends inwards.
Some drives support Hewlett-Packard's LightScribe photothermal printing technology for labeling specially coated discs.
2. Rotational mechanism
Optical drives' rotational mechanism differs considerably from hard disk drives', in that the latter keep a constant angular velocity (CAV), in other words a constant number of revolutions per minute (RPM). With CAV, a higher throughput is generally achievable at an outer disc area, as compared to inner area.
On the other hand, optical drives were developed with the assumption of achieving a constant throughput, in CD drives initially equal to 150 KiB/s. This was important for streaming audio data, which tends to require a constant bit rate. But to ensure that no disc capacity was wasted, the head also had to transfer data at the maximum linear rate at all times, without slowing on the outer rim of the disc. This led to optical drives, until recently, operating with a constant linear velocity (CLV): the spiral groove of the disc passed under the head at a constant speed. The implication of CLV, as opposed to CAV, is that the disc's angular velocity is no longer constant, and the spindle motor needed to be designed to vary its speed from about 200 RPM at the outer rim to 500 RPM at the inner rim.
This initial 150 KiB/s rate is described as a base speed, or "1x". As a result, a 4x drive, for instance, would rotate at 800-2000 RPM while transferring data steadily at 600 KiB/s, which is equal to 4 × 150 KiB/s.
For DVD, the base speed, or "1x", is 1.385 MB/s, equal to 1.32 MiB/s, approximately nine times the CD base speed. For Blu-ray drives, the base speed is 6.74 MB/s, equal to 6.43 MiB/s.
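These base rates make it easy to convert an advertised "Nx" speed into throughput; a short sketch using the figures above:

```python
# Convert "Nx" optical drive speeds into throughput, using the base (1x)
# rates quoted above for each medium.
BASE_RATE_MB_S = {
    "CD":      150 * 1024 / 1e6,   # 150 KiB/s, roughly 0.154 MB/s
    "DVD":     1.385,              # MB/s
    "Blu-ray": 6.74,               # MB/s
}

def throughput(medium, speed_x):
    return speed_x * BASE_RATE_MB_S[medium]

print(f"52x CD:     {throughput('CD', 52):.2f} MB/s")     # ~7.99 MB/s
print(f"16x DVD:    {throughput('DVD', 16):.2f} MB/s")    # ~22.16 MB/s
print(f"8x Blu-ray: {throughput('Blu-ray', 8):.2f} MB/s") # ~53.92 MB/s
```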
There are mechanical limits to how quickly a disc can be spun. Beyond a certain rate of rotation, around 10,000 RPM, centrifugal stress can cause the disc plastic to creep and possibly shatter. At the outer edge of a CD, the 10,000 RPM limit roughly corresponds to 52x speed, but at the inner edge only to 20x. Some drives further lower their maximum read speed to around 40x, on the reasoning that blank discs will be free of structural damage, but discs inserted for reading may not be. Without higher rotational speeds, increased read performance may be attainable by simultaneously reading more than one point of a data groove [3], but drives with such mechanisms are more expensive, less compatible, and very uncommon.
[pic]
The Z-CLV recording strategy is easily visible after burning a DVD-R.
Because keeping a constant transfer rate for the whole disc is not so important for most contemporary CD uses, a pure CLV approach had to be abandoned in order to keep the rotational speed of the disc safely low while maximizing the data rate. Some drives work in a partial CLV (PCLV) scheme, switching from CLV to CAV only when a rotational limit is reached. But switching to CAV requires considerable changes in hardware design, so instead most drives use the zoned constant linear velocity (Z-CLV) scheme. This divides the disc into several zones, each having its own constant linear velocity. A Z-CLV recorder rated at "52X", for example, would write at 20X on the innermost zone and then progressively increase the speed in several discrete steps up to 52X at the outer rim.
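A simplified model of the Z-CLV idea: the writable radius is divided into zones, each with its own constant write speed, stepping up toward the rim. The zone boundaries and speeds below are illustrative, not taken from any real drive.

```python
# Simplified Z-CLV model: the writable radius is split into zones, each
# written at its own constant linear velocity ("Nx"). Boundaries and speeds
# are made up for illustration.
ZONES = [            # (start_radius_mm, end_radius_mm, write_speed_x)
    (25, 30, 20),
    (30, 40, 32),
    (40, 50, 42),
    (50, 58, 52),
]

def write_speed(radius_mm):
    for start, end, speed in ZONES:
        if start <= radius_mm < end:
            return speed
    raise ValueError("radius outside writable area")

for r in (26, 35, 45, 57):
    print(f"radius {r} mm -> writing at {write_speed(r)}x")
```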
3. Loading mechanisms
Current optical drives use either a tray-loading mechanism, where the disc is loaded onto a motorised or manually operated tray, or a slot-loading mechanism, where the disc is slid into a slot and drawn in by motorized rollers. Slot-loading drives have the disadvantage that they cannot usually accept the smaller 80 mm discs or any non-standard sizes; however, the Wii and PlayStation 3 video game consoles have overcome this problem, as they are able to load standard-size DVDs and 80 mm discs in the same slot-loading drive.
A small number of drive models, mostly compact portable units, have a top-loading mechanism where the drive lid is opened upwards and the disc is placed directly onto the spindle. [4] These sometimes have the advantage of using spring-loaded ball bearings to hold the disc in place, minimizing damage to the disc if the drive is moved while it is spun up.
Some early CD-ROM drives used a mechanism where CDs had to be inserted into special cartridges or caddies, somewhat similar in appearance to a 3.5" floppy diskette. This was intended to protect the disc from accidental damage by enclosing it in a tougher plastic casing, but did not gain wide acceptance due to the additional cost and compatibility concerns—such drives would also inconveniently require "bare" discs to be manually inserted into an openable caddy before use.
4. Computer interfaces
[pic]
Digital audio output, analog audio output, and parallel ATA interface.
Most internal drives for personal computers, servers and workstations are designed to fit in a standard 5.25" drive bay and connect to their host via an ATA or SATA interface. Additionally, there may be digital and analog outputs for Red Book audio. The outputs may be connected via a header cable to the sound card or the motherboard. External drives usually have USB or FireWire interfaces. Some portable versions for laptop use power themselves off batteries or off their interface bus.
Drives with SCSI interface exist, but are less common and tend to be more expensive, because of the cost of their interface chipsets and more complex SCSI connectors.
When the optical disc drive was first developed, it was not easy to add to computer systems. Some computers, such as the IBM PS/2, were standardizing on the 3.5" floppy and 3.5" hard disk and did not include a place for a large internal device. Also, IBM PCs and clones at first included only a single ATA drive interface, which by the time the CD-ROM was introduced was already being used to support two hard drives. Early laptops simply had no built-in high-speed interface for supporting an external storage device.
This was solved through several techniques:
• Early sound cards could include a second ATA interface, though it was often limited to supporting a single optical drive and no hard drives. This evolved into the modern second ATA interface included as standard equipment
• A parallel port external drive was developed that connected between a printer and the computer. This was slow but an option for laptops
• A PCMCIA optical drive interface was also developed for laptops
• A SCSI card could be installed in desktop PCs for an external SCSI drive enclosure, though SCSI was typically much more expensive than other options
5. Compatibility
Most optical drives are backwards compatible with their ancestors up to CD, although this is not required by standards.
Compared to a CD's 1.2 mm layer of polycarbonate, a DVD's laser beam only has to penetrate 0.6 mm in order to reach the recording surface. This allows a DVD drive to focus the beam on a smaller spot size and to read smaller pits. The DVD lens supports a different focus for CD or DVD media using the same laser.
|PC Card |PCMCIA |85.6 × 54 × 3.3 mm |Yes |
|CompactFlash I |CF-I |43 × 36 × 3.3 mm |Yes |
|CompactFlash II |CF-II |43 × 36 × 5.5 mm |Yes |
|SmartMedia |SM / SMC |45 × 37 × 0.76 mm |Yes |
|Memory Stick |MS |50.0 × 21.5 × 2.8 mm |No (MagicGate) |
|Memory Stick Duo |MSD |31.0 × 20.0 × 1.6 mm |No (MagicGate) |
|Memory Stick PRO Duo |MSPD |31.0 × 20.0 × 1.6 mm |No (MagicGate) |
|Memory Stick PRO-HG Duo |MSPDX |31.0 × 20.0 × 1.6 mm |No (MagicGate) |
|Memory Stick Micro M2 |M2 |15.0 × 12.5 × 1.2 mm |No (MagicGate) |
|Miniature Card | |37 x 45 x 3.5 mm |Yes |
|Multimedia Card |MMC |32 × 24 × 1.5 mm |Yes |
|Reduced Size Multimedia Card |RS-MMC |16 × 24 × 1.5 mm |Yes |
|MMCmicro Card |MMCmicro |12 × 14 × 1.1 mm |Yes |
|Secure Digital card |SD |32 × 24 × 2.1 mm |No (CPRM) |
|SxS |SxS | | |
|Universal Flash Storage |UFS | | |
|miniSD card |miniSD |21.5 × 20 × 1.4 mm |No (CPRM) |
|microSD card |microSD |15 × 11 × 0.7 mm |No (CPRM) |
|xD-Picture Card |xD |20 × 25 × 1.7 mm |Yes |
|Intelligent Stick |iStick |24 x 18 x 2.8 mm |Yes |
|Serial Flash Module |SFM |45 x 15 mm |Yes |
|µ card |µcard |32 x 24 x 1 mm |Unknown |
|NT Card |NT NT+ |44 x 24 x 2.5 mm |Yes |
[pic]Secure Digital card (SD)
[pic]MiniSD Card
[pic]CompactFlash (CF-I)
[pic]Memory Stick
[pic]MultiMediaCard (MMC)
[pic]SmartMedia
[pic]xD-Picture Card (xD)
3. Overview of all memory card types
Main article: Comparison of memory cards
• PCMCIA ATA Type I Flash Memory Card (PC Card ATA Type I)
o PCMCIA Type II, Type III cards
• CompactFlash Card (Type I), CompactFlash High-Speed
• CompactFlash Type II, CF+(CF2.0), CF3.0
o Microdrive
• MiniCard (Miniature Card) (max 64 MB (64 MiB))
• SmartMedia Card (SSFDC) (max 128 MB) (3.3 V,5 V)
• xD-Picture Card, xD-Picture Card Type M
• Memory Stick, MagicGate Memory Stick (max 128 MB); Memory Stick Select, MagicGate Memory Stick Select ("Select" means: 2x128 MB with A/B switch)
• SecureMMC
• Secure Digital (SD Card), Secure Digital High-Speed, Secure Digital Plus/Xtra/etc (SD with USB connector)
o miniSD card
o microSD card (aka Transflash, T-Flash)
o SDHC
• MU-Flash (Mu-Card) (Mu-Card Alliance of OMIA)
• C-Flash
• SIM card (Subscriber Identity Module)
• Smart card (ISO/IEC 7810, ISO/IEC 7816 card standards, etc.)
• UFC (USB FlashCard) [1] (uses USB)
• FISH Universal Transportable Memory Card Standard (uses USB)
• Disk memory cards:
o Clik! (PocketZip), (40 MB PocketZip)
o Floppy disk (32MB, LS120 and LS240, 2-inch, 3.5-inch, etc.)
• Intelligent Stick (iStick, a USB-based flash memory card with MMS)
• SxS (S-by-S) memory card, a memory card specification developed by SanDisk and Sony. SxS complies with the ExpressCard industry standard. [2]
• NexFlash (Winbond) Serial Flash Module (SFM) cards, in 1, 2 and 4 Mbit sizes.
Floppy disk
[pic]
A floppy disk is a data storage medium that is composed of a disk of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell.
Floppy disks are read and written by a floppy disk drive or FDD, the initials of which should not be confused with "fixed disk drive", which is another term for a (nonremovable) type of hard disk drive. Invented by the American information technology company IBM, floppy disks in 8 inch, 5¼ inch and 3½ inch forms enjoyed nearly three decades as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s. While floppy disk drives still have some limited uses, especially with legacy industrial computer equipment, [2] they have now been superseded by USB flash drives, external hard disk drives, CDs, DVDs, and memory cards.
[pic]
8-inch, 5¼-inch, and 3½-inch floppy disks
Contents:
1. Usage
2. Disk formats
3. History
4. Structure
5. Legacy
6. Compatibility
7. More on floppy disk formats
8. Usability
9. See also
10. References
11. Bibliography
12. External links
|Floppy Disk Drives |
|[pic] |
|8-inch, 5¼-inch (full height), and 3½-inch drives |
|Date invented |1969 (8-inch), |
| |1976 (5¼-inch), |
| |1982 (3½-inch) |
|Invented by |IBM team led by David L. Noble [1] |
|Connects to |Controller via: |
| |Cable |
1. Usage
The flexible magnetic disk, or diskette, revolutionized computer disk storage in the 1970s. Diskettes, often called floppy disks or floppies, became ubiquitous in the 1980s and 1990s in their use with personal computers and home computers to distribute software, transfer data, and create backups.
Before hard disks became affordable, floppy disks were often also used to store a computer's operating system (OS), in addition to application software and data. Most home computers had a primary OS (and often BASIC) stored permanently in on-board ROM, with the option of loading a more advanced disk operating system from a floppy, whether it be a proprietary system, CP/M, or later, DOS.
By the early 1990s, the increasing size of software meant that many programs demanded multiple diskettes; a large package like Windows or Adobe Photoshop could use a dozen disks or more. By 1996, there were an estimated 5 billion floppy disks in use. [3] Toward the end of the 1990s, distribution of larger packages therefore gradually switched to CD-ROM (or online distribution for smaller programs).
Mechanically incompatible higher-density formats were introduced (e.g. the Iomega Zip drive) and were briefly popular, but adoption was limited by the competition between proprietary formats, and the need to buy expensive drives for computers where the media would be used. In some cases, such as with the Zip drive, the failure in market penetration was exacerbated by the release of newer higher-capacity versions of the drive and media that were not backward compatible with the original drives, thus fragmenting the user base between new users and early adopters who were unwilling to pay for an upgrade so soon. A chicken or the egg scenario ensued, with consumers wary of making costly investments into unproven and rapidly changing technologies, with the result that none of the technologies were able to prove themselves and stabilize their market presence. Soon, inexpensive recordable CDs with even greater capacity, which were also compatible with an existing infrastructure of CD-ROM drives, made the new floppy technologies redundant. The last advantage of floppy disks, reusability, was countered by re-writable CDs. Later, advancements in flash-based devices and widespread adoption of the USB interface provided another alternative that, in turn, made even optical storage obsolete for some purposes.
An attempt to continue the traditional diskette was the SuperDisk (LS-120) in the late 1990s, with a capacity of 120 MB [4] which was backward compatible with standard 3½-inch floppies. For some time, PC manufacturers were reluctant to remove the floppy drive because many IT departments appreciated a built-in file transfer mechanism (dubbed Sneakernet) that always worked and required no device driver to operate properly. However, manufacturers and retailers have progressively reduced the availability of computers fitted with floppy drives and of the disks themselves. Widespread built-in operating system support for USB flash drives, and even BIOS boot support for such devices on most modern systems, has helped this process along.
[pic]
Imation USB Floppy Drive, model 01946. An external drive that accepts high-density disks.
External USB-based floppy disk drives are available for computers without floppy drives, and they work on any machine that supports USB Mass Storage devices. Many modern systems even provide firmware support for booting from a USB-attached floppy drive. However, these drives can handle only the common 80-track MFM format, which means that formats used by the C64, Amiga, Macintosh, etc. cannot be read by these devices.
2. Disk formats
Floppy disk sizes are almost universally referred to in imperial measurements, even in countries where the metric system is standard, and even when the size is in fact defined in metric (for instance the 3½-inch floppy, which is actually 90 mm). Formatted capacities are generally given in kilobytes of 1024 bytes (a sector generally holds 512 bytes), written as "KB". For more information see the table below.
|Historical sequence of floppy disk formats, including the last format to be generally adopted, the "High Density" 3½-inch HD floppy, introduced 1987. |
|Disk format |Year introduced |Formatted storage capacity in KB (1024 bytes) if not stated |Marketed capacity¹ |
|8-inch - IBM 23FD (read-only) |1971 |79.7 [5] |? |
|8-inch - Memorex 650 |1972 |175 kB [6] |1.5 megabit [6] (unformatted) |
|8-inch - SSSD, IBM 33FD / Shugart 901 |1973 |237.25 [7] [8] |3.1 Mbits unformatted |
|8-inch - DSSD, IBM 43FD / Shugart 850 |1976 |500.5 [9] |6.2 Mbits unformatted |
|5¼-inch (35 track), Shugart SA 400 |1976 [10] |89.6 kB [11] |110 kB |
|8-inch DSDD, IBM 53FD / Shugart 850 |1977 |980 (CP/M) - 1200 (MS-DOS FAT) |1.2 MB |
|5¼-inch DD |1978 |360 or 800 |360 KB |
|5¼-inch, Apple Disk II (Pre-DOS 3.3) |1978 |113.75 (256 byte sectors, 13 sectors/track, 35 tracks) |113 KB |
|5¼-inch, Apple Disk II (DOS 3.3) |1980 |140 (256 byte sectors, 16 sectors/track, 35 tracks) |140 KB |
|3½-inch, HP single sided |1982 |280 |264 kB |
|3-inch |1982 [12] [13] |360[citation needed] |125 kB (SS/SD), 500 kB (DS/DD) [13] |
|3½-inch (DD at release) |1983 [14] |720 (400 SS, 800 DS on Macintosh, 880 DS on Amiga) |1 MB |
|5¼-inch QD | |720 |720 KB |
|5¼-inch HD |1982, YE Data YD380 [15] |1,182,720 bytes |1.2 MB |
|3-inch DD |1984[citation needed] |720[citation needed] |? |
|3-inch, Mitsumi Quick Disk |1985 |128 to 256 |? |
|2-inch |1985[citation needed] |720[citation needed] |? |
|2½-inch |1986 [16] |? |? |
|5¼-inch Perpendicular |1986 [16] |10 MB |? |
|3½-inch HD |1987 |1440 |1.44 MB (2.0 MB unformatted) |
|3½-inch ED |1987 [17] |2880 |2.88 MB |
|3½-inch Floptical (LS) |1991 |21000 |21 MB |
|3½-inch LS-120 |1996 |120.375 MB |120 MB |
|3½-inch LS-240 |1997 |240.75 MB |240 MB |
|3½-inch HiFD |1998/99 |150/200 MB[citation needed] |150/200 MB |
Abbreviations: DD = Double Density; QD = Quad Density; HD = High Density; ED = Extended Density; LS = Laser Servo; HiFD = High capacity Floppy Disk; SS = Single Sided; DS = Double Sided.
¹ The formatted capacities of floppy disks frequently corresponded only vaguely to their capacities as marketed by drive and media companies, due to differences between formatted and unformatted capacities and also due to the non-standard use of binary prefixes in labeling and advertising floppy media. The erroneous "1.44 MB" value for the 3½-inch HD floppies is the most widely known example. See Ultimate capacity and speed.
Dates and capacities marked "?" are of unclear origin and need source information. Other listed capacities refer to:
Formatted storage capacity is the total size of all sectors on the disk. For 8-inch disks, see the table of IBM 8-inch floppy formats; note that spare, hidden and otherwise reserved sectors are included in this number. For 5¼- and 3½-inch disks, the capacities quoted are from subsystem or system vendor statements.
Marketed capacity is the capacity, typically unformatted, stated by the original media OEM vendor or, in the case of IBM media, the first OEM thereafter. Other formats may get more or less capacity from the same drives and disks.
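As a cross-check on the formatted figures above, capacity follows directly from the disk geometry (tracks × sides × sectors per track × bytes per sector). A small sketch of that arithmetic for the two most common 3½-inch formats, using their standard published geometries:

```python
# Formatted capacity from disk geometry: tracks x sides x sectors x bytes/sector.
def formatted_kb(tracks, sides, sectors_per_track, bytes_per_sector=512):
    return tracks * sides * sectors_per_track * bytes_per_sector / 1024

hd_35 = formatted_kb(80, 2, 18)   # standard 3.5-inch HD geometry
dd_35 = formatted_kb(80, 2, 9)    # standard 3.5-inch DD geometry
print(hd_35, dd_35)               # 1440.0 and 720.0 KB, matching the table
# Marketing relabelled 1440 KB (1,474,560 bytes) as "1.44 MB" by mixing a
# binary kilobyte with a decimal "mega" prefix:
print(f'"{80 * 2 * 18 * 512 / 1000 / 1024:.2f} MB"')   # prints "1.44 MB"
```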
3. History
Main article: History of the floppy disk
[pic]
8-inch disk drive with diskette (3½-inch disk for comparison)
The earliest floppy disks, invented at IBM, were 8 inches in diameter. They became commercially available in 1971. [1] [18] Disks in this form factor were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. [19]
[pic]
A BASF double-density 5¼-inch diskette.
In 1976 Shugart Associates introduced the first 5¼-inch FDD and associated media. By 1978 there were more than 10 manufacturers producing 5¼-inch FDDs, in competing disk formats: hard or soft sectored with various encoding schemes such as FM, MFM and GCR. The 5¼-inch formats quickly displaced the 8-inch for most applications, and the 5¼-inch hard-sectored disk format eventually disappeared.
In 1984, IBM introduced the 1.2 MB double-sided floppy disk along with its AT model. Although often used as backup storage, the high-density floppy was not often used by software manufacturers for interchangeability. In 1986, IBM began to use the 720 kB double-density 3.5" microfloppy disk on its Convertible laptop computer. It introduced the so-called "1.44 MB" high-density version with the PS/2 line. These disk drives could be added to existing older model PCs. In 1988 IBM introduced a drive for 2.88 MB "DSED" diskettes in its top-of-the-line PS/2 models; it was a commercial failure.
Throughout the early 1980s the limitations of the 5¼-inch format were starting to become clear. Originally designed to be smaller and more practical than the 8-inch format, the 5¼-inch system was itself too large, and as the quality of the recording media grew, the same amount of data could be placed on a smaller surface.[citation needed]
A number of solutions were developed, with drives at 2-inch, 2½-inch, 3-inch and 3½-inch (50, 60, 75 and 90 mm) all being offered by various companies.[citation needed] They all shared a number of advantages over the older format, including a small form factor and a rigid case with a slidable write protection tab. The almost-universal use of the 5¼-inch format made it very difficult for any of these new formats to gain any significant market share.[citation needed]
[pic]
3½-inch, high-density diskette affixed with a blank adhesive label. The diskette's write-protection tab is deactivated.
Sony introduced its own small-format 90.0 mm × 94.0 mm disk; however, this format suffered a fate similar to the other new formats: the 5¼-inch format simply had too much market share. A variant on the Sony design, introduced in 1982 by a large number of manufacturers, was then rapidly adopted. By 1988 the 3½-inch was outselling the 5¼-inch. [20]
By the end of the 1980s, the 5¼-inch disks had been superseded by the 3½-inch disks. Though 5¼-inch drives were still available, as were disks, they faded in popularity as the 1990s began. By the mid-1990s the 5¼-inch drives had virtually disappeared as the 3½-inch disk became the predominant floppy disk. One of the chief advantages of the 3½-inch disk, besides its smaller size which allows it to fit in a shirt pocket, is its plastic case, which gives it good protection from dust, liquids, fingerprints, scratches, sunlight, warping, and other environmental risks.
3. 1. Standard floppy replacements
Through the early 1990s a number of attempts were made by various companies to introduce newer floppy-like formats based on the now-universal 3½-inch physical format. Most of these systems provided the ability to read and write standard DD and HD disks, while at the same time introducing a much higher-capacity format as well. There were a number of times where it was felt that the existing floppy was just about to be replaced by one of these newer devices, but a variety of problems ensured this never took place. None of these ever reached the point where it could be assumed that every current PC would have one, and they have now largely been replaced by CD and DVD burners and USB flash drives.
The main technological change was the addition of tracking information on the disk surface to allow the read/write heads to be positioned more accurately. Normal disks have no such information, so the drives use the tracks themselves with a feedback loop in order to center themselves. The newer systems generally used marks burned onto the surface of the disk to find the tracks, allowing the track width to be greatly reduced.
3. 1. 1. Flextra
As early as 1988, Brier Technology introduced the Flextra BR 3020, which boasted 21.4 MB (the marketed capacity; the true size was 21,040 KB, [21] 25 MB unformatted). Later the same year it introduced the BR3225, which doubled the capacity. This model could also read standard 3½-inch disks.
It used 3½-inch standard disks which had servo information embedded on them for use with the Twin Tier Tracking technology.
3. 1. 2. Original Floptical
In 1991, Insite Peripherals introduced the "Floptical", which used an infrared LED to position the heads over marks in the disk surface. The original drive stored 21 MB, while also reading and writing standard DD and HD floppies. To improve data transfer speeds and make the high-capacity drive usefully quick, the drives were attached to the system using a SCSI connector instead of the normal floppy controller. This made them appear to the operating system as a hard drive instead of a floppy, meaning that most PCs were unable to boot from them. This again limited adoption.
Insite licensed its technology to a number of companies, which introduced compatible devices as well as even larger-capacity formats. The most popular of these, by far, was the LS-120, mentioned below.
3. 1. 3. Zip drive
In 1994, Iomega introduced the Zip drive. Although it was not true to the 3½-inch form factor (hence not compatible with the standard 1.44 MB floppies), it still became the most popular of the "super floppies". It boasted 100 MB, later 250 MB, and then 750 MB of storage. Though Zip drives gained in popularity for several years they never reached the same market penetration as standard floppy drives, since only some new computers were sold with the drives. Eventually the falling prices of CD-R and CD-RW media and USB flash drives, along with notorious hardware failures (the so-called "click of death"), reduced the popularity of the Zip drive.
A major reason for the failure of Zip drives was their higher pricing, partly due to royalties that third-party manufacturers of drives and disks had to pay. Hardware vendors such as Hewlett-Packard, Dell and Compaq nevertheless promoted them heavily.[original research?] Zip media were popular for their storage density and drive speed, but were always overshadowed by the price.
3. 1. 4. LS-120
Announced in 1995, the "SuperDisk" drive, often seen with the brand names Matsushita (Panasonic) and Imation, had an initial capacity of 120 MB (120.375 MB) [22] using even higher density "LS-120" disks.
It was upgraded (as the "LS-240") to 240 MB (240.75 MB). Not only could the drive read and write 1440 kB disks, but the last versions of the drives could write 32 MB onto a normal 1440 kB disk (see note below). Unfortunately, popular opinion held the SuperDisk media to be quite unreliable,[citation needed] though no more so than the Zip drive and SyQuest Technology offerings of the same period; there were also many reported problems moving standard floppies between LS-120 drives and normal floppy drives.[citation needed] This belief, true or otherwise, crippled adoption. The BIOS of many motherboards even to this day supports LS-120 drives as boot options.
LS-120 compatible drives were available as options on many computers, including desktop and notebook computers from Compaq Computer Corporation. In the case of the Compaq notebooks, the LS-120 drive replaced the standard floppy drive in a multibay configuration.
3. 1. 5. Sony HiFD
Sony introduced its own floptical-like system in 1997 as the "150 MB Sony HiFD" which could hold 150 megabytes (157.3 actual megabytes) of data. Although by this time the LS-120 had already garnered some market penetration, industry observers nevertheless confidently predicted the HiFD would be the real standard-floppy-killer and finally replace standard floppies in all machines.
After only a short time on the market the product was pulled, as it was discovered that there were a number of performance and reliability problems that made the system essentially unusable. Sony then re-engineered the device for a quick re-release, but ultimately delayed it well into 1998 and increased the capacity to "200 MB" (approximately 210 actual megabytes) in the process. By this point the market was already saturated by the Zip disk, so it never gained much market share.
3. 1. 6. Caleb Technology’s UHD144
The UHD144 drive surfaced early in 1998 as "the it drive", and provided 144 MB of storage while also being compatible with standard 1.44 MB floppies. The drive was slower than its competitors, but the media were cheaper, running about US$8 at introduction and US$5 soon after.
4. Structure
[pic]
A user inserts the floppy disk, medium opening first, into a 5¼-inch floppy disk drive (pictured, an internal model) and moves the lever down (by twisting on this model) to close the drive and engage the motor and heads with the disk.
The 5¼-inch disk had a large circular hole in the center for the spindle of the drive and a small oval aperture in both sides of the plastic to allow the heads of the drive to read and write the data. The magnetic medium could be spun by rotating it from the middle hole. A small notch on the right hand side of the disk would identify that the disk was writable, detected by a mechanical switch or photo transistor above it. If this notch was not present, the disk was treated as read-only. (Punch devices were sold to convert read-only disks to writable ones. Tape could be used over the notch to effect protection of writable disks from unwanted writing.)
Another LED/photo-transistor pair located near the center of the disk could detect a small hole once per rotation, called the index hole, in the magnetic disk. It was used to detect the start of each track, and whether or not the disk rotated at the correct speed; some operating systems, such as Apple DOS, did not use index sync, and often the drives designed for such systems lacked the index hole sensor. Disks of this type were said to be soft sector disks. Very early 8-inch and 5¼-inch disks also had physical holes for each sector, and were termed hard sector disks.
Inside the disk were two layers of fabric designed to reduce friction between the medium and the outer casing, with the medium sandwiched in the middle. The outer casing was usually a one-part sheet, folded double with flaps glued or spot-welded together. A catch was lowered into position in front of the drive to prevent the disk from emerging, as well as to raise or lower the spindle (and, in two-sided drives, the upper read/write head).
The 8-inch disk was very similar in structure to the 5¼-inch disk, with the exception that the read-only logic was in reverse: the slot on the side had to be taped over to allow writing.
The 3½-inch disk is made of two pieces of rigid plastic, with the fabric-medium-fabric sandwich in the middle to remove dust and dirt. The front has only a label and a small aperture for reading and writing data, protected by a spring-loaded metal or plastic cover, which is pushed back on entry into the drive.
[pic]
The 5¼-inch 1.2 MB floppy disk drive
[pic]
The 3½-inch 2.88 MB floppy disk drive
Newer 5¼-inch drives and all 3½-inch drives automatically engage when the user inserts a disk, and disengage and eject with the press of the eject button. On Apple Macintosh computers with built-in floppy drives, the disk is ejected by a motor (similar to a VCR) instead of manually; there is no eject button. The disk's desktop icon is dragged onto the Trash icon to eject a disk.
The reverse side has a similar covered aperture, as well as a hole to allow the spindle to connect to a metal plate glued to the medium. Two holes, bottom left and right, indicate the write-protect status and high-density format respectively: a hole means protected or high density, and a covered gap means write-enabled or low density. (Incidentally, the write-protect and high-density holes on a 3½-inch disk are spaced exactly as far apart as the holes in punched A4 paper (8 cm), allowing write-protected floppies to be clipped into standard ring binders.) A notch at top right ensures that the disk is inserted correctly, and an arrow at top left indicates the direction of insertion. The drive usually has a button that, when pressed, springs the disk out with varying degrees of force. Some will barely make it out of the disk drive; others will shoot out at a fairly high speed. In the majority of drives, the ejection force is provided by the spring that holds the cover shut, so the ejection speed depends on this spring. In PC-type machines, a floppy disk can be inserted or ejected manually at any time (evoking an error message or even lost data in some cases), as the drive is not continuously monitored for status, so programs can make assumptions that do not match the actual status (e.g., that disk 123 is still in the drive and has not been altered by any other agency).
[pic]
A 3″ floppy disk used on Amstrad CPC machines
With Apple Macintosh computers, disk drives are continuously monitored by the OS; a disk inserted is automatically searched for content, and one is ejected only when the software agrees the disk should be ejected. This kind of disk drive (starting with the slim "Twiggy" drives of the late Apple "Lisa") does not have an eject button, but uses a motorized mechanism to eject disks; this action is triggered by the OS software (e.g., the user dragged the "drive" icon to the "trash can" icon). Should this not work (as in the case of a power failure or drive malfunction), one can insert a straightened paper clip into a small hole at the drive's front, thereby forcing the disk to eject (similar to that found on CD-DVD drives). External 3.5" floppy drives from Apple were equipped with eject buttons. The button was ignored when the drive was plugged into a Mac, but would eject the disk if the drive was used with an Apple II, as ProDOS did not support or implement software-controlled eject. Some other computer designs (such as the Commodore Amiga) monitor for a new disk continuously but still have push-button eject mechanisms.
The 3-inch disk, widely used on Amstrad CPC machines, bears much similarity to the 3½-inch type, with some unique and somewhat curious features. One example is the rectangular-shaped plastic casing, almost taller than a 3½-inch disk, but narrower, and more than twice as thick, almost the size of a standard compact audio cassette. This made the disk look more like a greatly oversized present day memory card or a standard PC card notebook expansion card rather than a floppy disk. Despite the size, the actual 3-inch magnetic-coated disk occupied less than 50% of the space inside the casing, the rest being used by the complex protection and sealing mechanisms implemented on the disks. Such mechanisms were largely responsible for the thickness, length and high costs of the 3-inch disks. On the Amstrad machines the disks were typically flipped over to use both sides, as opposed to being truly double-sided. Double-sided mechanisms were available but rare.
5. Legacy
The advent of other portable storage options, such as USB storage devices, SD Cards, recordable CDs and DVDs, and the rise of multi-megapixel digital photography encouraged the creation and use of files larger than most 3½-inch disks could hold. Additionally, the increasing availability of broadband and wireless Internet connections decreased the overall utility of removable storage devices (humorously named sneakernet).
In 1991, Commodore introduced the CDTV, which used a CD-ROM drive in place of the floppy drive. The majority of AmigaOS was stored in read-only memory, making it easier to boot from a CD-ROM rather than floppy.
In 1998, Apple introduced the iMac which had no floppy drive. This made USB-connected floppy drives a popular accessory for the early iMacs, since the basic model of iMac at the time had only a CD-ROM drive, giving users no easy access to writable removable media. This transition away from standard floppies was relatively easy for Apple, since all Macintosh models that were originally designed to use a CD-ROM drive were able to boot and install their operating system from CD-ROM early on.
In February 2003, Dell, Inc. announced that they would no longer include standard floppy drives on their Dell Dimension home computers as standard equipment, although they are available as a selectable option [23] [24] for around US$20 and can be purchased as an aftermarket OEM add-on anywhere from US$5-25.
On 29 January 2007 the British computer retail chain PC World issued a statement saying that only 2% of the computers that they sold contained a built-in floppy disk drive and, once present stocks were exhausted, no more standard floppies would be sold. [25] [26] [27]
In 2009, Hewlett-Packard stopped supplying standard floppy drives on business desktops.[citation needed]
Floppies are still used for emergency boots in aging systems which lack support for other bootable media. They can also be used for BIOS updates since most BIOS and firmware programs can still be executed from bootable floppy disks. Furthermore, if a BIOS update fails or becomes corrupted somehow, floppy drives can be used to perform a recovery. The music and theatre industries still use equipment (i.e. synthesizers, samplers, drum machines, sequencers, and lighting consoles) that requires standard floppy disks as a storage medium.
5. 1. Use as icon for saving
[pic]
Screenshot of an application toolbar, highlighting the Save icon, a floppy disk.
For more than two decades, the floppy disk was the primary external writable storage device used. Also, in a non-network environment, floppies were once the primary means of transferring data between computers. Floppy disks are also, unlike hard disks, handled and seen; even a novice user can identify a floppy disk. Because of all these factors, the image of the floppy disk has become a metaphor for saving data, and the floppy disk symbol is often seen in programs on buttons and other user interface elements related to saving files, even though such disks are obsolete. [28]
6. Compatibility
In general, different physical sizes of floppy disks are incompatible by definition, and a disk can be loaded only in a drive of the correct size. Some drives with both 3½-inch and 5¼-inch slots were available and were popular during the transition period between the two sizes.
However, there are many more subtle incompatibilities within each form factor. For example, all but the earliest models of Apple Macintosh computers that have built-in floppy drives included a disk controller that can read, write and format IBM PC-format 3½-inch diskettes. However, few IBM-compatible computers use floppy disk drives that can read or write disks in Apple's variable speed format. For details on this, see the section More on floppy disk formats.
6. 1. 3½-inch floppy disk
Within the world of IBM-compatible computers, the three densities of 3½-inch floppy disks are partially compatible. Higher density drives can read, write and even format lower density media without problems, provided the correct media are used for the density selected. However, if a diskette is somehow formatted at the wrong density, there is a substantial risk of data loss due to the mismatch between the magnetic oxide and the write current used by the drive head. A fresh diskette manufactured for high density use can in theory be formatted as double density, but only if no information has ever been written to it in high density mode (HD diskettes pre-formatted at the factory are therefore unsuitable); a high density recording is magnetically stronger and will "overrule" the weaker lower density recording, remaining on the diskette and causing problems. In practice, some people use downformatted (ED to HD, HD to DD) or even overformatted (DD to HD) disks without apparent problems. Doing so always carries a data risk, so the benefits (e.g. increased space or interoperability) should be weighed against the risks (data loss, permanent disk damage).
The holes on the right side of a 3½-inch disk can be altered to 'fool' some disk drives or operating systems (others, such as the Acorn Archimedes, simply ignore the holes) into treating the disk as a higher or lower density one, for backward compatibility or economic reasons[citation needed]. Possible modifications include:
• Drilling or cutting an extra hole into the lower-right side of a 3½-inch DD disk (symmetrical to the write-protect hole) in order to format the DD disk as an HD one. This was a popular practice during the early 1990s, when most users were switching from DD to HD and some "converted" some or all of their DD disks to HD to gain an extra "free" 720 KB of disk space. A special hole punch was even sold to make this extra (square) hole easily.
• Taping or otherwise covering the bottom-right hole on an HD 3½-inch disk allows it to be 'downgraded' to DD format. This may be done for compatibility with older computers, drives or devices that use DD floppies, such as some electronic keyboard instruments and samplers [29], since factory-made DD disks became hard to find after the mid-1990s. See the section "Compatibility" above.
o Note: By default, many older HD drives will recognize ED disks as DD ones, since they lack the HD-specific holes and the drives lack the sensors to detect the ED-specific hole. Most DD drives will also handle ED (and some even HD) disks as DD ones.[citation needed]
• Similarly, an HD-like hole can be drilled (under the ED one) into an ED (2880 kB) disk to 'downgrade' it to HD (1440 kB) format; ED disks that are unusable for lack of an ED drive can then be used as normal HD disks.[citation needed]
• Although such a format was hardly ever officially supported on any system, it is possible to "force" a 3½-inch floppy disk drive to be recognized by the system as a 5¼-inch 360 kB or 1200 kB drive (on PCs and compatibles, simply by changing the CMOS BIOS settings) and thus format and read non-standard disk formats, such as a double-sided 360 kB 3½-inch disk. Possible applications include data exchange with obsolete CP/M systems, for example an Amstrad CPC.[citation needed]
6. 2. 5¼-inch floppy disk
The situation was even more complex with 5¼-inch diskettes. The head gap of an 80-track high-density (1.2 MB in the MFM format) drive is shorter than that of a 40-track double-density (360 kB) drive, but such a drive will format, read and write 40-track diskettes with apparent success provided the controller supports double stepping (or the manufacturer fitted a switch to do double stepping in hardware). A blank 40-track disk formatted and written on an 80-track drive can be taken to a 40-track drive without problems; similarly, a disk formatted on a 40-track drive can be used on an 80-track drive. But a disk written on a 40-track drive and then updated on an 80-track drive becomes permanently unreadable on any 360 kB drive, owing to the incompatibility of the track widths (special, very slow programs could be used to overcome this problem). There are several other problematic scenarios.
Before the problems with head and track size arose, there was a period when simply figuring out which side of a "single sided" diskette was the usable one was a problem. Both Radio Shack and Apple used 180 kB single-sided 5¼-inch disks, and both sold disks labeled "single sided" that were certified for use on only one side, even though they were in fact coated with magnetic material on both sides. The irony was that the disks would work on both Radio Shack and Apple machines, yet the Radio Shack TRS-80 Model I computers used one side and the Apple II machines used the other, regardless of whether software was available that could make sense of the other format.
[pic]
A disk notcher used to convert single-sided 5.25-inch diskettes to double-sided.
For quite a while in the 1980s, users could purchase a special tool called a disk notcher which would allow them to cut a second write-unprotect notch in these diskettes and thus use them as "flippies" (either inserted as intended or upside down): both sides could now be written on and thereby the data storage capacity was doubled. Other users made do with a steady hand and a hole punch or scissors. For re-protecting a disk side, one would simply place a piece of opaque tape over the notch or hole in question. These "flippy disk procedures" were followed by owners of practically every home-computer with single sided disk drives. Proper disk labels became quite important for such users. Flippies were eventually adopted by some manufacturers, with a few programs being sold in this medium (they were also widely used for software distribution on systems that could be used with both 40 track and 80 track drives but lacked the software to read a 40 track disk in an 80 track drive). The practice eventually faded with the increased use of double-sided drives capable of accessing both sides of the disk without the need for flipping.
7. More on floppy disk formats
7. 1. Efficiency of disk space usage
In general, data is written to floppy disks in sectors (angular blocks of the disk) and tracks (concentric rings at a constant radius). For example, the HD format of 3½-inch floppy disks uses 512 bytes per sector, 18 sectors per track, 80 tracks per side and two sides, for a total of 1,474,560 bytes per disk. Some disk controllers can vary these parameters at the user's request, increasing the amount of storage on the disk, although such formats may not be readable on machines with other controllers; for example, Microsoft applications were often distributed on Distribution Media Format (DMF) disks, a hack that allowed 1.68 MB (1680 kB) to be stored on a 3½-inch floppy by formatting it with 21 sectors instead of 18, while the disks were still properly recognized by a standard controller. On the IBM PC, and also on the MSX, Atari ST, Amstrad CPC, and most other microcomputer platforms, disks are written using a Constant Angular Velocity (CAV), constant-sector-capacity format.[citation needed] This means that the disk spins at a constant speed and every sector holds the same amount of information, regardless of its radial location.
However, this is not the most efficient way to use the disk surface, even with available drive electronics.[citation needed] Because the sectors have a constant angular size, the 512 bytes in each sector are packed into a smaller length near the disk's center than nearer the disk's edge. A better technique would be to increase the number of sectors/track toward the outer edge of the disk, from 18 to 30 for instance, thereby keeping constant the amount of physical disk space used for storing each 512 byte sector (see zone bit recording). Apple implemented this solution in the early Macintosh computers by spinning the disk slower when the head was at the edge while keeping the data rate the same, allowing them to store 400 kB per side, amounting to an extra 160 kB on a double-sided disk.[citation needed] This higher capacity came with a serious disadvantage, however: the format required a special drive mechanism and control circuitry not used by other manufacturers, meaning that Mac disks could not be read on any other computers. Apple eventually gave up on the format and used constant angular velocity with HD floppy disks on their later machines; these drives were still unique to Apple as they still supported the older variable-speed format.
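The capacity figures quoted above follow directly from the disk geometry. The short Python sketch below reproduces them using the sector and track counts given in this section (a minimal illustration, not drive-level code):

    # Raw capacity of a floppy format = bytes/sector x sectors/track x tracks/side x sides.
    def raw_capacity(bytes_per_sector, sectors_per_track, tracks_per_side, sides=2):
        return bytes_per_sector * sectors_per_track * tracks_per_side * sides

    std_hd = raw_capacity(512, 18, 80)   # standard 3.5-inch HD layout
    dmf    = raw_capacity(512, 21, 80)   # Microsoft DMF: 21 sectors per track instead of 18

    print(std_hd)                        # 1474560 bytes
    print(dmf, dmf // 1024)              # 1720320 bytes = 1680 KB, the "1.68 MB" figure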
7. 2. Commodore 64/128
Commodore started its tradition of special disk formats with the 5¼-inch disk drives accompanying its PET/CBM, VIC-20 and Commodore 64 home computers, such as the 1540 and 1541 drives used with the latter two machines. The standard Commodore Group Code Recording (GCR) scheme used in the 1541 and compatibles employed four different data rates depending upon track position (see zone bit recording). Tracks 1 to 17 had 21 sectors, 18 to 24 had 19, 25 to 30 had 18, and 31 to 35 had 17, for a disk capacity of about 170 kB (170.75 KB). Unique among personal computer architectures, the operating system on the computer itself was unaware of the details of the disk and filesystem; disk operations were handled by Commodore DOS instead, which was implemented with an extra MOS-6502 processor on the disk drive. Many programs, such as GEOS, bypassed Commodore's DOS completely and replaced it with custom "fast loading" routines running in the 1541 drive.
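As a quick check of the zone layout just described, the sketch below sums the sectors in each speed zone of one disk side. The 256-byte sector size is assumed here (it is the size Commodore DOS used) rather than stated in the text:

    # Zone layout of a 1541-formatted disk side, as described above.
    zones = [
        (17, 21),  # tracks 1-17: 21 sectors each
        (7, 19),   # tracks 18-24: 19 sectors each
        (6, 18),   # tracks 25-30: 18 sectors each
        (5, 17),   # tracks 31-35: 17 sectors each
    ]
    total_sectors = sum(tracks * sectors for tracks, sectors in zones)
    bytes_per_sector = 256                # assumed: sector size used by Commodore DOS

    capacity = total_sectors * bytes_per_sector
    print(total_sectors)                  # 683 sectors
    print(capacity, capacity / 1024)      # 174848 bytes, i.e. 170.75 KB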
Eventually Commodore gave in to disk format standardization, and made its last 5¼-inch drives, the 1570 and 1571, compatible with Modified Frequency Modulation (MFM), to enable the Commodore 128 to work with CP/M disks from several vendors. Equipped with one of these drives, the C128 was able to access both C64 and CP/M disks, as it needed to, as well as MS-DOS disks (using third-party software), which was a crucial feature for some office work.
Commodore also offered its 8-bit machines a 3½-inch 800 kByte disk format with its 1581 disk drive, which used only MFM.
The GEOS operating system used a disk format that was largely identical to the Commodore DOS format with a few minor extensions; while generally compatible with standard Commodore disks, certain disk maintenance operations could corrupt the filesystem without proper supervision from the GEOS Kernel.
7. 3. Atari 8-bit line
The combination of DOS and hardware (the 810, 1050 and XF551 disk drives) used for Atari 8-bit floppies allowed sectors numbered from 1 to 720. The DOS 2.0 disk bitmap, which records sector allocation, counts sectors from 0 to 719, so sector 720 could not be written to by the DOS. Some companies used a copy protection scheme in which "hidden" data was put in sector 720, where it could not be copied through the DOS copy option. Another, more common early copy-protection scheme simply did not record important sectors as "used" in the FAT, so the DOS Utility Package (DUP) did not duplicate them. All of these early techniques were thwarted by the first program that simply duplicated all 720 sectors.
Later DOS versions (3.0 and the later 2.5) and third-party DOS systems (e.g. from OSS) accepted (and formatted) disks with up to 960 and 1020 sectors, giving 127 KB of storage per disk side on drives equipped with double-density heads (i.e. not the Atari 810), versus the previous 90 KB. That unusual 127 KB format still allowed sectors 1-720 to be read on a single-density 810 disk drive, and was introduced by Atari with the 1050 drive alongside DOS 3.0 in 1983.
A true 180K double-density Atari floppy format used 128-byte sectors for sectors 1-3, then 256-byte sectors for 4-720. The first three sectors typically contain boot code used by the onboard ROM OS; it is up to the resulting boot program (such as SpartaDOS) to recognize the density of the formatted disk structure. While this 180K format was developed by Atari for their DOS 2.0D and their (canceled) Atari 815 Floppy Drive, that double-density DOS was never widely released and the format was generally used by third-party DOS products. Under the Atari DOS scheme, sector 360 was the FAT sector map, and sectors 361-367 contained the file listing. The Atari-brand DOS versions and compatibles used three bytes per sector for housekeeping and for the link to the next sector in the list.
Third-party DOS systems added features such as double-sided drives, subdirectories, and drive types such as 1.2 MByte and 8". Well-known 3rd party Atari DOS products included SmartDOS (distributed with the Rana disk drive), TopDos, MyDos and SpartaDOS.
7. 4. Commodore Amiga
[pic]
The pictured chip, codenamed Paula, controlled floppy access on all revisions of the Commodore Amiga as one of its many functions.
The Commodore Amiga computers used an 880 kByte format (11×512-byte sectors per track) on a 3½-inch floppy. Because the entire track was written at once, inter-sector gaps could be eliminated, saving space. The Amiga floppy controller was basic but much more flexible than the one on the PC: it was free of arbitrary format restrictions, encoding such as MFM and GCR could be done in software, and developers were able to create their own proprietary disk formats. Because of this, foreign formats such as the IBM PC-compatible one could be handled with ease (by use of CrossDOS, which was included with later versions of AmigaOS). With the correct filesystem driver, an Amiga could theoretically read any arbitrary format on a 3½-inch floppy, including those recorded at a slightly different rotation rate. On the PC, however, there is no way to read an Amiga disk without special hardware, such as a CatWeasel, or a second floppy drive, [30] which is also why an emulator running on a PC cannot access real Amiga disks inserted in a standard PC floppy disk drive.
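The 880 kB figure follows from the track layout just described. The sketch below assumes the usual 80 tracks per side and two sides, and compares it with the common PC double-density layout of 9 sectors per track (the PC figure is included only for contrast):

    # Amiga DD: 11 sectors of 512 bytes per track, written as one whole track,
    # so no inter-sector gaps are needed.  80 tracks per side and 2 sides assumed.
    amiga_dd = 11 * 512 * 80 * 2
    pc_dd    = 9 * 512 * 80 * 2          # common PC double-density layout, for comparison

    print(amiga_dd, amiga_dd // 1024)    # 901120 bytes = 880 KB
    print(pc_dd, pc_dd // 1024)          # 737280 bytes = 720 KB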
Commodore never upgraded the Amiga chip set to support high-density floppies, but sold a custom drive (made by Chinon) that spun at half speed (150 RPM) when a high-density floppy was inserted, enabling the existing floppy controller to be used. This drive was introduced with the launch of the Amiga 3000, although the later Amiga 1200 was only fitted with the standard DD drive. The Amiga HD disks could hold 1760 kByte, and with special software programs they could hold even more data. A company named Kolff Computer Supplies also made an external HD floppy drive (KCS Dual HD Drive) available which could handle HD format diskettes on all Amiga computer systems. [31]
For reasons of storage efficiency, emulator use and data preservation, many disks were packed into disk images. Currently popular formats are .ADF (Amiga Disk File), .DMS (DiskMasher) and .IPF (Interchangeable Preservation Format) files. The DiskMasher format is copyright-protected and has problems storing particular sequences of bits due to bugs in the compression algorithm, but was widely used in the pirate and demo scenes. ADF has been around for almost as long as the Amiga itself, though it was not initially called by that name; only with the advent of the Internet and Amiga emulators has it become a popular way of distributing disk images. The proprietary IPF files were created to allow preservation of commercial games with copy protection, something that ADF and DMS cannot do.
7. 5. Acorn Electron, BBC Micro, and Acorn Archimedes
The British company Acorn used non-standard disk formats in its 8-bit BBC Micro and Acorn Electron, and their successor, the 32-bit Acorn Archimedes. Acorn did, however, use standard disk controllers: initially FM, though it quickly transitioned to MFM. The original disk implementation for the BBC Micro stored 100 KB (40 track) or 200 KB (80 track) per side on 5¼-inch disks in a custom format using the Disc Filing System (DFS).
Because of the incompatibility between 40- and 80-track drives, much software was distributed on combined 40/80 track discs. These worked by writing the same data in pairs of consecutive tracks in the 80-track format, and including a small loader program on track 1 (which is in the same physical position in either format). The loader program detected which type of drive was in use and loaded the main software program straight from disc, bypassing the DFS, double-stepping for 80-track drives and single-stepping for 40-track drives. This effectively reduced the capacity of either disk format to 100 KB, but allowed the distributed software to work with either drive.
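A minimal sketch of the track mapping such a loader performs, assuming tracks are numbered from zero and each logical track n of the 40-track image is duplicated on physical tracks 2n and 2n+1 of the 80-track layout (real loaders varied between titles):

    def physical_track(logical_track: int, drive_is_80_track: bool) -> int:
        """Physical track to seek to for a given logical 40-track-format track.

        An 80-track drive double-steps and reads the first of the two copies;
        a 40-track drive single-steps and reads the only copy it can see.
        """
        return logical_track * 2 if drive_is_80_track else logical_track

    print(physical_track(5, drive_is_80_track=True))    # 10
    print(physical_track(5, drive_is_80_track=False))   # 5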
For their Electron floppy disk add-on, Acorn picked 3½-inch disks and developed the Advanced Disc Filing System (ADFS). It used double-density recording and added the ability to treat both sides of the disk as a single drive. This offered three formats: S (small), 160 KB, 40-track single-sided; M (medium), 320 KB, 80-track single-sided; and L (large), 640 KB, 80-track double-sided. ADFS provided a hierarchical directory structure, rather than the flat model of DFS. ADFS also stored some metadata about each file, notably a load address, an execution address, owner and public privileges, and a "lock" bit. Even on the eight-bit machines, load addresses were stored in 32-bit format, since those machines supported 16- and 32-bit coprocessors.
The ADFS format was later adopted into the BBC line upon release of the BBC Master. The BBC Master Compact marked the move to 3½-inch disks, using the same ADFS formats.
The Acorn Archimedes added D format, which increased the number of objects per directory from 44 to 77 and increased the storage space to 800 KB. The extra space was obtained by using 1024 byte sectors instead of the usual 512 bytes, thus reducing the space needed for inter-sector gaps. As a further enhancement, successive tracks were offset by a sector, giving time for the head to advance to the next track without missing the first sector, thus increasing bulk throughput. The Archimedes used special values in the ADFS load/execute address metadata to store a 12-bit filetype field and a 40-bit timestamp.
RISC OS 2 introduced E format, which retained the same physical layout as D format, but supported file fragmentation and auto-compaction. Post-1991 machines including the A5000 and Risc PC added support for high-density disks with F format, storing 1600 KB. However, the PC combo IO chips used were unable to format disks with sector skew, losing some performance. ADFS and the PC controllers also support extended-density disks as G format, storing 3200 KB, but ED drives were never fitted to production machines.
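The stated capacities are consistent with the 1024-byte sector size. The sketch below infers the sectors-per-track counts from them; the values 5, 10 and 20 are implied by the arithmetic rather than quoted in the text, and 80 tracks with two sides are assumed throughout:

    bytes_per_sector = 1024
    tracks, sides = 80, 2

    for fmt, sectors_per_track in (("D", 5), ("F", 10), ("G", 20)):
        capacity = bytes_per_sector * sectors_per_track * tracks * sides
        print(fmt, capacity // 1024, "KB")    # D 800 KB, F 1600 KB, G 3200 KB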
With RISC OS 3, the Archimedes could also read and write disk formats from other machines, for example the Atari ST and the IBM PC. With third party software it could even read the BBC Micro's original single density 5¼-inch DFS disks. The Amiga's disks could not be read as they used unusual sector gap markers.
The Acorn filesystem design was interesting because all ADFS-based storage devices connected to a module called FileCore which provided almost all the features required to implement an ADFS-compatible filesystem. Because of this modular design, it was easy in RISC OS 3 to add support for so-called image filing systems. These were used to implement completely transparent support for IBM PC format floppy disks, including the slightly different Atari ST format. Computer Concepts released a package that implemented an image filing system to allow access to high density Macintosh format disks.
7. 6. IBM DemiDiskettes
[pic]
IBM DemiDiskette media and drive
In the early 1980s, IBM Rochester developed a 4-inch floppy diskette, the DemiDiskette. The program was driven by aggressive cost goals but misjudged the market. Prospective users, both inside and outside IBM, preferred standardization to what were, by release time, only small cost reductions, and were unwilling to retool packaging, interface chips and applications for a proprietary design. The product never saw the light of day, and IBM wrote off several hundred million dollars in development and manufacturing facility costs. IBM obtained patent number 4482929 on the media and the drive for the DemiDiskette. At trade shows, the drive and media were labeled "Brown" and "Tabor".[citation needed]
7. 7. Auto-loaders
IBM developed, and several companies copied, an autoloader mechanism that could load a stack of floppies one at a time into a drive unit. These were very bulky systems, and suffered from media hangups and chew-ups more than standard drives,[citation needed] but they were a partial answer to replication and large removable storage needs. The smaller 5¼- and 3½-inch floppy made this a much easier technology to perfect.
7. 8. Floppy mass storage
A number of companies, including IBM and Burroughs, experimented with using large numbers of unenclosed disks to create massive amounts of storage. The Burroughs system used a stack of 256 12-inch disks, spinning at a high speed. The disk to be accessed was selected by using air jets to part the stack, and then a pair of heads flew over the surface as in any standard hard disk drive. This approach in some ways anticipated the Bernoulli disk technology implemented in the Iomega Bernoulli Box, but head crashes or air failures were spectacularly messy. The program did not reach production.
7. 9. 2-inch floppy disks
See also: Video Floppy
[pic]
2-inch Video Floppy Disk from Canon.
A small floppy disk was also used in the late 1980s to store video information for still video cameras such as the Sony Mavica (not to be confused with later Digital Mavica models) and the Ion and Xapshot cameras from Canon. It was officially referred to as a Video Floppy (or VF for short).
VF was not a digital data format; each track on the disk stored one video field in the analog interlaced composite video format in either the North American NTSC or European PAL standard. This yielded a capacity of 25 images per disk in frame mode and 50 in field mode.
The same media were also used in a digital format (720 kB, 245 TPI, 80 tracks per side, double-sided, double-density) in the Zenith Minisport laptop computer circa 1989. Although the media performed nearly identically to the 3½-inch disks of the time, they were not successful. This was due in part to the scarcity of other devices using the drive, which made it impractical for software transfer, and to the cost of the media, which was much higher than that of 3½-inch and 5¼-inch disks of the time.
7. 10. Ultimate capacity and speed
Floppy disk drive and floppy media manufacturers specify an unformatted capacity, which is, for example, 2.0 MB for a standard 3½-inch HD floppy. The implication is that this capacity should not be exceeded, since doing so degrades the design margins of the floppy system and can cause problems such as disks that cannot be interchanged between drives, or even data loss. Nevertheless, the Distribution Media Format was later introduced, permitting 1680 KB to fit onto an otherwise standard 3½-inch disk, and utilities then appeared allowing disks to be formatted to this capacity.
The nominal formatted capacity printed on labels is "1.44 MB", which uses an incorrect definition of the megabyte combining decimal (base 10) and binary (base 2) units to yield 1.44×1000×1024 bytes (approximately 1.47 million bytes). This usage of the "Mega-" prefix is not compatible with the International System of Units prefixes. Using SI-compliant definitions, the capacity of a 3½-inch HD floppy is properly written as 1.47 MB (base 10) or 1.41 MiB (base 2).
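The three figures are easy to reproduce from the raw byte count; a minimal sketch of the arithmetic:

    capacity = 512 * 18 * 80 * 2            # 1,474,560 bytes on a standard 3.5-inch HD disk

    print(capacity / (1000 * 1024))         # 1.44     -- the mixed decimal/binary "MB" on the label
    print(capacity / 1_000_000)             # 1.47456  -- megabytes, SI (base 10)
    print(capacity / 2**20)                 # 1.40625  -- mebibytes (base 2)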
User available data capacity is a function of the particular disk format used which in turn is determined by the FDD controller manufacturer and the settings applied to its controller. The differences between formats can result in user data capacities ranging from approximately 1300 KB up to 1760 KB (1.80 MB) on a "standard" 3½-inch High Density floppy (and even up to near 2 MB with utilities like 2MGUI). The highest capacity techniques require much tighter matching of drive head geometry between drives; this is not always possible and cannot be relied upon. The LS-240 drive supports a (rarely used) 32 MB capacity on standard 3½-inch HD floppies[citation needed]—it is, however, a write-once technique, and cannot be used in a read/write/read mode. All the data must be read off, changed as needed and rewritten to the disk. The format also requires an LS-240 drive to read.
Double-sided Extended-density (DSED) 3½″ floppy disks, introduced by Toshiba in 1987 and adopted by IBM on the PS/2 in 1994, [17] operate at twice the data rate and have twice the capacity of DSHD 3½″ FDDs. [32] The only serious attempt to speed up a 3½” floppy drive beyond 2x was the X10 accelerated floppy drive. It used a combination of RAM and 4x spindle speed to read a floppy in less than six seconds versus the more than one minute of a conventional drive.
3½-inch HD floppy drives typically have a maximum transfer rate of 1000 kilobits/second (minus overhead such as error correction and file handling). (For comparison, a 1x CD transfers at 1200 kilobits per second maximum, and a 1x DVD transfers at approximately 11,000 kilobits per second.) While the floppy's data rate cannot easily be changed, overall performance can be improved by optimizing drive access times, shortening some BIOS-introduced delays (especially on the IBM PC and compatible platforms), and by changing the sector:shift parameter of a disk, which is, roughly, the number of sectors skipped by the drive's head when moving to the next track. Because of overhead and these additional delays, the average sequential read speed is closer to 30-70 KB/s than to the theoretical 125 KB/s.
This happens because sectors are not typically written in strictly sequential order but are scattered around the disk, which introduces yet another delay. Older machines and controllers may take advantage of these delays to cope with the data flow from the disk without having to actually stop.
8. Usability
One of the chief usability problems of the floppy disk is its vulnerability. Even inside a closed plastic housing, the disk medium is still highly sensitive to dust, condensation and temperature extremes. As with any magnetic storage, it is also vulnerable to magnetic fields. Blank disks were usually distributed with an extensive set of warnings cautioning the user not to expose them to dangerous conditions. The disk must not be roughly handled or removed from the drive while the access light is on and the magnetic medium is still spinning, since doing so is likely to damage the disk or drive head, or to render the data on it inaccessible.
Users damaging floppy disks (or their contents) were once a staple of "stupid user" folklore among computer technicians. These stories poked fun at users who stapled floppies to papers, made faxes or photocopies of them when asked to "copy a disk," or stored floppies by holding them with a magnet to a file cabinet. The flexible 5¼-inch disk could also (apocryphally) be abused by rolling it into a typewriter to type a label, or by removing the disk medium from the plastic enclosure, the same way a record is removed from its slipsleeve. Also, these same users were, conversely, often the victims of technicians' hoaxes. Stories of them being carried on Subway/Underground systems wrapped in tin-foil to protect them from the magnetic fields of the electric power supply were common (for an explanation of why this is plausible, see Faraday cage).
On the other hand, the 3½-inch floppy has also been lauded for its mechanical usability by HCI expert Donald Norman:
A simple example of a good design is the 3½-inch magnetic diskette for computers, a small circle of "floppy" magnetic material encased in hard plastic. Earlier types of floppy disks did not have this plastic case, which protects the magnetic material from abuse and damage. A sliding metal cover protects the delicate magnetic surface when the diskette is not in use and automatically opens when the diskette is inserted into the computer. The diskette has a square shape: there are apparently eight possible ways to insert it into the machine, only one of which is correct. What happens if I do it wrong? I try inserting the disk sideways. Ah, the designer thought of that. A little study shows that the case really isn't square: it's rectangular, so you can't insert a longer side. I try backward. The diskette goes in only part of the way. Small protrusions, indentations, and cutouts, prevent the diskette from being inserted backward or upside down: of the eight ways one might try to insert the diskette, only one is correct, and only that one will fit. An excellent design. [33]
If a floppy drive is used very infrequently, dust may accumulate on the drive's read/write head and damage floppy disks. This rarely happens on floppy drives that are used frequently[dubious - discuss]. To overcome this, the user can first insert a cleaning disk (or an expendable disk) and run the drive, so that dust is cleaned off the read/write head.
Magnetic tape data storage
[pic]
Magnetic tape has been used for data storage for over 50 years. In this time, many advances in tape formulation, packaging, and data density have been made. Modern magnetic tape is most commonly packaged in cartridges and cassettes. The device that performs actual writing or reading of data is a tape drive. Autoloaders and tape libraries are frequently used to automate cartridge handling.
When storing large amounts of data, tape can be substantially less expensive than disk or other data storage options. Tape storage has always been used with large computer systems. Modern usage is primarily as a high capacity medium for backups and archives. As of 2008, the highest capacity tape cartridges (Sun StorageTek T10000B, IBM TS1130) can store 1 TB of uncompressed data.
Contents:
1. Open reels
2. Cartridges and cassettes
3. Technical details
4. Viability
5. Chronological list of tape formats
6. See also
7. References
1. Open reels
[pic]
10.5 inch reel of 9 track tape
Initially, magnetic tape for data storage was wound on large (10.5 in/26.67 cm) reels. This de facto standard for large computer systems persisted through the late 1980s. Tape cartridges and cassettes were available as early as the mid-1970s and were frequently used with small computer systems. With the introduction of the IBM 3480 cartridge in 1984, large computer systems started to move away from open-reel tapes and towards cartridges.
1. 1. UNIVAC
Magnetic tape was first used to record computer data in 1951 on the Eckert-Mauchly UNIVAC I. The UNISERVO drive's recording medium was a thin metal strip of ½-inch-wide (12.7 mm) nickel-plated phosphor bronze. Recording density was 128 characters per inch (198 micrometres per character) on eight tracks at a linear speed of 100 in/s (2.54 m/s), yielding a data rate of 12,800 characters per second. Of the eight tracks, six were data, one was a parity track, and one was a clock, or timing, track. Making allowance for the empty space between tape blocks, the actual transfer rate was around 7,200 characters per second. A small reel of mylar tape provided separation between the metal tape and the read/write head.
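The quoted data rate is simply the recording density multiplied by the tape speed, and the effective rate shows the cost of inter-block gaps; a small sketch of that arithmetic:

    density_cpi = 128            # characters per inch
    speed_ips = 100              # tape speed in inches per second

    raw_rate = density_cpi * speed_ips
    print(raw_rate)              # 12800 characters per second

    # Inter-block gaps reduce the effective rate to about 7,200 characters per second,
    # i.e. roughly 56% of the raw figure.
    print(7200 / raw_rate)       # 0.5625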
1. 2. IBM formats
IBM computers from the 1950s used ferrous-oxide coated tape similar to that used in audio recording. IBM's technology soon became the de facto industry standard. Magnetic tape dimensions were 0.5 inches (12.7 mm) wide, wound on removable reels of up to 10.5 inches (267 mm) in diameter. Different tape lengths were available, with 1,200 ft and 2,400 ft on 1.5 mil (one-and-a-half mil) thickness being somewhat standard. Later, during the 1980s, longer tape lengths such as 3,600 ft became available, but only with a much thinner PET film. Most tape drives could support a maximum reel size of 10.5 inches.
Early IBM tape drives, such as the IBM 727 and IBM 729, were mechanically sophisticated floor-standing drives that used vacuum columns to buffer long U-shaped loops of tape. Between active control of powerful reel motors and vacuum control of these U-shaped tape loops, extremely rapid starting and stopping of the tape at the tape-to-head interface could be achieved (1.5 ms from stopped tape to a full speed of up to 112.5 inches per second). When active, the two tape reels thus fed tape into or pulled tape out of the vacuum columns, intermittently spinning in rapid, unsynchronized bursts, resulting in visually striking action. Stock shots of such vacuum-column tape drives in motion were widely used to represent "the computer" in movies and television.
Early half-inch tape had seven parallel tracks of data along the length of the tape, allowing six-bit characters plus one bit of parity to be written across the tape. This was known as 7-track tape. With the introduction of the IBM System/360 mainframe, 9-track tapes were developed to support the new 8-bit characters that it used. Effective recording density increased over time. Common 7-track densities started at 200, then 556, and finally 800 cpi, and 9-track tapes had densities of 800, 1600, and 6250 cpi. This translates into about 5 MB to 140 MB per standard length (2,400 ft) reel of tape. At least partly due to the success of the S/360, 9-track tapes were widely used throughout the industry through the 1980s. End of file was designated by a tape mark and end of tape by two tape marks.
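The 5 MB to 140 MB range quoted for a 2,400 ft reel can be roughly reproduced from the recording densities alone. The sketch below ignores inter-block gaps, which is why the top end comes out above the usable 140 MB figure:

    reel_length_in = 2400 * 12               # a standard 2,400 ft reel, in inches

    for cpi in (200, 556, 800, 1600, 6250):  # common 7-track and 9-track densities
        raw_bytes = cpi * reel_length_in     # one character (byte) per frame across the tape
        print(cpi, round(raw_bytes / 1e6, 1), "MB raw")

    # 200 cpi gives about 5.8 MB raw and 6250 cpi about 180 MB raw; inter-block gaps
    # bring the usable figures down to roughly the 5 MB to 140 MB range quoted above.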
1. 3. DEC format
LINCtape, and its derivative, DECtape, were variations on this "round tape." They were essentially a personal storage medium. The tape was ¾ inch wide and featured a fixed formatting track which, unlike standard tape, made it feasible to read and rewrite blocks repeatedly in place. LINCtapes and DECtapes had similar capacity and data transfer rate to the diskettes that displaced them, but their "seek times" were on the order of thirty seconds to a minute.
2. Cartridges and cassettes
[pic]
Quarter-Inch cartridges.
In the context of magnetic tape, the term cassette usually refers to an enclosure that holds two reels with a single span of magnetic tape. The term cartridge is more generic, but frequently means a single reel of tape in a plastic enclosure.
The type of packaging is a large determinant of the load and unload times as well as the length of tape that can be held. A tape drive that uses a single reel cartridge has a takeup reel in the drive while cassettes have the take up reel in the cassette. A tape drive (or "transport" or "deck") uses precisely-controlled motors to wind the tape from one reel to the other, passing a read/write head as it does.
A different type of tape cartridge has a continuous loop of tape wound on a special reel that allows tape to be withdrawn from the center of the reel and then wrapped up around the edge. This type is similar to a cassette in that there is no take-up reel inside the tape drive.
In the 1970s and 1980s, audio Compact Cassettes were frequently used as an inexpensive data storage system for home computers. Compact cassettes were logically, as well as physically, sequential; they had to be rewound and read from the start to load data. Early cartridges were available before personal computers had affordable disk drives, and could be used as random access devices, automatically winding and positioning the tape, albeit with access times of many seconds.
Most modern magnetic tape systems use reels that are fixed inside a cartridge to protect the tape and facilitate handling. Modern cartridge formats include DAT/DDS, DLT and LTO with capacities in the tens to hundreds of gigabytes.
3. Technical details
3. 1. Tape width
Medium width is the primary classification criterion for tape technologies. Half inch has historically been the most common width of tape for high capacity data storage. Many other sizes exist and most were developed to either have smaller packaging or higher capacity.
3. 2. Recording method
[pic]
Linear
Recording method is also an important way to classify tape technologies, generally falling into two categories:
3. 2. 1. Linear
The linear method arranges data in long parallel tracks that span the length of the tape. Multiple tape heads simultaneously write parallel tape tracks on a single medium. This method was used in early tape drives. It is the simplest recording method, but has the lowest data density.
[pic]
Linear serpentine
A variation on linear technology is linear serpentine recording, which uses more tracks than tape heads. Each head still writes one track at a time. After making a pass over the whole length of the tape, all heads shift slightly and make another pass in the reverse direction, writing another set of tracks. This procedure is repeated until all tracks have been read or written. By using the linear serpentine method, the tape medium can have many more tracks than read/write heads. Compared to simple linear recording, using the same tape length and the same number of heads, the data storage capacity is substantially higher.
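As an illustration of the serpentine idea, the sketch below computes how many end-to-end passes a drive needs to cover all of its tracks. The 384-track, 16-head figures are invented for the example and do not describe any particular product:

    def serpentine_passes(total_tracks: int, heads: int) -> int:
        """End-to-end passes needed when `heads` tracks are written in parallel per pass."""
        return -(-total_tracks // heads)     # ceiling division

    # Hypothetical drive: 384 tracks on the medium, 16 heads writing in parallel.
    print(serpentine_passes(384, 16))        # 24 passes, alternating direction each time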
3. 2. 2. Scanning
[pic]
Helical
Scanning recording methods write short dense tracks across the width of the tape medium, not along the length. Tape heads are placed on a drum or disk which rapidly rotates while the relatively slowly moving tape passes it.
An early method used to get a higher data rate than the prevailing linear method was transverse scan. In this method a spinning disk, with the tape heads embedded in the outer edge, is placed perpendicular to the path of the tape. This method is used in Ampex's DCRsi instrumentation data recorders and the old 2 inch Quadruplex videotape system. Another early method was arcuate scan. In this method, the heads are on the face of a spinning disk which is laid flat against the tape. The path of the tape heads makes an arc.
Helical scan recording writes short dense tracks in diagonal manner. This recording method is used by virtually all videotape systems and several data tape formats.
3. 3. Block layout
In a typical format, data is written to tape in blocks with inter-block gaps between them, and each block is written in a single operation with the tape running continuously during the write. However, since the rate at which data is written or read to the tape drive is not deterministic, a tape drive usually has to cope with a difference between the rate at which data goes on and off the tape and the rate at which data is supplied or demanded by its host.
Various methods have been used alone and in combination to cope with this difference. The tape drive can be stopped, backed up, and restarted (known as shoe-shining, because of increased wear of both medium and head). A large memory buffer can be used to queue the data. The host can assist this process by choosing appropriate block sizes to send to the tape drive. There is a complex tradeoff between block size, the size of the data buffer in the record/playback deck, the percentage of tape lost on inter-block gaps, and read/write throughput.
Finally, modern tape drives offer a speed-matching feature, in which the drive can dynamically decrease the physical tape speed by as much as 50% to avoid shoe-shining.
3. 4. Sequential access to data
From the user's perspective, the primary difference between tape storage and disk storage is that tape is a sequential access medium while disk is a random access medium. Most tape systems use a very simple filesystem in which files are addressed by number, not by filename; metadata such as file names or modification times is typically not stored at all.
3. 5. Random access to data
Over time, tools such as tar were introduced that store metadata by packing multiple files, in a richer format, into a single large 'tape file'.
With the introduction of LTFS, the Linear Tape File System, tape supports random access and file-level access to data. This makes it possible to use a tape drive much like a USB stick or an external hard disk: single files or whole directories can be cut, copied and pasted.
3. 6. Access time
Tape has quite a long latency for random accesses, since the deck must wind an average of one-third of the tape length to move from one arbitrary data block to another. Most tape systems attempt to alleviate this intrinsic latency, either by indexing, where a separate lookup table (tape directory) gives the physical tape location for a given data block number (a must for serpentine drives); by marking blocks with a tape mark that can be detected while winding the tape at high speed; or by using LTFS.
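The one-third figure is just the expected distance between two uniformly distributed positions on the tape; a quick Monte Carlo check of that rule of thumb (pure illustration, not a model of any particular drive):

    import random

    def average_seek_fraction(samples: int = 1_000_000) -> float:
        """Average distance between two random positions on a unit-length tape."""
        return sum(abs(random.random() - random.random()) for _ in range(samples)) / samples

    # The analytic expectation of |X - Y| for X, Y uniform on [0, 1] is exactly 1/3.
    print(average_seek_fraction())           # ~0.333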
3. 7. Data compression
Most tape drives now include some kind of data compression. There are several algorithms which provide similar results: LZ (most), IDRC (Exabyte), ALDC (IBM, QIC) and DLZ1 (DLT). Embedded in tape drive hardware, these compress a relatively small buffer of data at a time, so cannot achieve extremely high compression even of highly redundant data. A ratio of 2:1 is typical, with some vendors claiming 2.6:1 or 3:1. The ratio actually obtained with real data is often less than the stated figure; the compression ratio cannot be relied upon when specifying the capacity of equipment, e.g., a drive claiming a compressed capacity of 500GB may not be adequate to back up 500GB of real data. Software compression can achieve much better results with sparse data, but uses the host computer's processor, and can slow the backup if it is unable to compress as fast as the data is written.
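The warning about relying on the stated ratio is simple arithmetic; the figures in the sketch below are hypothetical rather than taken from any vendor's specification:

    native_capacity_gb = 250      # hypothetical drive, marketed as "250/500" at a 2:1 ratio
    claimed_ratio = 2.0
    actual_ratio = 1.3            # e.g. data that is already partly compressed

    print(native_capacity_gb * claimed_ratio)   # 500.0 GB on the box
    print(native_capacity_gb * actual_ratio)    # 325.0 GB of such data actually fits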
Some enterprise tape drives can encrypt data (this must be done after compression, as encrypted data cannot be compressed effectively). Symmetric streaming encryption algorithms are also implemented to provide high performance.
The compression algorithms used in low-end products are not the most effective known today, and better results can usually be obtained by turning off hardware compression, using software compression (and encryption if desired) instead.
4. Viability
Tape has historically offered enough advantage in bit density and lower cost per bit to justify its choice over disk storage for backup. However, the days of tape backup appear to be numbered.[says who?] As of February 2010, the price of the highest-capacity tape drive (Oracle's Sun StorageTek T10000B) was $37,000 for 1 TB of native capacity. This compares with around $150 for a 2 TB SATA disk drive, with similar write speeds but the advantages of rewritability and minuscule access time. Many administrators[which?] will ask themselves why they should archive racks of tapes instead of racks of hard disks. The likelihood of the technology (tape or disk) becoming out of date before one has spent, on disks, what the tape drive alone costs must also be considered.
Nonetheless, tape still has some advantages, such as robustness and package size, although the former will wane with the adoption of caddies. The disadvantage of a plethora of proprietary tape formats, changing frequently over time, may be the final nail in the coffin of tape backup. Dell made a spirited defence of tape backup in a now-ageing July 2008 ComputerWorld Technology Briefing. [1] However, the rapid improvement in disk storage density and price, coupled with arguably less vigorous innovation in tape storage, looks set to continue tape storage's decline.[says who?]
5. Chronological list of tape formats
[pic]
IBM 729V
• 1951 - UNISERVO
• 1952 - IBM 7 track
• 1958 - TX-2 Tape System
• 1962 - LINCtape
• 1963 - DECtape
• 1964 - 9 Track
• 1964 - Magnetic tape selectric typewriter
• 1972 - QIC
• 1975 - KC Standard, Compact Cassette
• 1976 - DC100
• 1977 - Datassette
• 1979 - DECtapeII
• 1979 - Exatron Stringy Floppy
• 1983 - ZX Microdrive
• 1984 - Rotronics Wafadrive
• 1984 - IBM 3480
• 1984 - DLT
• 1986 - SLR
• 1987 - Data8
• 1989 - DDS/DAT
• 1992 - Ampex DST
• 1994 - Mammoth
• 1995 - IBM 3590
• 1995 - Redwood SD-3
• 1995 - Travan
• 1996 - AIT
• 1997 - IBM 3570 MP
• 1998 - T9840
• 1999 - VXA
• 2000 - T9940
• 2000 - LTO Ultrium
• 2003 - SAIT
• 2006 - T10000
• 2007 - IBM 3592
• 2008 - IBM TS1130
Tape drive
[pic]
A tape drive is a data storage device that reads and writes data on a magnetic tape. It is typically used for off-line, archival data storage. Tape media generally has a favorable unit cost and long archival stability.
A tape drive provides sequential access storage, unlike a disk drive, which provides random access storage. A disk drive can move its read/write head(s) to any random part of the disk in a very short amount of time, but a tape drive must spend a considerable amount of time winding tape between reels to read any one particular piece of data. As a result, tape drives have very slow average seek times. Despite the slow seek time, tape drives can stream data to and from tape very quickly. For example, modern LTO drives can reach continuous data transfer rates of up to 80 MB/s, which is as fast as most 10,000 RPM hard disks.
[pic]
DDS tape drive. Above, from left to right: DDS-4 tape (20 GB), 112m Data8 tape (2.5 GB), QIC DC-6250 tape (250 MB), and a 3.5" floppy disk (1.44 MB)
Contents:
1. Design
2. Media
3. History
4. Notes
5. References
1. Design
[pic]
An external QIC tape drive.
Tape drives can range in capacity from a few megabytes to hundreds of gigabytes of uncompressed data. In marketing materials, tape storage is usually described assuming a 2:1 compression ratio, so a tape drive might be labelled 80/160, meaning that the true storage capacity is 80 GB while the compressed capacity can be approximately 160 GB in many situations. IBM and Sony have also used higher compression ratios in their marketing materials. The real-world, observed compression ratio always depends on the type of data being compressed. The true storage capacity is also known as the native capacity or the raw capacity.
Tape drives can be connected to a computer with SCSI (most common), Fibre Channel, SATA, USB, FireWire, FICON, or other [1] interfaces. Tape drives can be found inside autoloaders and tape libraries which assist in loading, unloading and storing multiple tapes to further increase archive capacity.
Some older tape drives were designed as inexpensive alternatives to disk drives. Examples include DECtape, the ZX Microdrive and Rotronics Wafadrive. This is generally not feasible with modern tape drives that use advanced techniques like multilevel forward error correction, shingling, and serpentine layout for writing data to tape.
1. 1. Problems
An effect referred to as shoe-shining may occur during read/write operations if the data transfer rate falls below the minimum threshold at which the tape drive heads were designed to transfer data to or from a continuously running tape. When the transfer rate becomes too low and streaming is no longer possible, the drive must decelerate and stop the tape, rewind it a short distance, restart it, position back to the point at which streaming stopped and then resume the operation. The resulting back-and-forth tape motion resembles that of shining shoes with a cloth.
In early tape drives, non-continuous data transfer was normal and unavoidable: computers with slow processors and little memory were rarely able to provide a constant stream, so tape drives were typically designed for so-called start-stop operation. Early drives used very large spools, which necessarily had high inertia and did not start and stop moving easily. To provide high start, stop, and seeking performance, several feet of loose tape was played out and pulled by a suction fan down into two deep open channels on either side of the tape head and capstans. The long thin loops of tape hanging in these vacuum columns had far less inertia than the two reels and could be rapidly started, stopped and repositioned. The large reels would occasionally move to take up written tape and play out more blank tape into the vacuum columns.
Later, most tape drive designs of the 1980s introduced the use of an internal data buffer to somewhat reduce start-stop situations. These drives are colloquially referred to as streamers. The tape was stopped only when the buffer contained no data to be written, or when it was full of data during reading. As the tape speed increased, the start-stop operation was no longer possible, and the drives started to suffer from shoe-shining (sequence of stop, rewind, start).
More recent drives no longer operate at a single fixed linear speed, but have several speed levels. Internally, they implement algorithms that dynamically match the tape speed level to the computer's data rate. Example speed levels might be 50 percent, 75 percent and 100 percent of full speed. A computer that consistently supplies data below the lowest speed level (e.g. at 49 percent) will still cause shoe-shining.
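A minimal sketch of the speed-matching decision described above, using the 50/75/100 percent levels as an example; real drive firmware is of course far more involved:

    def choose_speed_level(host_rate_mb_s, full_speed_mb_s, levels=(0.50, 0.75, 1.00)):
        """Pick the fastest tape speed level that the host's data rate can sustain.

        If even the slowest level outruns the host, the drive cannot stream
        and shoe-shining is to be expected.
        """
        feasible = [lvl for lvl in levels if lvl * full_speed_mb_s <= host_rate_mb_s]
        return max(feasible) if feasible else min(levels)

    # Hypothetical drive with a full speed of 80 MB/s:
    print(choose_speed_level(70, 80))   # 0.75 -> run at 60 MB/s and stream
    print(choose_speed_level(30, 80))   # 0.5  -> even 40 MB/s outruns the host: shoe-shining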
When shoe-shining occurs, it significantly affects the attainable data rate, as well as drive and tape life.
2. Media
Magnetic tape is commonly housed in a plastic casing known as a cassette or a cartridge, for example the 4-track cartridge and the compact cassette. The cassette contains the magnetic tape, allowing different content to be provided to the same player simply by changing cassettes. The plastic outer shell makes the fragile tape easy to handle, making it far more convenient and robust than loose or exposed tape.
3. History
|Year |Manufacturer |Model |Advancements |
|1951 |Remington Rand |UNISERVO |First computer tape drive |
|1952 |IBM |726 |Use of plastic tape (cellulose acetate); 7-track tape recording 6-bit bytes |
|1958 |IBM |729 |Separate read/write heads providing transparent read-after-write verification. [2] As of January 2009, the Computer History Museum in Mountain View, California has working IBM 729 tape drives attached to its working IBM 1401 system. [3] |
|1964 |IBM |2400 |9-track tape that could store every 8-bit byte plus a parity bit |
|1970s |IBM |3400 |Auto-loading tape reels and drives, avoiding manual tape threading; group code recording for error recovery at 6250 bit-per-inch density |
|1972 |3M |QIC-11 |Tape cassette (with two reels) |
|1974 |IBM |3850 |Tape cartridge (with single reel); first tape library with robotic access [4] |
|1980s |Commodore International |Commodore Datasette |Use of standard audio cassettes |
|1980 |Cipher |(F880?) |RAM buffer to mask start-stop delays [5] [6] |
|1984 |IBM |3480 |Internal takeup reel with automatic tape takeup mechanism; thin-film magnetoresistive (MR) head [7] |
|1984 |DEC |TK50 |Linear serpentine recording [8] |
|1986 |IBM |3480 |Hardware data compression (IDRC algorithm) [9] |
|1987 |Exabyte/Sony |EXB-8200 |First helical digital tape drive; elimination of the capstan and pinch-roller system |
|1993 |DEC |Tx87 |Tape directory (database with the first tapemark number on each serpentine pass) [10] |
|1995 |IBM |3570 |Head assembly that follows pre-recorded tape servo tracks (Time Based Servoing, or TBS) [11]; tape rewound on unload to the midpoint, halving access time (requires a two-reel cassette, resulting in lesser capacity) [12] |
|1996 |HP |DDS3 |Partial Response Maximum Likelihood (PRML) reading method, with no fixed thresholds [13] |
|1997 |IBM |VTS |Virtual tape: a disk cache that emulates a tape drive [14] |
|1999 |Exabyte |Mammoth-2 |Small cloth-covered wheel for cleaning tape heads; inactive burnishing heads to prep the tape and deflect any debris or excess lubricant; section of cleaning material at the beginning of each data tape |
|2000 |Quantum |Super DLT |Optical servo allowing more precise positioning of the heads relative to the tape [15] |
|2003 |IBM |3592 |Virtual backhitch |
|2003 |Sony |SAIT-1 |Single-reel cartridge for helical recording |
|2006 |StorageTek |T10000 |Multiple head assemblies and servos per drive [16] |
|2006 |IBM |3592 |Encryption capability integrated into the drive |
|2008 |IBM |TS1130 |GMR heads in a linear tape drive |
|2010 |IBM |TS2250 LTO Gen5 |LTFS tape file system, allowing file-level access to data and drag-and-drop to and from tape |
Magnetic tape data storage formats
Linear
• Three quarter inch (19 mm): TX-2 Tape System (1958) · LINCtape (1962) · DECtape (1963)
• Half inch (12.65 mm): UNISERVO (1951) · IBM 7 track (1952) · 9 track (1964) · IBM 3480 (1984) · DLT (1984) · IBM 3590 (1995) · T9840 (1998) · T9940 (2000) · LTO Ultrium (2000) · IBM 3592 (2003) · T10000 (2006)
• Eight millimeter (8 mm): Travan (1995) · IBM 3570 MP (1997) · ADR (1999)
• Quarter inch (6.35 mm): QIC (1972) · SLR (1986) · Ditto (1992)
• Eighth inch (3.81 mm): KC Standard, Compact Cassette (1975) · HP DC100 (1976) · Commodore Datasette (1977) · DECtapeII (1979)
• Stringy (1.58-1.9 mm): Exatron Stringy Floppy (1979) · ZX Microdrive (1983) · Rotronics Wafadrive (1984)
Helical
• Three quarter inch (19 mm): Sony DIR (19xx) · Ampex DST (1992)
• Half inch (12.65 mm): Redwood SD-3 (1995) · DTF (19xx) · SAIT (2003)
• Eight millimeter (8 mm): Data8 (1987) · Mammoth (1994) · AIT (1996) · VXA (1999)
• Eighth inch (3.81 mm): DDS/DAT (1989)
Paper data storage
Paper data storage refers to the storage of data on paper. This includes writing, illustrating, and the use of data that can be interpreted by a machine or that results from the operation of a machine. A defining feature of paper data storage is the ability of humans to produce it with only simple tools and to interpret it visually.
Though this is now mostly obsolete, paper was once also an important form of computer data storage.
Contents:
1. Machine use
2. See also
3. References
1. Machine use
The earliest use of paper to store instructions for a machine was the work of Basile Bouchon who, in 1725, used punched paper rolls to control textile looms. This technology was later developed into the wildly successful Jacquard loom. The 19th century saw several other uses of paper for data storage. In 1846, telegrams could be prerecorded on punched tape and rapidly transmitted using Alexander Bain's automatic telegraph. Several inventors took the concept of a mechanical organ and used paper to represent the music.
In the late 1880s Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine readable media had been for control (Automatons, Piano rolls, looms, ...), not data. "After some initial trials with paper tape, he settled on punched cards..." [1] Hollerith's method was used in the 1890 census and the completed results were "... finished months ahead of schedule and far under budget". [2] Hollerith's company eventually became the core of IBM.
Other technologies were also developed that allowed machines to work with marks on paper instead of punched holes. This technology was widely used for tabulating votes and grading standardized tests. Barcodes made it possible for any object that was to be sold or transported to have some computer readable information securely attached to it. Banks used magnetic ink on checks, supporting MICR scanning.
Punched card
[pic]
"Overpunch" redirects here. For the code, see Signed overpunch.
A punched card [2] (also punch card, Hollerith card or IBM card) is a piece of stiff paper that contains digital information represented by the presence or absence of holes in predefined positions. Now an almost obsolete recording medium, punched cards were widely used throughout the 19th century for controlling textile looms, and in the late 19th and early 20th century for operating fairground organs and related instruments. They were used through the 20th century in unit record machines for input, processing, and data storage. Early digital computers used punched cards, often prepared using keypunch machines, as the primary medium for input of both computer programs and data. Some voting machines use punched cards.
[pic]
Hollerith's Keyboard (pantograph) Punch, used for the 1890 census [1] .
Contents:
1. History
2. Card formats
3. IBM punched card manufacturing
4. Cultural impact
5. Standards
6. Card handling equipment
7. See also
8. Notes and References
9. Further reading
10. External links
1. History
[pic]
Punched cards in use in a Jacquard loom.
[pic]
Punched cards of a large dance organ
Punched cards were first used around 1725 by Basile Bouchon and Jean-Baptiste Falcon as a more robust form of the perforated paper rolls then in use for controlling textile looms in France. This technique was greatly improved by Joseph Marie Jacquard in his Jacquard loom in 1801.
[pic]
An 80-column punched card showing the 1964 EBCDIC character set, which added more special characters.
Semen Korsakov was reputedly the first to use punched cards in informatics, for information storage and search. Korsakov announced his new method and machines in September 1832, and rather than seeking patents offered the machines for public use. [3]
[pic]
Semen Korsakov's punched card
Charles Babbage proposed the use of "Number Cards", "pierced with certain holes", that "stand opposite levers connected with a set of figure wheels ... advanced they push in those levers opposite to which there are no holes on the card and thus transfer that number", in his description of the Calculating Engine's Store. [4]
Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine readable media, such as those above (other than Korsakov), had been for control, not data. "After some initial trials with paper tape, he settled on punched cards..." [5] , developing punched card data processing technology for the 1890 US census. He founded the Tabulating Machine Company (1896) which was one of four companies that merged to form Computing Tabulating Recording Corporation (CTR), later renamed IBM. IBM manufactured and marketed a variety of unit record machines for creating, sorting, and tabulating punched cards, even after expanding into electronic computers in the late 1950s. IBM developed punched card technology into a powerful tool for business data-processing and produced an extensive line of general purpose unit record machines. By 1950, the IBM card and IBM unit record machines had become ubiquitous in industry and government. "Do not fold, spindle or mutilate," a generalized version of the warning that appeared on some punched cards (generally on those distributed as paper documents to be later returned for further machine processing, checks for example), became a motto for the post-World War II era (even though many people had no idea what spindle meant). [6]
From the 1900s into the 1950s, punched cards were the primary medium for data entry, data storage, and processing in institutional computing. According to the IBM Archives: "By 1937... IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day." [7] Punched cards were even used as legal documents, such as U.S. Government checks [8] and savings bonds. During the 1960s, the punched card was gradually replaced as the primary means for data storage by magnetic tape, as better, more capable computers became available. Punched cards were still commonly used for data entry and programming until the mid-1970s, when the combination of lower-cost magnetic disk storage and affordable interactive terminals on less expensive minicomputers made punched cards obsolete for this role as well. [9] However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards, the IBM 3270 for example, displayed 80 columns of text in text mode, for compatibility with existing software. Some programs still operate on the convention of 80 text columns, although fewer and fewer do as newer systems employ graphical user interfaces with variable-width fonts.
Today punched cards are mostly obsolete and replaced with other storage methods, except for a few legacy systems and specialized applications.
2. Card formats
[pic]
Standard 5081 card from a non-IBM manufacturer.
The early applications of punched cards all used specifically designed card layouts. It wasn't until around 1928 that punched cards and machines were made "general purpose". The rectangular, round, or oval bits of paper punched out are called chad (recently, chads) or chips (in IBM usage). Multi-character data, such as words or large numbers, were stored in adjacent card columns known as fields. A group of cards is called a deck. One upper corner of a card was usually cut so that cards not oriented correctly, or cards with different corner cuts, could be easily identified. Cards were commonly printed so that the row and column position of a punch could be identified. For some applications printing might have included fields, named and marked by vertical lines, logos, and more.
One of the most common printed punched cards was the IBM 5081. Indeed, it was so common that other card vendors used the same number (see image at right) and even users knew its number.
2. 1. Hollerith's punched card formats
Herman Hollerith was awarded a series of patents [10] in 1889 for mechanical tabulating machines. These patents described both paper tape and rectangular cards as possible recording media. The card shown in U.S. Patent 395,781 was preprinted with a template and had holes arranged close to the edges so they could be reached by a railroad conductor's ticket punch, with the center reserved for written descriptions. Hollerith was originally inspired by railroad tickets that let the conductor encode a rough description of the passenger:
"I was traveling in the West and I had a ticket with what I think was called a punch photograph...the conductor...punched out a description of the individual, as light hair, dark eyes, large nose, etc. So you see, I only made a punch photograph of each person." [11]
Use of the ticket punch proved tiring and error prone, so Hollerith invented a pantograph "keyboard punch" that allowed the entire card area to be used. It also eliminated the need for a printed template on each card; instead, a master template was used at the punch, and a printed reading card could be placed under a card that was to be read manually. Hollerith envisioned a number of card sizes. In an article he wrote describing his proposed system for tabulating the 1890 U.S. Census, Hollerith suggested a card 3 inches by 5½ inches of Manila stock "would be sufficient to answer all ordinary purposes." [12]
[pic]
Hollerith card as shown in the Railroad Gazette, April 19, 1895.
The cards used in the 1890 census had round holes, 12 rows and 24 columns. A census card and reading board for these cards can be seen at the Columbia University Computing History site. [13] At some point, 3.25 by 7.375 inches (3¼" x 7⅜") became the standard card size, a bit larger than the United States one-dollar bill of the time (the dollar was changed to its current size in 1929). The Columbia site says Hollerith took advantage of available boxes designed to transport paper currency.
Hollerith's original system used an ad-hoc coding system for each application, with groups of holes assigned specific meanings, e.g. sex or marital status. His tabulating machine had 40 counters, each with a dial divided into 100 divisions and two indicator hands: one stepped one unit with each counting pulse, and the other advanced one unit every time the first hand made a complete revolution. This arrangement allowed a count up to 10,000. During a given tabulating run, each counter was typically assigned a specific hole. Hollerith also used relay logic to allow counts of combinations of holes, e.g. to count married females. [12]
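The counting mechanism is easy to model in software: the unit hand counts pulses modulo 100, and the second hand advances on each complete revolution, giving 100 × 100 = 10,000 possible counts. The short Python sketch below only illustrates this idea; the class and names are invented for the example, not taken from any historical source.

class DialCounter:
    """Model of a two-hand tabulating counter: a unit hand that steps once per
    counting pulse, and a second hand that advances each time the unit hand
    completes a full revolution (both dials have 100 divisions)."""

    DIVISIONS = 100  # each dial face is divided into 100 positions

    def __init__(self):
        self.unit_hand = 0      # steps on every pulse
        self.hundreds_hand = 0  # steps on every full revolution of the unit hand

    def pulse(self):
        """Register one counting pulse (one punched hole sensed)."""
        self.unit_hand += 1
        if self.unit_hand == self.DIVISIONS:
            self.unit_hand = 0
            self.hundreds_hand = (self.hundreds_hand + 1) % self.DIVISIONS

    @property
    def total(self):
        return self.hundreds_hand * self.DIVISIONS + self.unit_hand


counter = DialCounter()
for _ in range(1234):
    counter.pulse()
print(counter.total)  # 1234 (the counter can reach 9,999 before wrapping)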
Later designs standardized the coding, with 12 rows, where the lower ten rows coded digits 0 through 9. This allowed groups of holes to represent numbers that could be added, instead of simply counting units. Hollerith's 45 column punched cards are illustrated in Comrie's The application of the Hollerith Tabulating Machine to Brown's Tables of the Moon. [14] [15]
2. 2. IBM 80 column punched card format
[pic]
Card from a Fortran program: Z(1) = Y + W(1)
[pic]
"A good operator can turn out 1,500 punch cards daily." Operators compiling hydrographic data for navigation charts on punch cards, New Orleans, 1938.
This IBM card format, designed in 1928, [16] had rectangular holes, 80 columns with 12 punch locations each, one character to each column. Card size was exactly 7-3/8 inch by 3-1/4 inch (187.325 by 82.55 mm). The cards were made of smooth stock, 0.007 inch (0.178 mm) thick. There are about 143 cards to the inch. In 1964, IBM changed from square to round corners. [17]
The lower ten positions represented (from top to bottom) the digits 0 through 9. The top two positions of a column were called zone punches, 12 (top) and 11. Originally only numeric information was punched, with 1 punch per column indicating the digit. Signs could be added to a field by overpunching the least significant digit with a zone punch: 12 for plus and 11 for minus. Zone punches had other uses in processing as well, such as indicating a master record.
     _______________________________________
    /&-0123456789ABCDEFGHIJKLMNOPQR/STUVWXYZ
  Y/ x           xxxxxxxxx
  X|  x                   xxxxxxxxx
  0|   x                           xxxxxxxxx
  1|    x        x        x        x
  2|     x        x        x        x
  3|      x        x        x        x
  4|       x        x        x        x
  5|        x        x        x        x
  6|         x        x        x        x
  7|          x        x        x        x
  8|           x        x        x        x
  9|            x        x        x        x
   |________________________________________
Reference: [18]. Note: The Y and X zones were also called the 12 and 11 zones, respectively.
Later, multiple punches were introduced for upper-case letters and special characters [19] . A letter had 2 punches (zone [12,11,0] + digit [1-9]); a special character had 3 punches (zone [12,11,0] + digit [2-4] + 8). With these changes, the information represented in a column by a combination of zones [12, 11] and digits [1-9] was dependent on the use of that column. For example, the combination "12-1" was the letter "A" in an alphabetic column, a plus signed digit "1" in a signed numeric column, or an unsigned digit "1" in a column where the "12" had some other use. The introduction of EBCDIC in 1964 allowed columns with as many as 6 punches (zones [12,11,0,8,9] + digit [1-7]). IBM and other manufacturers used many different 80-column card character encodings. [20] [21]
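The alphanumeric code charted above can be expressed as a small lookup table. The Python sketch below illustrates that zone-plus-digit scheme only; the dictionary and function names are invented for the example and do not correspond to any real library.

# Zone/digit punch combinations for the alphanumeric card code charted above.
# Zones: 12 (Y), 11 (X), 0; digits: 1-9. Names here are illustrative only.
PUNCH_CODE = {"&": [12], "-": [11], "0": [0], "/": [0, 1]}
PUNCH_CODE.update({str(d): [d] for d in range(1, 10)})                   # digits 1-9
PUNCH_CODE.update({chr(ord("A") + i): [12, i + 1] for i in range(9)})    # A-I: 12 + 1-9
PUNCH_CODE.update({chr(ord("J") + i): [11, i + 1] for i in range(9)})    # J-R: 11 + 1-9
PUNCH_CODE.update({chr(ord("S") + i): [0, i + 2] for i in range(8)})     # S-Z: 0 + 2-9

def punches_for(text):
    """Return the list of punched rows for each character of a card field."""
    return [PUNCH_CODE[c] for c in text.upper()]

print(punches_for("A1/Z"))   # [[12, 1], [1], [0, 1], [0, 9]]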
[pic]
Binary punched card.
For some computer applications, binary formats were used, where each hole represented a single binary digit (or "bit"), every column (or row) was treated as a simple bitfield, and every combination of holes was permitted. For example, the IBM 711 card reader used with the 704/709/7090/7094 series scientific computers treated every row as two 36-bit words. (The specific 72 columns used were selectable using a control panel, which was almost always wired to select columns 1-72, ignoring the last 8 columns.) Other computers, such as the IBM 1130 or System/360, used every column. The IBM 1402 could be used in "column binary" mode, which stored two characters in every column, or one 36-bit word in three columns.
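The capacity of such a binary card follows directly from its geometry: 12 rows of 80 columns give 960 punch positions, and when only 72 columns are read, each row holds two 36-bit words, or 24 words (864 bits) per card. A brief arithmetic sketch, assuming the column-1-to-72 wiring described above:

ROWS, COLUMNS, USED_COLUMNS, WORD_BITS = 12, 80, 72, 36

full_card_bits = ROWS * COLUMNS              # 960 punch positions on an 80-column card
used_bits = ROWS * USED_COLUMNS              # 864 bits when only columns 1-72 are selected
words_per_row = USED_COLUMNS // WORD_BITS    # 2 thirty-six-bit words per row
words_per_card = ROWS * words_per_row        # 24 words per card

print(full_card_bits, used_bits, words_per_row, words_per_card)  # 960 864 2 24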
As a prank, in binary mode, cards could be punched where every possible punch position had a hole. Such "lace cards" lacked structural strength, and would frequently buckle and jam inside the machine.
The 80-column card format dominated the industry, becoming known as just IBM cards, even though other companies made cards and equipment to process them.
2. 3. Mark sense cards
• Mark sense (Electrographic) cards, developed by Reynold B. Johnson at IBM, had printed ovals that could be marked with a special electrographic pencil. Cards would typically be punched with some initial information, such as the name and location of an inventory item. Information to be added, such as quantity of the item on hand, would be marked in the ovals. Card punches with an option to detect mark sense cards could then punch the corresponding information into the card.
2. 4. Aperture cards
[pic]
Aperture card (details suppressed).
• Aperture cards have a cut-out hole on the right side of the punched card. A 35 mm microfilm chip containing a microform image is mounted in the hole. Aperture cards are used for engineering drawings from all engineering disciplines. Information about the drawing, for example the drawing number, is typically punched and printed on the remainder of the card. Aperture cards have some advantages over digital systems for archival purposes. [22]
2. 5. IBM 51 column punched card format
This IBM card format was a shortened 80-column card; the shortening sometimes accomplished by tearing off, at a perforation, a stub from an 80 column card. These cards were used in some retail and inventory applications.
2. 6. IBM Port-A-Punch
[pic]
IBM Port-A-Punch
According to the IBM Archive: IBM's Supplies Division introduced the Port-A-Punch in 1958 as a fast, accurate means of manually punching holes in specially scored IBM punched cards. Designed to fit in the pocket, Port-A-Punch made it possible to create punched card documents anywhere. The product was intended for "on-the-spot" recording operations—such as physical inventories, job tickets and statistical surveys—because it eliminated the need for preliminary writing or typing of source documents. [23] Unfortunately, the resulting holes were "furry" and sometimes caused problems with the equipment used to read the cards.
A pre-perforated card was used for Monash University's MIDITRAN Language; and the cards would sometimes lose chads in the reader, changing the contents of the student's data or program deck.
2. 7. IBM 96 column punched card format
[pic]
A System/3 punched card.
In the early 1970s IBM introduced a new, smaller, round-hole, 96-column card format along with the IBM System/3 computer. [24] The IBM 5496 Data Recorder, a keypunch machine with print and verify functions, and the IBM 5486 Card Sorter were made for these 96-column cards.
These cards had tiny (1 mm), circular holes, smaller than those in paper tape. Data was stored in six-bit binary-coded decimal code, with three tiers of 32 characters each, or in 8-bit EBCDIC. In the 8-bit format, each column of the top and middle tiers is combined with two punch rows from the bottom tier to form an 8-bit byte, so that each card contains 64 bytes of 8-bit-per-byte binary data. See Winter, Dik T. "96-column Punched Card Code". Retrieved December 23, 2008.
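The stated figures can be checked with a little arithmetic: three tiers of 32 columns give 96 six-bit BCD characters (576 punch positions), while in the 8-bit interpretation described above each column of the top and middle tiers yields one byte, for 64 bytes per card. A small sketch of that arithmetic, using only the numbers given above:

TIERS, COLUMNS_PER_TIER, ROWS_PER_TIER = 3, 32, 6

bcd_characters = TIERS * COLUMNS_PER_TIER          # 96 six-bit BCD characters per card
punch_positions = bcd_characters * ROWS_PER_TIER   # 576 punch positions in total

# 8-bit interpretation (as described above): each top- and middle-tier column
# contributes its own 6 rows plus 2 rows borrowed from the bottom tier.
bytes_per_card = 2 * COLUMNS_PER_TIER              # 64 bytes of 8-bit data

print(bcd_characters, punch_positions, bytes_per_card)   # 96 576 64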
2. 8. Powers/Remington Rand UNIVAC card formats
[pic]
A blank Remington-Rand UNIVAC format card. Card courtesy of MIT Museum.
The Powers/Remington Rand card format was initially the same as Hollerith's: 45 columns and round holes. In 1930 Remington-Rand leap-frogged IBM's 80-column format, introduced in 1928, by coding two characters in each of the 45 columns - producing what is now commonly called the 90-column card [25] . For its character codings, see Winter, Dik T. "90-column Punched Card Code". Retrieved October 20, 2006.
3. IBM punched card manufacturing
IBM's Fred M. Carroll [26] developed a series of rotary type presses that were used to produce the well-known standard tabulating cards, including a 1921 model that operated at 400 cards per minute (cpm). Later, he developed a completely different press capable of operating at speeds in excess of 800 cpm, and it was introduced in 1936. [7] [27] Carroll's high-speed press, containing a printing cylinder, revolutionized the manufacture of punched tabulating cards. [28] It is estimated that between 1930 and 1950, the Carroll press accounted for as much as 25 per cent of the company's profits. [29]
[pic]
A punched card printing plate.
Discarded printing plates from these card presses, each printing plate the size of an IBM card and formed into a cylinder, often found use as desk pen/pencil holders, and even today are collectable IBM artifacts (every card layout [30] had its own printing plate).
IBM initially required that its customers use only IBM-manufactured cards with IBM machines, which were leased, not sold. IBM viewed its business as providing a service and the cards as part of the machine. In 1932 the government took IBM to court on this issue; IBM fought all the way to the Supreme Court and lost, the court ruling that IBM could only set card specifications. In another case, heard in 1955, IBM signed a consent decree requiring, amongst other things, that IBM would by 1962 have no more than one-half of the punched card manufacturing capacity in the United States. Tom Watson Jr.'s decision to sign this decree, where IBM saw the punched card provisions as the most significant point, completed the transfer of power to him from Thomas Watson, Sr. [29]
4. Cultural impact
While punched cards have not been widely used for a generation, the impact was so great for most of the 20th century that they still appear from time to time in popular culture.
For example:
• Sculptor Maya Lin designed a controversial public art installation at Ohio University that looks like a punched card from the air. [31]
• Do Not Fold, Bend, Spindle or Mutilate: Computer Punch Card Art - a mail art exhibit by the Washington Pavilion in Sioux Falls, South Dakota.
• The Red McCombs School of Business at the University of Texas at Austin has artistic representations of punched cards decorating its exterior walls.
• At the University of Wisconsin - Madison, the Engineering Research Building's exterior windows were modeled after a punched card layout, during its construction in 1966. [1]
• In the Simpsons episode Much Apu About Nothing, Apu showed Bart his Ph.D. thesis, the world's first computer tic-tac-toe game, stored in a box full of punched cards.
• In the Futurama episode Mother's Day, as several robots are seen shouting 'Hey hey! Hey ho! 100110!' in protest, one of them burns a punch-card in a manner reminiscent of feminist bra burning. In another episode, Put Your Head On My Shoulder, Bender offers a dating service. He hands characters punch-cards so they can put in what they want, before throwing them in his chest cabinet and 'calculating' the 'match' for the person. Bender is shown 'folding', 'bending', and 'mutilating' the card, accentuating the fact that he is making up the 'calculations'.
A legacy of the 80 column punched card format is that most character-based terminals display 80 characters per row. Even now, the default size for character interfaces such as the command prompt in Windows remains set at 80 columns. Some file formats, such as FITS, still use 80-character card images.
5. Standards
• ANSI INCITS 21-1967 (R2002), Rectangular Holes in Twelve-Row Punched Cards (formerly ANSI X3.21-1967 (R1997)) Specifies the size and location of rectangular holes in twelve-row 3-1/4 inch wide punched cards.
• ANSI X3.11 - 1990 American National Standard Specifications for General Purpose Paper Cards for Information Processing
• ANSI X3.26 - 1980 (R1991) Hollerith Punched Card Code
• ISO 1681:1973 Information processing - Unpunched paper cards - Specification
• ISO 6586:1980 Data processing - Implementation of the ISO 7- bit and 8- bit coded character sets on punched cards. Defines ISO 7-bit and 8-bit character sets on punched cards as well as the representation of 7-bit and 8-bit combinations on 12-row punched cards. Derived from, and compatible with, the Hollerith Code, ensuring compatibility with existing punched card files.
6. Card handling equipment
Creation and processing of punched cards was handled by a variety of devices, including:
• Card readers
• Card punches
• Keypunches
• Unit record equipment
• Voting machines
Punched tape
[pic]
Punched tape or paper tape is a largely obsolete form of data storage, consisting of a long strip of paper in which holes are punched to store data. It was widely used during much of the twentieth century for teleprinter communication, and later as a storage medium for minicomputers and CNC machine tools.
Contents:
1. Origin
2. Tape formats
3. Applications
4. Limitations
5. Advantages
6. Punched tape in art
7. See also
8. External links
1. Origin
[pic]
One type of paper tape punch
The earliest forms of punched tape come from weaving looms and embroidery, where cards with simple instructions about a machine's intended movements were at first fed individually, and later fed as a string of connected cards. (See Jacquard loom).
[pic]
A roll of punched tape
This led to the concept of communicating data not as a stream of individual cards, but one "continuous card", or a tape. Many professional embroidery operations still refer to those individuals who create the designs and machine patterns as "punchers", even though punched cards and paper tape were eventually phased out, after many years of use, in the 1990s.
In 1846 Alexander Bain used punched tape to send telegrams.
2. Tape formats
Data was represented by the presence or absence of a hole in a particular location. Tapes originally had five rows of holes for data. Later tapes had 6, 7 and 8 rows. A row of narrower holes ("sprocket holes") that were always punched served to feed the tape, typically with a wheel with radial pins called a "sprocket wheel." Text was encoded in several ways. The earliest standard character encoding was Baudot, which dates back to the nineteenth century and had 5 holes. Later standards, such as Teletypesetter (TTS), Fieldata and Flexowriter, had 6 holes. In the early 1960s, the American Standards Association led a project to develop a universal code for data processing, which became known as ASCII. This 7-level code was adopted by some teleprinter users, including AT&T (Teletype). Others, such as Telex, stayed with Baudot.
[pic]
The word "Wikipedia" as 7-bit ASCII (without a parity bit or with "space" parity)
2. 1. Chadless Tape
Most tape-punching equipment used solid punches to create holes in the tape. This process inevitably creates "chads", or small circular pieces of paper. Managing the disposal of chads was an annoying and complex problem, as the tiny paper pieces had a distressing tendency to escape and interfere with the other electromechanical parts of the teleprinter equipment.
One variation on the tape punch was a device called a Chadless Printing Reperforator. This machine would punch a received teleprinter signal into tape and print the message on it at the same time, using a printing mechanism similar to that of an ordinary page printer. The tape punch, rather than punching out the usual round holes, would instead punch little U-shaped cuts in the paper, so that no chads would be produced; the "hole" was still filled with a little paper trap-door. By not fully punching out the hole, the printing on the paper remained intact and legible. This enabled operators to read the tape without having to decipher the holes, which would facilitate relaying the message on to another station in the network. Also, of course, there was no "chad box" to empty from time to time. A disadvantage to this mechanism was that chadless tape, once punched, did not roll up well, because the protruding flaps of paper would catch on the next layer of tape, so it could not be rolled up tightly. Another disadvantage, as seen over time, was that there was no reliable way to read chadless tape by optical means employed by later high-speed readers. However, the mechanical tape readers used in most standard-speed equipment had no problem with chadless tape, because it sensed the holes by means of blunt spring-loaded sensing pins, which easily pushed the paper flaps out of the way.
3. Applications
3. 1. Communications
[pic]
Paper tape relay operation at FAA's Honolulu flight service station in 1964
Punched tape was used as a way of storing messages for teletypewriters. Operators typed in the message to the paper tape, and then sent the message at the maximum line speed from the tape.
This permitted the operator to prepare the message "off-line" at the operator's best typing speed, and permitted the operator to correct any error prior to transmission. An experienced operator could prepare a message at 135 WPM (words per minute) or more for short periods.
The line typically operated at 75 WPM, but it operated continuously. By preparing the tape "off-line" and then sending the message with a tape reader, the line could operate continuously rather than depending on continuous "on-line" typing by a single operator. Typically, a single 75 WPM line supported three or more teletype operators working offline.
Tapes punched at the receiving end could be used to relay messages to another station. Large store and forward networks were developed using these techniques.
3. 2. Minicomputers
[pic]
Software on paper tape for the Data General Nova minicomputer.
When the first minicomputers were being released, most manufacturers turned to the existing mass-produced ASCII teletypewriters (primarily the ASR33) as a low-cost solution for keyboard input and printer output. As a side effect punched tape became a popular medium for low cost storage, and it was common to find a selection of tapes containing useful programs in most minicomputer installations. Faster optical readers were also common.
3. 3. Cash registers
National Cash Register (NCR, of Dayton, Ohio) made cash registers around 1970 that punched paper tape. The tape could then be read into a computer, so that not only could sales information be summarized, but billing could also be done on charge transactions.
3. 4. Newspaper Industry
Punched paper tape was used by the newspaper industry until the mid-1970s or later. Newspapers were typically set in hot lead by devices such as a Linotype. With the wire services delivering stories to a device that punched paper tape, the tape could be fed into a paper tape reader on the Linotype, which would cast the lead slugs without the operator having to retype the incoming stories. This also allowed newspapers to use devices such as the Friden Flexowriter to convert typing to lead type via tape. Even after the demise of Linotype/hot lead, many early "offset" devices had paper tape readers on them to produce the news-story copy.
3. 5. Automated machinery
In the 1970s, computer-aided manufacturing equipment often used paper tape. Paper tape was a very important storage medium for computer-controlled wire-wrap machines, for example. A paper tape reader was smaller and much less expensive than Hollerith card or magnetic tape readers. Premium black waxed and lubricated long-fiber papers, and Mylar film tape, were invented so that production tapes for these machines would last longer.
3. 6. Cryptography
Paper tape was the basis of the Vernam cipher, invented in 1917. During the last third of the 20th century, the U.S. National Security Agency used punched paper tape to distribute cryptographic keys. The 8-level paper tapes were distributed under strict accounting controls and were read by a fill device, such as the hand held KOI-18, that was temporarily connected to each security device that needed new keys. NSA has been trying to replace this method with a more secure electronic key management system (EKMS), but paper tape is apparently still being employed.
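The Vernam cipher itself is simply a character-by-character combination (XOR) of the message tape with a key tape of at least the same length; applying the same key tape a second time recovers the message. A minimal sketch of the operation only, not of any keying equipment:

import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key-tape byte.
    Applying the same key again restores the original message."""
    if len(key) < len(data):
        raise ValueError("key tape must be at least as long as the message")
    return bytes(m ^ k for m, k in zip(data, key))

message = b"ATTACK AT DAWN"
key_tape = secrets.token_bytes(len(message))   # one-time key tape
ciphertext = vernam(message, key_tape)
assert vernam(ciphertext, key_tape) == message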
[pic]
Fanfold paper tape.
4. Limitations
The three biggest problems with paper tape were:
• Reliability. It was common practice to follow each mechanical copying of a tape with a manual hole-by-hole comparison.
• Rewinding the tape was difficult and prone to problems. Great care was needed to avoid tearing the tape. Some systems used fanfold paper tape rather than rolled paper tape. In these systems, no rewinding was necessary nor were any fancy supply reel, takeup reel, or tension arm mechanisms required; the tape merely fed from the supply tank through the reader to the takeup tank, refolding itself back into the exact same form as when it was fed into the reader.
• Low information density. Datasets much larger than a few dozen kilobytes are impractical to handle in paper tape format (see the rough calculation after this list).
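A rough calculation illustrates the density problem. The figure of ten characters per inch of tape is an assumption for the example (a common density), not taken from the text above.

CHARS_PER_INCH = 10          # typical punched-tape density (assumed for this example)
INCHES_PER_FOOT = 12

def tape_feet(size_bytes):
    """Approximate length of tape, in feet, needed to hold size_bytes characters."""
    return size_bytes / (CHARS_PER_INCH * INCHES_PER_FOOT)

print(round(tape_feet(64 * 1024)))   # a 64 KB dataset needs roughly 546 feet of tape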
5. Advantages
Punched tape does have some useful properties:
• Longevity. Although many magnetic tapes have deteriorated over time to the point that the data on them has been irretrievably lost, punched tape can be read many decades later, if acid-free paper or Mylar film is used. Some paper can degrade rapidly.
• Human accessibility. The hole patterns can be decoded visually if necessary, and torn tape can be repaired (using special all-hole pattern tape splices). Editing text on a punched tape was achieved by literally cutting and pasting the tape with scissors, glue, or by taping over a section to cover all holes and making new holes using a manual hole punch.
• Magnetic field immunity. In a machine shop full of powerful electric motors, the numerical control programs need to survive the magnetic fields generated by those motors. [1] [2]
6. Punched tape in art
A computing or telecommunications professional depicted in the Monument to the Conquerors of Space in Moscow (1964) holds what appears to be a punched tape with three rows of rectangular holes.
USB flash drive
A USB flash drive consists of a flash memory data storage device integrated with a USB (Universal Serial Bus) 1.1 or 2.0 interface. USB flash drives are typically removable and rewritable, and physically much smaller than a floppy disk. Most weigh less than 30 g (1 oz). [1] Storage capacities in 2010 can be as large as 256 GB, [2] with steady improvements in size and price per capacity expected. Some allow 1 million write or erase cycles [3] [4] and have a 10-year data retention cycle.[citation needed]
[pic]
A 16 GB USB retractable flash drive.
USB flash drives are often used for the same purposes as floppy disks were. They are smaller, faster, have thousands of times more capacity, and are more durable and reliable because of their lack of moving parts. Until approximately 2005, most desktop and laptop computers were supplied with floppy disk drives, but most recent equipment has abandoned floppy disk drives in favor of USB ports.
Flash drives use the USB mass storage standard, supported natively by modern operating systems such as Windows, Mac OS X, Linux, and other Unix-like systems. USB drives with USB 2.0 support can store more data and transfer faster than a much larger optical disc drive and can be read by most other systems such as the PlayStation 3.
Nothing moves mechanically in a flash drive; the term drive persists because computers read and write flash-drive data using the same system commands as for a mechanical disk drive, with the storage appearing to the computer operating system and user interface as just another drive. [4] Flash drives are very robust mechanically.
A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case which can be carried in a pocket or on a key chain, for example. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing plugging into a port on a personal computer, but drives for other interfaces also exist.
Most USB flash drives draw their power from the USB connection, and do not require a battery. Some devices that combine the functionality of a digital audio player with flash-drive-type storage require a battery for the player function.
Contents:
1. Technology
2. History
3. Design and implementation
4. Fake products
5. Uses
6. Advantages and disadvantages
7. Comparison with other portable storage
8. Naming
9. Current and future developments
10. Flash drives for non-USB interfaces
11. See also
12. References
1. Technology
Main articles: Flash memory and USB
Flash memory combines a number of older technologies, with lower cost, lower power consumption and small size made possible by recent advances in microprocessor technology. The memory storage was based on earlier EPROM and EEPROM technologies. These had very limited capacity, were very slow for both reading and writing, required complex high-voltage drive circuitry, and could only be re-written after erasing the entire contents of the chip.
Hardware designers later developed EEPROMs with the erasure region broken up into smaller "fields" that could be erased individually without affecting the others. Altering the contents of a particular memory location involved copying the entire field into an off-chip buffer memory, erasing the field, modifying the data as required in the buffer, and re-writing it into the same field. This required considerable computer support, and PC-based EEPROM flash memory systems often carried their own dedicated microprocessor system. Flash drives are more or less a miniaturized version of this.
The development of high-speed serial data interfaces such as USB made semiconductor memory systems with serially accessed storage viable, and the simultaneous development of small, high-speed, low-power microprocessor systems allowed this to be incorporated into extremely compact systems. Serial access requires far fewer electrical connections for the memory chips than does parallel access, which has simplified the manufacture of multi-gigabyte drives.
Computers access modern flash memory systems very much like hard disk drives, where the controller system has full control over where information is actually stored. The actual EEPROM writing and erasure processes are, however, still very similar to the earlier systems described above.
Many low-cost MP3 players simply add extra software and a battery to a standard flash memory control microprocessor so it can also serve as a music playback decoder. Most of these players can also be used as a conventional flash drive, for storing files of any type.
2. History
2. 1. First commercial product
Trek Technology and IBM began selling the first USB flash drives commercially in 2000. The Singaporean company Trek Technology sold a model under the brand name "ThumbDrive", and IBM marketed the first such drives in North America with its product named the "DiskOnKey", which was developed and manufactured by the Israeli company M-Systems. IBM's USB flash drive became available on December 15, 2000, [5] and had a storage capacity of 8 MB, more than five times the capacity of the then-common floppy disks.
In 2000 Lexar introduced a Compact Flash (CF) card with a USB connection, and a companion card read/writer and USB cable that eliminated the need for a USB hub.
In 2002 Netac Technology, a Shenzhen consumer electronics company which claims to have invented the USB flash drive in the late 1990s, [6] was granted a Chinese patent for the device. [7]
Both Trek Technology and Netac Technology have tried to protect their patent claims. Trek won a Singaporean suit, [8] but a court in the United Kingdom revoked one of Trek's UK patents. [9] While Netac Technology has brought lawsuits against PNY Technologies, [7] Lenovo, [10] aigo, [11] Sony, [12] [13] [14] and Taiwan's Acer and Tai Guen Enterprise Co, [14] most companies that manufacture USB flash drives do so without regard for Trek and Netac's patents.
Phison Electronics Corporation claims to have produced the earliest "USB flash removable disk" dubbed the "Pen Drive" in May 2001. [15] [16]
2. 2. Second generation
Modern flash drives have USB 2.0 connectivity. However, they do not currently use the full 480 Mbit/s (60 MB/s) that the USB 2.0 Hi-Speed specification supports, because of technical limitations inherent in NAND flash. The fastest drives currently available use a dual-channel controller, although they still fall considerably short of the transfer rate possible from a current generation hard disk, or the maximum high speed USB throughput.
File transfer speeds vary considerably, and should be checked before purchase. Speeds may be given in Mbyte per second, Mbit per second, or optical drive multipliers such as "180X" (180 times 150 KiB per second). Typical fast drives claim to read at up to 30 megabytes/s (MB/s) and write at about half that, about 20 times faster than older "USB full speed" devices, which are limited to a maximum speed of 12 Mbit/s (1.5 MB/s).
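These units are easy to confuse, so the conversions are worth making explicit. The sketch below uses only the figures quoted above (1X = 150 KiB/s for optical-drive multipliers, and 8 bits per byte for line rates):

KIB = 1024            # 1 KiB in bytes
MB = 1_000_000        # decimal megabyte, as usually quoted for drive speeds

def optical_x_to_mb_per_s(x):
    """Optical-drive multiplier: 1X = 150 KiB/s."""
    return x * 150 * KIB / MB

def mbit_to_mbyte(mbit_per_s):
    """Convert a line rate in Mbit/s to MB/s (8 bits per byte)."""
    return mbit_per_s / 8

print(f"180X    = {optical_x_to_mb_per_s(180):.1f} MB/s")   # ~27.6 MB/s
print(f"USB 1.1 = {mbit_to_mbyte(12):.1f} MB/s")            # full speed, 12 Mbit/s
print(f"USB 2.0 = {mbit_to_mbyte(480):.1f} MB/s")           # hi-speed ceiling, 60 MB/s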
3. Design and implementation
One end of the device is fitted with a single male type-A USB connector. Inside the plastic casing is a small printed circuit board. Mounted on this board is some power circuitry and a small number of surface-mounted integrated circuits (ICs). Typically, one of these ICs provides an interface to the USB port, another drives the onboard memory, and the other is the flash memory.
Drives typically use the USB mass storage device class to communicate with the host.
|[pic]Internals of a typical USB flash drive |
|1 |USB connector |
|2 |USB mass storage controller device |
|3 |Test points |
|4 |Flash memory chip |
|5 |Crystal oscillator |
|6 |LED |
|7 |Write-protect switch (Optional) |
|8 |Space for second flash memory chip |
3. 1. Essential components
There are typically four parts to a flash drive:
• Male type-A USB connector - provides an interface to the host computer.
• USB mass storage controller - implements the USB mass storage device protocol on the drive side. The controller contains a small microcontroller with a small amount of on-chip ROM and RAM.
• NAND flash memory chip - stores data. NAND flash is typically also used in digital cameras.
• Crystal oscillator - produces the device's main 12 MHz clock signal and controls the device's data output through a phase-locked loop.
3. 2. Additional components
The typical device may also include:
• Jumpers and test pins - for testing during the flash drive's manufacturing or loading code into the microprocessor.
• LEDs - indicate data transfers or data reads and writes.
• Write-protect switches - Enable or disable writing of data into memory.
• Unpopulated space - provides space to include a second memory chip. Having this second space allows the manufacturer to use a single printed circuit board for more than one storage size device.
• USB connector cover or cap - reduces the risk of damage, prevents the ingress of fluff or other contaminants, and improves overall device appearance. Some flash drives use retractable USB connectors instead. Others have a swivel arrangement so that the connector can be protected without removing anything.
• Transport aid - the cap or the body often contains a hole suitable for connection to a key chain or lanyard. Attaching the lanyard to the cap rather than to the body, however, still allows the drive itself to be lost.
• Some drives offer expandable storage via an internal memory card slot, much like a memory card reader. [17] [18]
3. 3. Size and style of packaging
[pic]
Flash drives come in various, sometimes bulky or novelty, shapes and sizes, in this case ikura sushi
Some manufacturers differentiate their products by using elaborate housings, which are often bulky and make the drive difficult to connect to the USB port. Because the USB port connectors on a computer housing are often closely spaced, plugging a flash drive into a USB port may block an adjacent port. Such devices may only carry the USB logo if sold with a separate extension cable.
USB flash drives have been integrated into other commonly carried items such as watches, pens, and even the Swiss Army Knife; others have been fitted with novelty cases such as toy cars or LEGO bricks. The small size, robustness and cheapness of USB flash drives make them an increasingly popular peripheral for case modding.
Heavy or bulky flash drive packaging can make for unreliable operation when plugged directly into a USB port; this can be relieved by a USB extension cable. Such cables are USB-compatible but do not conform to the USB standard. [19] [20]
3. 4. File system
Main article: Flash file system
Most flash drives ship preformatted with the FAT or FAT 32 file system. The ubiquity of this file system allows the drive to be accessed on virtually any host device with USB support. Also, standard FAT maintenance utilities (e.g. ScanDisk) can be used to repair or retrieve corrupted data. However, because a flash drive appears as a USB-connected hard drive to the host system, the drive can be reformatted to any file system supported by the host operating system.
Defragmenting: Flash drives can be defragmented, but this brings little advantage as there is no mechanical head that moves from fragment to fragment. Flash drives often have a large internal sector size, so defragmenting means accessing fewer sectors. Defragmenting shortens the life of the drive by making many unnecessary writes. [21]
Even Distribution: Some file systems are designed to distribute usage over an entire memory device without concentrating usage on any part (e.g. for a directory); this even distribution prolongs the life of simple flash memory devices. Some USB flash drives have this functionality built into the software controller to prolong device life, while others do not; the end user should therefore check the specifications of the device before changing the file system for this reason. [22]
Hard Drive: Sectors are 512 bytes long, for compatibility with hard drives, and the first sector can contain a Master Boot Record and a partition table. Therefore, USB flash units can be partitioned just like hard drives.
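Because a flash drive presents itself to the host as an ordinary block device with 512-byte sectors, its first sector can be inspected exactly as on a hard disk. The Python sketch below reads a drive image, checks the MBR boot signature (0x55AA at offset 510) and lists the four primary partition entries; the image path is a placeholder and the code is an illustration of the on-disk layout, not a recommended tool.

import struct

SECTOR = 512

def read_mbr(path):
    # Read the first 512-byte sector of a drive or image and parse its MBR.
    with open(path, "rb") as f:
        mbr = f.read(SECTOR)
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature found")
    partitions = []
    for i in range(4):                       # four 16-byte entries start at offset 446
        entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
        ptype = entry[4]                     # partition type byte; 0 means unused
        lba_start, sector_count = struct.unpack_from("<II", entry, 8)
        if ptype:
            partitions.append((i, hex(ptype), lba_start, sector_count * SECTOR))
    return partitions

# Hypothetical usage against an image file of a flash drive:
# for index, ptype, start_lba, size_bytes in read_mbr("usbdrive.img"):
#     print(index, ptype, start_lba, round(size_bytes / 1e9, 1), "GB")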
4. Fake products
Fake USB flash drives are fairly common.[citation needed] These are typically low capacity USB drives which are modified so that they emulate larger capacity drives (e.g. a 2 GB drive being marketed as an 8 GB drive). When plugged into a computer, they report themselves as being the larger capacity they were sold as, but when data is written to them, either the write fails, the drive freezes up, or it overwrites existing data. Software tools exist to check and detect fake USB drives.[citation needed]
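The check such tools perform can be approximated by writing a position-dependent pattern across the claimed capacity and reading it back; a drive that silently wraps around or drops writes fails the comparison. The sketch below is a simplified, destructive illustration of that idea (the device path is a placeholder, and running it erases the drive):

import struct

CHUNK = 1024 * 1024   # test the drive in 1 MiB blocks

def verify_capacity(path, claimed_bytes):
    # Stamp every block with its own index, then read everything back.
    # A drive that silently wraps around or drops writes returns mismatching data.
    # WARNING: destructive - this overwrites any data on the target.
    blocks = claimed_bytes // CHUNK
    with open(path, "r+b") as dev:
        for i in range(blocks):
            dev.write(struct.pack("<Q", i) * (CHUNK // 8))
        dev.seek(0)
        for i in range(blocks):
            if dev.read(CHUNK) != struct.pack("<Q", i) * (CHUNK // 8):
                return False, i * CHUNK      # offset of the first block that came back wrong
    return True, blocks * CHUNK

# Hypothetical usage: verify_capacity("/dev/sdX", 8 * 1024**3) for a drive sold as 8 GB.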
5. Uses
[pic]
USB flash drive with an Ubuntu-branded lanyard.
5. 1. Personal data transport
The most common use of flash drives is to transport and store personal files such as documents, pictures and videos. Individuals also store medical alert information on MedicTag flash drives for use in emergencies and for disaster preparation.
5. 2. Secure storage of data, application and software files
With the wide deployment of flash drives in various environments (secured or otherwise), the issue of data and information security remains of the utmost importance. The use of biometrics and encryption is becoming the norm with the need for increased security for data; OTFE systems such as FreeOTFE and TrueCrypt are particularly useful in this regard, as they can transparently encrypt large amounts of data. In some cases a secure USB drive may use a hardware-based encryption mechanism that uses a hardware module instead of software for strongly encrypting data. IEEE 1667 is an attempt to create a generic authentication platform for USB drives; it enjoys the support of Microsoft, with support in Windows 7.
5. 3. System administration
Flash drives are particularly popular among system and network administrators, who load them with configuration information and software used for system maintenance, troubleshooting, and recovery. They are also used as a means to transfer recovery and antivirus software to infected PCs, allowing a portion of the host machine's data to be archived. As the drives have increased in storage space, they have also replaced the need to carry a number of CD ROMs and installers which were needed when reinstalling or updating a system.
5. 4. Application carriers
Flash drives are used to carry applications that run on the host computer without requiring installation. While any standalone application can in principle be used this way, many programs store data, configuration information, etc. on the hard drive and registry of the host computer.
The U3 company works with drive makers (parent company SanDisk as well as others) to deliver custom versions of applications designed for Microsoft Windows from a special flash drive; U3-compatible devices are designed to autoload a menu when plugged into a computer running Windows. Applications must be modified for the U3 platform not to leave any data on the host machine. U3 also provides a software framework for independent software vendors interested in their platform.
Ceedo is an alternative product with the key difference that it does not require Windows applications to be modified in order for them to be carried and run on the drive.
Similarly, other application virtualization solutions and portable application creators, such as VMware ThinApp (for Windows) or RUNZ (for Linux) can be used to run software from a flash drive without installation.
A wide range of portable applications which are all free of charge, and able to run off a computer running Windows without storing anything on the host computer's drives or registry, can be found in the list of portable software.
5. 5. Computer forensics and law enforcement
A recent development for the use of a USB Flash Drive as an application carrier is to carry the Computer Online Forensic Evidence Extractor (COFEE) application developed by Microsoft. COFEE is a set of applications designed to search for and extract digital evidence on computers confiscated from suspects. [23] Forensic software should not alter the information stored on the computer being examined in any way; other forensic suites run from CD-ROM or DVD-ROM, but cannot store data on the media they are run from (although they can write to other attached devices such as external drives or memory sticks).
5. 6. Booting operating systems
Most current PC firmware permits booting from a USB drive, allowing the launch of an operating system from a bootable flash drive. Such a configuration is known as a Live USB.
While a Live USB could be used for general-purpose applications, size and memory wear make them poor choices compared to alternatives. They are more suited to special-purpose or temporary tasks, such as:
• Loading a minimal, hardened kernel for embedded applications (e.g. network router, firewall).
• Bootstrapping an operating system install or disk cloning operation, often across a network.
• Maintenance tasks, such as virus scanning or low-level data repair, without the primary host operating system loaded.
5. 7. Windows Vista and Windows 7 ReadyBoost
In Windows Vista and Windows 7, the ReadyBoost feature allows use of flash drives (up to 4 GB in the case of Windows Vista) to augment operating system memory. [24]
5. 8. Audio players
Many companies make small solid-state digital audio players, essentially producing flash drives with sound output and a simple user interface. Examples include the Creative MuVo, Philips GoGear and the iPod shuffle(First generation). Some of these players are true USB flash drives as well as music players; others do not support general-purpose data storage.
Many of the smallest players are powered by a permanently fitted rechargeable battery, charged from the USB interface.
5. 9. Music storage and marketing
Digital audio files can be transported from one computer to another like any other file, and played on a compatible media player (with caveats for DRM-locked files). In addition, many home Hi-Fi and car stereo head units are now equipped with a USB port. This allows a USB flash drive containing media files in a variety of formats to be played directly on devices which support the format.
Artists have sold or given away USB flash drives, with the first instance believed to be in 2004 when the German band WIZO released the "Stick EP", only as a USB drive. In addition to five high-bitrate MP3s, it also included a video, pictures, lyrics, and guitar tablature. Subsequently artists including Kanye West, [25] Nine Inch Nails, Kylie Minogue [26] and Ayumi Hamasaki [27] have released music and promotional material on USB flash drives. In 2009 a USB drive holding fourteen remastered Beatles albums in both FLAC and MP3 was released.
5. 10. In arcades
In the arcade game In the Groove and more commonly In The Groove 2, flash drives are used to transfer high scores, screenshots, dance edits, and combos throughout sessions. As of software revision 21 (R21), players can also store custom songs and play them on any machine on which this feature is enabled. While use of flash drives is common, the drive must be Linux compatible.
In the arcade games Pump it Up NX2 and Pump it Up NXA, a special produced flash drive is used as a "save file" for unlocked songs, as well as progressing in the WorldMax and Brain Shower sections of the game.
In the arcade game Dance Dance Revolution X, an exclusive USB flash drive was made by Konami for the purpose of the link feature with its Sony PlayStation 2 counterpart. However, any USB flash drive can be used in this arcade game.
5. 11. Brand and product promotion
The availability of inexpensive flash drives has enabled them to be used for promotional and marketing purposes, particularly within technical and computer-industry circles (e.g. technology trade shows). They may be given away for free, sold at less than wholesale price, or included as a bonus with another purchased product.
Usually, such drives will be custom-stamped with a company's logo, as a form of advertising to increase mind share and brand awareness. The drive may be a blank drive, or preloaded with graphics, documentation, web links, Flash animation or other multimedia, and free or demonstration software. Some preloaded drives are read-only; others are configured with a read-only and a writeable partition. Dual-partition drives are more expensive.
Flash drives can be set up to automatically launch stored presentations, websites, articles, and any other software immediately on insertion of the drive using the Microsoft Windows AutoRun feature. [28] Autorunning software this way does not work on all computers, and is normally disabled by security-conscious users.
5. 12. Backup
Some value-added resellers are now using a flash drive as part of small-business turnkey solutions (e.g. point-of-sale systems). The drive is used as a backup medium: at the close of business each night, the drive is inserted, and a database backup is saved to the drive. Alternatively, the drive can be left inserted through the business day, and data regularly updated. In either case, the drive is removed at night and taken offsite. This approach has several advantages:
• This is simple for the end-user, and more likely to be done;
• The drive is small and convenient, and more likely to be carried off-site for safety;
• The drives are less fragile mechanically and magnetically than tapes;
• The capacity is often large enough for several backup images of critical data;
• And flash drives are cheaper than many other backup systems.
It is also easy to lose these small devices, and easy for people without a right to the data to take illicit backups.
6. Advantages and disadvantages
6. 1. Advantages
Data stored on flash drives is impervious to scratches and dust, and flash drives are mechanically very robust, making them suitable for transporting data from place to place and keeping it readily at hand. Most personal computers support USB as of 2009.
Flash drives also store data densely compared to many removable media. In mid-2009, 256 GB drives became available, with the ability to hold many times more data than a DVD or even a Blu-ray disc.
Compared to hard drives, flash drives use little power, have no fragile moving parts, and for low capacities are small and light.
Flash drives implement the USB mass storage device class so that most modern operating systems can read and write to them without installing device drivers. The flash drives present a simple block-structured logical unit to the host operating system, hiding the individual complex implementation details of the various underlying flash memory devices. The operating system can use any file system or block addressing scheme. Some computers can boot up from flash drives.
Some flash drives retain their memory even after being submerged in water, [29] even through a machine wash, although this is not a design feature and not to be relied upon. Leaving the flash drive out to dry completely before allowing current to run through it has been known to result in a working drive with no future problems. Channel Five's Gadget Show cooked a flash drive with propane, froze it with dry ice, submerged it in various acidic liquids, ran over it with a jeep and fired it against a wall with a mortar. A company specializing in recovering lost data from computer drives managed to recover all the data on the drive. [30] All data on the other removable storage devices tested, using optical or magnetic technologies, were destroyed.
6. 2. Disadvantages
Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before failure. [31] [32] This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB) [33] or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive.
Most USB flash drives do not include a write-protect mechanism, although some have a switch on the housing of the drive itself to keep the host computer from writing or modifying data on the drive. Write-protection makes a device suitable for repairing virus-contaminated host computers without risk of infecting the USB flash drive itself.
A drawback to the small size is that they are easily misplaced, left behind, or otherwise lost. This is a particular problem if the data they contain are sensitive (see data security). As a consequence, some manufacturers have added encryption hardware to their drives—although software encryption systems achieve the same thing, and are universally available for all USB flash drives. Others just have the possibility of being attached to keychains, necklaces and lanyards. To protect the USB plug from possible damage or contamination by the contents of a pocket or handbag, and to cover the sharp edge, it is usually fitted with a removable protective cap, or is retractable.
Compared to other portable storage devices such as external hard drives, USB flash drives still have a high price per unit of storage and were, until recently, only available in comparatively small capacities. This balance is changing, but the rate of change is slowing. Hard drives have a higher minimum price, so in the smaller capacities (16 GB and less), USB flash drives are much less expensive than the smallest available hard drives. [34] [35]
7. Comparison with other portable storage
7. 1. Tape
The applications of current data tape cartridges hardly overlap those of flash drives: the media's cost per gigabyte is very low, but the drives and cartridges are expensive in absolute terms; they have very high capacity and very fast transfer speeds, and store data sequentially. While disk-based backup is the primary medium of choice for most companies, tape backup is still popular for taking data off-site for worst-case scenarios. See LTO tapes.
7. 2. Floppy disk
[pic]
Size comparison of a flash drive and a 3.5-inch floppy disk
Floppy disk drives are rarely fitted to modern computers and are obsolete for normal purposes, although internal and external drives can be fitted if required. Floppy disks may be the method of choice for transferring data to and from very old computers that lack USB support or that must boot from floppy disks, and so they are sometimes used to change the firmware on, for example, BIOS chips. Devices with removable storage, such as older Yamaha music keyboards, also depend on floppy disks, which require a computer to process them. Newer devices are built with USB flash drive support.
7. 3. Optical media
The various writable and rewritable forms of CD and DVD are portable storage media supported by the vast majority of computers as of 2008. CD-R, DVD-R, and DVD+R can be written to only once, RW varieties up to about 1,000 erase/write cycles, while modern NAND-based flash drives often last for 500,000 or more erase/write cycles. [36] DVD-RAM discs are the most suitable optical discs for data storage involving much rewriting.
Optical storage devices are among the cheapest methods of mass data storage after the hard drive. They are slower than their flash-based counterparts. Standard 12 cm optical discs are larger than flash drives and more subject to damage. Smaller optical media do exist, such as business card CD-Rs which have the same dimensions as a credit card, and the slightly less convenient but higher capacity 8 cm recordable CD/DVDs. The small discs are more expensive than the standard size, and do not work in all drives.
Universal Disk Format (UDF) version 1.50 and above has facilities to support rewritable discs like sparing tables and virtual allocation tables, spreading usage over the entire surface of a disc and maximising life, but many older operating systems do not support this format. Packet-writing utilities such as DirectCD and InCD are available but produce discs that are not universally readable (although based on the UDF standard). The Mount Rainier standard addresses this shortcoming in CD-RW media by running the older file systems on top of it and performing defect management for those standards, but it requires support from both the CD/DVD burner and the operating system. Many drives made today do not support Mount Rainier, and many older operating systems such as Windows XP and below, and Linux kernels older than 2.6.2, do not support it (later versions do). Essentially CDs/DVDs are a good way to record a great deal of information cheaply and have the advantage of being readable by most standalone players, but they are poor at making ongoing small changes to a large collection of information. Flash drives' ability to do this is their major advantage over optical media.
7. 4. Flash memory cards
Flash memory cards, e.g. Secure Digital cards, are available in various formats and capacities, and are used by many consumer devices. However, while virtually all PCs have USB ports, allowing the use of USB flash drives, memory card readers are not commonly supplied as standard equipment (particularly with desktop computers). Although inexpensive card readers are available that read many common formats, this results in two pieces of portable equipment (card plus reader) rather than one.
Some manufacturers, aiming at a "best of both worlds" solution, have produced card readers that approach the size and form of USB flash drives (e.g. Kingston MobileLite, [37] SanDisk MobileMate. [38] ) These readers are limited to a specific subset of memory card formats (such as SD, microSD, or Memory Stick), and often completely enclose the card, offering durability and portability approaching, if not quite equal to, that of a flash drive. Although the combined cost of a mini-reader and a memory card is usually slightly higher than a USB flash drive of comparable capacity, the reader + card solution offers additional flexibility of use, and virtually "unlimited" capacity.
An additional advantage of memory cards is that many consumer devices (e.g. digital cameras, portable music players) cannot make use of USB flash drives (even if the device has a USB port) whereas the memory cards used by the devices can be read by PCs with a card reader.
7. 5. External hard disk
Main article: External hard disk drive
Particularly with the advent of USB, external hard disks have become widely available and inexpensive. External hard disk drives currently cost less per gigabyte than flash drives and are available in larger capacities. Some hard drives support alternative and faster interfaces than USB 2.0 (e.g. IEEE 1394 and eSATA). For writes and consecutive sector reads (for example, from an unfragmented file) most hard drives can provide a much higher sustained data rate than current NAND flash memory.
Unlike solid-state memory, hard drives are susceptible to damage from shock (e.g., a short fall) and vibration, have limitations on use at high altitude, and, although shielded by their casings, are vulnerable to strong magnetic fields. In terms of overall mass, hard drives are usually larger and heavier than flash drives; however, hard disks sometimes weigh less per unit of storage. Hard disks also suffer from file fragmentation, which can reduce access speed.
7. 6. Obsolete devices
Audio tape cassettes are no longer used for data storage. High-capacity floppy disks (e.g. the Imation SuperDisk) and other drives with removable magnetic media, such as the Iomega Zip and Jaz drives, are now largely obsolete and rarely used. There are products on today's market that emulate these legacy tape and disk drives (SCSI-1/SCSI-2, SASI, magneto-optical, Ricoh ZIP, Jaz, IBM 3590/Fujitsu 3490E and Bernoulli, for example) using modern CompactFlash storage devices such as CF2SCSI.
7. 7. Encryption
As highly portable media, USB flash drives are easily lost or stolen. All USB flash drives can have their contents encrypted using third party disk encryption software such as FreeOTFE and TrueCrypt or programs which can use encrypted archives such as ZIP and RAR. Some of these programs can be used without installation. The executable files can be stored on the USB drive, together with the encrypted file image. The encrypted partition can then be accessed on any computer running the correct operating system, although it may require the user to have administrative rights on the host computer to access data. Some vendors have produced USB flash drives which use hardware based encryption as part of the design, thus removing the need for third-party encryption software.
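As a rough illustration of the software approach described above, the sketch below encrypts a file before it is copied onto the drive, so that a lost or stolen drive exposes only ciphertext. It assumes Python with the third-party cryptography package installed; the file paths and the key-handling strategy are illustrative only, not a recommendation of any particular product or workflow.

from cryptography.fernet import Fernet

def encrypt_to_drive(src_path, dst_path, key):
    # Read the plaintext, encrypt it with an authenticated symmetric cipher,
    # and write only the ciphertext to the flash drive.
    with open(src_path, "rb") as f:
        plaintext = f.read()
    with open(dst_path, "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def decrypt_from_drive(src_path, key):
    # Read the ciphertext back from the drive and return the decrypted bytes.
    with open(src_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

if __name__ == "__main__":
    key = Fernet.generate_key()        # in practice, keep the key off the drive itself
    encrypt_to_drive("report.txt", "/media/usb/report.txt.enc", key)
    print(decrypt_from_drive("/media/usb/report.txt.enc", key)[:40])

In practice the key, or the passphrase protecting it, should be stored somewhere other than the drive; otherwise the encryption adds little protection if the drive is lost.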
Other flash drives allow the user to configure secure and public partitions of different sizes, and offer hardware encryption.
Newer flash drives support biometric fingerprinting to confirm the user's identity. As of mid-2005, this was a costly alternative to standard password protection offered on many new USB flash storage devices. Most fingerprint scanning drives rely upon the host operating system to validate the fingerprint via a software driver, often restricting the drive to Microsoft Windows computers. However, there are USB drives with fingerprint scanners which use controllers that allow access to protected data without any authentication. [39]
Some manufacturers deploy physical authentication tokens in the form of a flash drive. These are used to control access to a sensitive system by containing encryption keys or, more commonly, communicating with security software on the target machine. The system is designed so the target machine will not operate except when the flash drive device is plugged into it. Some of these "PC lock" devices also function as normal flash drives when plugged into other machines.
7. 8. Security threats
Flash drives present a significant security challenge for large organizations. Their small size and ease of use allow unsupervised visitors or employees to store and smuggle out confidential data with little chance of detection. Both corporate and public computers are vulnerable to attackers connecting a flash drive to a free USB port and using malicious software such as keystroke loggers or packet sniffers.
On computers set up to boot from a USB drive, it is possible to use a flash drive containing a bootable portable operating system to access the files of the computer even if it is password protected. The password can then be changed, or it may be possible to crack it with a password-cracking program and gain full control over the computer. Encrypting files provides considerable protection against this type of attack.
USB flash drives may also be used deliberately or unwittingly to transfer malware and autorun worms onto a network.
Some organizations forbid the use of flash drives, and some computers are configured to disable the mounting of USB mass storage devices by users other than administrators; others use third-party software to control USB usage. The use of software allows the administrator to not only provide a USB lock but also control the use of CD-RW, SD cards and other memory devices. This enables companies with policies forbidding the use of USB flash drives in the workplace to enforce these policies. In a lower-tech security solution, some organizations disconnect USB ports inside the computer or fill the USB sockets with epoxy.
7. 9. Security breaches
Examples of security breaches as a result of using USB drives include:
• In the United States:
o A USB drive was stolen with names, grades, and social security numbers of 6,500 former students. [40]
8. Naming
By August 2008, "USB flash drive" had emerged as a common term for these devices, and most major manufacturers [41] use similar wording on their packaging, although potentially confusing alternatives (such as Memory Stick or USB memory key) still occur.
The myriad different brand names and terminology used, in the past and currently, make USB flash drives more difficult for manufacturers to market and for consumers to research. Some commonly-used names actually represent trademarks of particular companies, such as Cruzer, TravelDrive, ThumbDrive, and Disgo.
9. Current and future developments
Semiconductor corporations have worked to reduce the cost of the components in a flash drive by integrating various flash drive functions in a single chip, thereby reducing the part-count and overall package-cost.
Flash drive capacities on the market increase continually. As of 2008, few manufacturers continue to produce models of 256 MB and smaller, and many have started to phase out 512 MB models. High speed has become standard for modern flash drives, and capacities of up to 256 GB came onto the market in 2009.
Lexar is attempting to introduce a USB FlashCard, [42] [43] which would be a compact USB flash drive intended to replace various kinds of flash memory cards. Pretec introduced a similar card, which also plugs into every USB port, but is just one quarter the thickness of the Lexar model. [44] SanDisk has a product called SD Plus, which is a SecureDigital card with a USB connector. [45]
SanDisk has also introduced a new technology to allow controlled storage and usage of copyrighted materials on flash drives, primarily for use by students. This technology is termed FlashCP.
10. Flash drives for non-USB interfaces
See also: Solid-state drive
The majority of flash drives use USB, but some use other interfaces such as IEEE 1394 (FireWire). [46] [47] A theoretical advantage over USB drives is the minimal latency and CPU utilisation of the IEEE 1394 protocol, but in practice, because of the prevalence of USB, the IEEE 1394-based flash drives that have appeared all used old, slow flash memory chips, [48] and as of 2009 no manufacturer sells IEEE 1394 flash drives with modern fast flash memory; the currently available models go up only to 4 GB, [49] 8 GB [47] or 16 GB, depending on the manufacturer. A FireWire flash drive made for a FireWire 400 port cannot be connected to a FireWire 800 port, and vice versa.
In late 2008, flash drives that utilize the eSATA interface became available. One advantage that an eSATA flash drive claims over a USB flash drive is increased data throughput, thereby resulting in faster data read and write speeds. [50] However, using eSATA for flash drives also has some disadvantages. The eSATA connector was designed primarily for use with external hard disk drives that often include their own separate power supply. Therefore, unlike USB, an eSATA connector does not provide any usable electrical power other than what is required for signaling and data transfer purposes. This means that an eSATA flash drive still requires an available USB port or some other external source of power to operate it. Additionally, as of September 2009, eSATA is still a fairly uncommon interface on most home computers, therefore very few systems can currently make use of the increased performance offered via the eSATA interface on such-equipped flash drives. Finally, with the exception of eSATA-equipped laptop computers, most home computers that include one or more eSATA connectors usually locate the ports on the back of the computer case, thus making accessibility difficult in certain situations and complicating insertion and removal of the flash drive.
External hard disk drive
[pic]
An external hard disk drive is a type of hard disk drive that is connected to a computer by a USB cable or other means. Modern entries into the market consist of standard SATA, IDE, or SCSI hard drives in portable disk enclosures with SCSI, USB, IEEE 1394 FireWire, or eSATA interfaces to connect to the host computer.
Contents:
1. History
2. Structure and design
3. Compatibility
4. See also
5. References
6. External links
1. History
[pic]
Apple Lisa with a top-mounted ProFile hard drive.
Main article: History of hard disk drives
The first commercial hard disks were large and cumbersome, were not housed within the computer itself, and therefore fit the definition of an external hard disk. The hard disk platters were stored within protective covers or memory units that sat outside the computer. These hard disks soon evolved to be compact enough to be mounted into bays inside a computer. Early Apple Macintosh computers did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive bay at all), so on those models, external SCSI disks such as the Apple ProFile were the only reasonable option. Early external drives were not as compact or portable as their modern descendants. [1] [2] [3]
By the end of the 20th century, internal drives had become the system of choice for computers running Windows, while external hard drives remained popular for much longer on the Apple Macintosh and other professional workstations that offered external SCSI ports. Apple made such interfaces available by default from 1986 to 1998. The addition of USB and FireWire interfaces to standard personal computers led such drives to become commonplace in the PC market as well. These new interfaces supplanted the more complex and expensive SCSI interfaces, leading to standardization and cost reductions for external hard drives.
2. Structure and design
[pic]
A 6 GB Seagate Pocket hard drive with USB cable extended next to a 2 GB CompactFlash card.
The internal structure of external hard disk drives is similar to that of normal hard disk drives; in fact, they contain a normal hard disk drive mounted in a disk enclosure. In a 2009 Computer Shopper comparison of 5 top external hard drives, the capacities ranged from 160 GB to 4 TB and the cost per gigabyte varied between roughly US$0.16 and US$0.38. [4] As external hard drives retain the platters and moving heads of traditional hard drives, they are much less tolerant of physical shock than flash-based technology (a fact often overlooked by consumers lulled into a false sense of ruggedness by rubberised styling). [5] Larger models often include full-sized 3.5" PATA or SATA desktop hard drives, are available in the same size ranges, and generally carry a similar cost. Pricier models, especially drives with biometric security or multiple interfaces, generally cost considerably more per gigabyte. Smaller, portable 2.5" drives intended for laptops and embedded devices cost slightly more per GB than larger-capacity 3.5" drives. Small MP3 players, previously built around mechanical hard drive technology, are now primarily solid-state, CompactFlash-based devices.
3. Compatibility
Modern external hard drives are compatible with all operating systems that support the relevant interface standards, such as USB mass storage (MSC) or IEEE 1394. These standards are supported by all major modern server and desktop operating systems and many embedded devices. Obsolete systems such as Windows 98 (original edition), [6] Windows NT (any version before Windows 2000), old versions of Linux (kernels older than 2.4), or Mac OS 8.5.1 and earlier do not support them out of the box and may depend on later updates or third-party drivers.
Spindle (computer)
In computer jargon, the spindle of a hard disk is the spinning axle on which the platters are mounted.
In storage engineering, the physical disk drive is often called a "spindle", referencing the spinning parts which limit the device to a single I/O operation at a time and make it the focus of I/O scheduling decisions. The only way to execute more than one disk operation at a time is to add more than one "spindle"; a larger number of independently seeking disks increases parallelism.
[pic]
Solid-state drive
[pic]
This article is about flash-based, DRAM-based and other solid-state drives. For other flash-based solid-state storage, see USB flash drive. For software-based secondary storage, see RAM disk.
[pic]
PCI attached IO Accelerator SSD
[pic]
PCI-E / DRAM / NAND based SSD
A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. An SSD emulates a hard disk drive interface, thus easily replacing it in most applications. An SSD using SRAM or DRAM (instead of flash memory) is often called a RAM-drive, not to be confused with a RAM disk. Recently, NAND-based flash memory has become the standard for most SSDs.
[pic]
An SSD in standard 2.5-inch (64 mm) form-factor
The original usage of the term "solid-state" (from solid-state physics) refers to the use of semiconductor devices rather than electron tubes but, in the present context, has been adopted to distinguish solid-state electronics from electromechanical devices. With no moving parts, solid-state drives are less fragile than hard disks and are also silent (unless a cooling fan is used); as there are no mechanical delays, they usually enjoy low access time and latency.
[pic]
DDR SDRAM based SSD
Contents:
1. Development
2. Architecture and function
3. Comparison of SSD with hard disk drives
4. Commercialization
5. Applications
6. See also
7. References
8. External links
1. Development
The first ferrite-memory SSD devices, or auxiliary memory units as they were called at the time, emerged during the era of vacuum-tube computers, but with the introduction of cheaper drum storage units their use was discontinued. Later, in the 1970s and 1980s, SSDs were implemented in semiconductor memory for early supercomputers of IBM, Amdahl and Cray; [1] however, the prohibitively high price of these built-to-order SSDs meant they were seldom used.
In 1978 StorageTek developed the first modern type of solid-state drive. In the same year, Texas Memory Systems introduced a 16-kilobyte RAM solid-state drive to be used by oil companies for seismic data acquisition [2] . In the mid-1980s Santa Clara Systems introduced BatRam, an array of 1-megabit DIP RAM chips and a custom controller card that emulated a hard disk. The package included a rechargeable battery to preserve the memory chip contents when the array was not powered. The Sharp PC-5000, introduced in 1983, used 128-kilobyte (128 KB) solid-state storage cartridges containing bubble memory. 1987 saw the entry of EMC Corporation into the SSD market, with drives introduced for the mini-computer market. However, EMC exited the business soon after [3] .
RAM "disks" were popular as boot media in the 1980s when hard drives were expensive, floppy drives were slow, and a few systems, such as the Amiga series, the Apple IIgs, and later the Macintosh Portable, supported such booting. Tandy MS-DOS machines were equipped with DOS and DeskMate in ROM, as well. At the cost of some main memory, the system could be soft-rebooted and be back in the operating system in mere seconds instead of minutes. Some systems were battery-backed so contents could persist when the system was shut down.
1. 1. Intermediate
In 1995 M-Systems introduced flash-based solid-state drives. (SanDisk acquired M-Systems in November 2006.) Since then, SSDs have been used successfully as hard disk drive replacements by the military and aerospace industries, as well as other mission-critical applications. These applications require the exceptional mean time between failures (MTBF) rates that solid-state drives achieve, by virtue of their ability to withstand extreme shock, vibration and temperature ranges.
In 2008 low end netbooks appeared with SSDs. In 2009 SSDs began to appear in laptops. [4] [5]
Enterprise Flash drives (EFDs) are designed for applications requiring high I/O performance (Input/Output Operations Per Second), reliability and energy efficiency.
1. 2. Contemporary
At Cebit 2009, OCZ demonstrated a 1 TB flash SSD using a PCI Express x8 interface. It achieves a maximum write speed of 654MB/s and maximum read speed of 712MB/s. [6]
On March 2, 2009, Hewlett-Packard announced the HP StorageWorks IO Accelerator, the world's first enterprise flash drive especially designed to attach directly to the PCI fabric of a blade server. The mezzanine card, based on Fusion-io's ioDrive technology, serves over 100,000 IOPS and up to 800MB/s of bandwidth. HP provides the IO Accelerator in capacities of 80GB, 160GB and 320GB. [7]
2. Architecture and function
An SSD is commonly composed either of volatile DRAM or, more commonly, of non-volatile NAND flash memory. [8]
2. 1. Flash drives
Most SSD manufacturers use non-volatile flash memory to create more rugged and compact devices for the consumer market. These flash memory-based SSDs, also known as flash drives, do not require batteries. They are often packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch). In addition, non-volatility allows flash SSDs to retain memory even during sudden power outages, ensuring data persistence. Flash memory SSDs are slower than DRAM SSDs and some designs are slower than even traditional HDDs on large files, but flash SSDs have no moving parts and thus seek times and other delays inherent in conventional electro-mechanical disks are negligible.
SSD Components:
• Controller: Includes the electronics that bridge the NAND memory components to the SSD I/O interfaces. The controller is an embedded processor that executes firmware-level software and is one of the most important factors of SSD performance. [9]
• Cache: A flash-based SSD uses a small amount of DRAM as a cache, similar to the cache in Hard disk drives. A directory of block placement and wear leveling data is also kept in the cache while the drive is operating.
• Energy storage: Another component in higher-performing SSDs is a capacitor or some form of battery. These are necessary to maintain data integrity so that the data in the cache can be flushed to the drive if power is lost; some may even hold power long enough to maintain data in the cache until power is resumed.
The performance of an SSD can scale with the number of parallel NAND flash chips used in the device. A single NAND chip is relatively slow, due to its narrow (8/16-bit) asynchronous I/O interface and the high latency of basic I/O operations (typical for SLC NAND: ~25 μs to fetch a 4 KB page from the array to the I/O buffer on a read, ~250 μs to commit a 4 KB page from the I/O buffer to the array on a write, ~2 ms to erase a 256 KB block). When multiple NAND devices operate in parallel inside an SSD, the bandwidth scales and the high latencies can be hidden, as long as enough outstanding operations are pending and the load is evenly distributed between devices.
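As a back-of-the-envelope illustration of that scaling, the sketch below turns the per-operation latencies quoted above into rough per-chip and aggregate bandwidth figures. It deliberately ignores interface transfer time, ECC and controller overhead, so the numbers are optimistic upper bounds rather than measurements of any real drive.

PAGE_BYTES = 4 * 1024          # 4 KB page
READ_LATENCY_S = 25e-6         # ~25 us to fetch a page from the array (SLC, read)
PROGRAM_LATENCY_S = 250e-6     # ~250 us to commit a page to the array (SLC, write)

def per_chip_mb_s(latency_s):
    # One page moved per operation; convert bytes/second to MB/s.
    return PAGE_BYTES / latency_s / 1e6

def aggregate_mb_s(latency_s, chips):
    # With enough outstanding operations spread evenly across chips,
    # bandwidth scales roughly linearly with the number of chips.
    return per_chip_mb_s(latency_s) * chips

if __name__ == "__main__":
    for chips in (1, 4, 8, 16):
        print(f"{chips:2d} chips: read ~{aggregate_mb_s(READ_LATENCY_S, chips):6.0f} MB/s, "
              f"write ~{aggregate_mb_s(PROGRAM_LATENCY_S, chips):6.0f} MB/s")

On these assumptions a single chip tops out at roughly 160 MB/s for reads and 16 MB/s for writes, which is why controllers stripe and interleave across many chips, as described next.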
Micron and Intel made faster SSDs by implementing data striping (similar to RAID 0) and interleaving across the flash chips. This allowed the creation of ultra-fast SSDs with 250 MB/s effective read/write speeds. [10]
2. 1. 1. SLC versus MLC
Lower priced drives usually use multi-level cell (MLC) flash memory, which is slower and less reliable than single-level cell (SLC) flash memory. [11] [12] This can be mitigated or even reversed by the internal design structure of the SSD, such as interleaving, changes to writing algorithms [12] , and more excess capacity for the wear-leveling algorithms to work with.
2. 2. DRAM based drive
See also: I-RAM and Hyperdrive (storage)
SSDs based on volatile memory such as DRAM are characterized by ultrafast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of Flash SSDs or traditional HDDs. DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation. (Similar to the hibernate function used in modern operating systems.)
These types of SSD are usually fitted with the same type of DRAM modules used in regular PCs and servers, allowing them to be swapped out and replaced with larger modules.
A secondary computer with a fast network or (direct) Infiniband connection can be used as a RAM-based SSD. [13]
[pic]
Open casing of 2.5-inch traditional hard disk drive (left) and solid-state drive (center)
DRAM based solid-state drives are especially useful on computers that already have the maximum amount of supported RAM. For example, some computer systems built on the x86-32 architecture can effectively be extended beyond the 4 GB limit by putting the paging file or swap file on a SSD. Owing to the bandwidth bottleneck of the bus they connect to, DRAM SSDs cannot read and write data as fast as main RAM can, but they are far faster than any mechanical hard drive. Placing the swap/scratch files on a RAM SSD, as opposed to a traditional hard drive, therefore can increase performance significantly.
3. Comparison of SSD with hard disk drives
A comparison (with benchmarks) of SSDs, Secure Digital High Capacity (SDHC) drives, and hard disk drives (HDDs) is given in the reference. [14]
[pic]
The disassembled components of a hard disk drive (left) and of the PCB and components of a solid-state drive (right)
Comparisons reflect typical characteristics, and may not hold for a specific device.
3. 1. Advantages
• Faster start-up because no spin-up is required.
• Fast random access because there is no "seeking" motion as is required with rotating disk platters and the read/write head and head-actuator mechanism [15]
o Low read latency times for RAM drives. [16] In applications where hard disk seeks are the limiting factor, this results in faster boot and application launch times (see Amdahl's law). [17]
o Consistent read performance because physical location of data is irrelevant for SSDs. [18]
o File fragmentation has a negligible effect: on a hard disk, the slowdown caused by fragmentation comes from the greatly increased head-seek activity when reads or writes are spread across many different locations on the disk, and SSDs have no heads and thus no delays due to head motion (seeking).
• Silent operation due to the lack of moving parts.
• Low capacity flash SSDs have a low power consumption and generate little heat when in use.
• High mechanical reliability, as the lack of moving parts almost eliminates the risk of "mechanical" failure.
• Ability to endure extreme shock, high altitude, vibration and extremes of temperature. [19] [20] This makes SSDs useful for laptops, mobile computers, and devices that operate in extreme conditions (flash). [17]
• Lower weight and size for low-capacity SSDs, although size and weight per unit of storage are still better for traditional hard drives, and microdrives allow up to 20 GB of storage in a CompactFlash form factor. As of 2008, SSDs of up to 256 GB are lighter than hard drives of the same capacity. [19]
• Flash SSDs have up to twice the data density of HDDs (so far, following recent major improvements in SSD density), with drives of up to 1 TB available [21] [22] (currently more than 2 TB is atypical even for HDDs [23]). One example of this advantage is that a portable device such as a smartphone may hold as much data as a typical person's desktop PC.
• Failures occur less frequently while writing/erasing data, which means there is a lower chance of irrecoverable data damage. [24]
• Defragmenting an SSD is unnecessary. Since SSDs are random access by nature and can perform parallel reads on multiple sections of the drive (as opposed to an HDD, which requires a seek for each fragment, assuming a single head assembly), a certain degree of fragmentation is actually better for reads, and wear leveling intrinsically induces fragmentation. [25] In fact, defragmenting an SSD is harmful, since it adds wear for no benefit. [26]
• Can also be configured to smaller form factors and reduced weight. [27]
3. 2. Disadvantages
• Flash-memory drives have limited lifetimes and will often wear out after 1,000,000 to 2,000,000 write cycles (1,000 to 10,000 per cell) for MLC, and up to 5,000,000 write cycles (100,000 per cell) for SLC. [28] [29] [30] [31] Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, a technique called wear leveling (a minimal sketch of the idea follows this list). [32]
• The wear leveling used on flash-based SSDs has security implications. For example, encrypting existing unencrypted data in place on a flash-based SSD cannot be done securely, because wear leveling causes the newly encrypted sectors to be written to physical locations different from the originals, leaving the unencrypted data behind in the original physical locations. For the same reason it is not possible to securely wipe files by overwriting their content on flash-based SSDs.[citation needed] However, drives that support the ATA TRIM command allow for secure file deletion, as deleted blocks are cleaned in the background before being rewritten [33] .
• As of early-2010, SSDs are still more expensive per gigabyte than hard drives. Whereas a normal flash drive is US$2 per gigabyte, hard drives are around US$0.10 per gigabyte for 3.5", or US$0.20 for 2.5".
• The capacity of SSDs is currently lower than that of hard drives. However, flash SSD capacity is predicted to increase rapidly, with drives of 1 TB already released for enterprise and industrial applications. [22] [34] [35] [36] [37]
• Asymmetric read vs. write performance can cause problems with certain functions where the read and write operations are expected to be completed in a similar timeframe. SSDs currently have a much slower write performance compared to their read performance. [38]
• Similarly, SSD write performance is significantly impacted by the availability of free, programmable blocks. Previously written data blocks that are no longer in use can be reclaimed by TRIM; however, even with TRIM, fewer free, programmable blocks translates into reduced performance. [39]
• As a result of wear leveling and write combining, the performance of SSDs degrades with use. [40] [41]
• SATA-based SSDs generally exhibit much slower write speeds. As erase blocks on flash-based SSDs generally are quite large (e.g. 0.5 - 1 megabyte), [11] they are far slower than conventional disks during small writes (write amplification effect) and can suffer from write fragmentation. [42] Modern PCIe SSDs however have much faster write speeds than previously available.
• DRAM-based SSDs (but not flash-based SSDs) require more power than hard disks, when operating; they still use power when the computer is turned off, while hard disks do not. [43]
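The following toy model illustrates the wear-leveling idea referred to in the first bullet above: logical blocks are remapped so that each new write goes to the least-worn free physical block, spreading erase cycles across the whole device. It is a deliberately simplified sketch in Python, not any vendor's actual firmware algorithm, and the block counts are arbitrary.

class WearLevelledFlash:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks      # erases per physical block
        self.mapping = {}                              # logical block -> physical block
        self.free = set(range(physical_blocks))        # currently unmapped physical blocks
        self.data = {}                                 # physical block -> payload

    def write(self, logical, payload):
        # Pick the least-worn free physical block for the new copy of the data.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        old = self.mapping.get(logical)
        if old is not None:
            # The old copy is invalidated; erasing it adds one cycle of wear.
            self.erase_counts[old] += 1
            self.data.pop(old, None)
            self.free.add(old)
        self.mapping[logical] = target
        self.data[target] = payload

    def read(self, logical):
        return self.data[self.mapping[logical]]

if __name__ == "__main__":
    flash = WearLevelledFlash(physical_blocks=8)
    for i in range(100):                     # hammer a single logical block
        flash.write(0, f"rev {i}".encode())
    print(flash.erase_counts)                # wear is spread, not concentrated on one block

Even though only one logical block is rewritten in this model, the erase counts end up distributed fairly evenly across all physical blocks, which is exactly the effect that extends drive life.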
4. Commercialization
4. 1. Cost and capacity
Until recently,[when?] flash based solid-state drives were too costly for widespread use in mobile computing.[citation needed] As flash manufacturers transition from NOR flash to single-level cell (SLC) NAND flash and most recently to multi-level cell (MLC) NAND flash to maximize silicon die usage and reduce associated costs, "solid-state disks" are now being more accurately renamed "solid-state drives" - they have no disks but function as drives - for mobile computing in the enterprise and consumer electronics space. This technological trend is accompanied by an annual 50% decline in raw flash material costs, while capacities continue to double at the same rate. As a result, flash-based solid-state drives are becoming increasingly popular in markets such as notebook PCs and sub-notebooks for enterprises, Ultra-Mobile PCs (UMPC), and Tablet PCs for the healthcare and consumer electronics sectors. Major PC companies have now started to offer such technology.
4. 2. Availability
Solid-state drive (SSD) technology has been marketed to the military and niche industrial markets since the mid-1990s[citation needed].
[pic]
CompactFlash card used as SSD
Along with the emerging enterprise market, SSDs have been appearing in ultra-mobile PCs and a few lightweight laptop systems, adding significantly to the price of the laptop, depending on the capacity, form factor and transfer speeds. As of 2008 some manufacturers have begun shipping affordable, fast, energy-efficient drives priced at $350 to computer manufacturers.[citation needed] For low-end applications, a USB flash drive may be obtained for $10 to $100 or so, depending on capacity, or a CompactFlash card may be paired with a CF-to-IDE or CF-to-SATA converter at a similar cost. Either of these requires that write-cycle endurance issues be managed, either by not storing frequently written files on the drive, or by using a Flash file system. Standard CompactFlash cards usually have write speeds of 7 to 15 megabytes per second while the more expensive upmarket cards claim speeds of up to 40 MB/s.
One of the first mainstream releases of SSD was the XO Laptop, built as part of the 'One Laptop Per Child' project. Mass production of these computers, built for children in developing countries, began in December 2007. These machines use 1024 MiB of SLC NAND flash as primary storage, which is considered more suitable for the harsher-than-normal conditions in which they are expected to be used. Dell began shipping ultra-portable laptops with SanDisk SSDs on April 26, 2007. [4] Asus released the Eee PC subnotebook on October 16, 2007, and after a successful commercial start in 2007 it was expected to ship several million PCs in 2008, with 2, 4 or 8 gigabytes of flash memory. [44] On January 31, 2008, Apple Inc. released the MacBook Air, a thin laptop with an optional 64 GB SSD. The Apple Store price for this option was $999 more than that of an 80 GB 4200 rpm hard disk drive. [5] Another option, the Lenovo ThinkPad X300 with a 64 GB SSD, was announced by Lenovo in February 2008 [45] and is, as of 2008, available to consumers in some countries. On August 26, 2008, Lenovo released the ThinkPad X301 with a 128 GB SSD option, which adds approximately US$200.
[pic]
The Mtron SSD
In late 2008, Sun released the Sun Storage 7000 Unified Storage Systems (codenamed Amber Road), which use both solid state drives and conventional hard drives to take advantage of the speed offered by SSDs and the economy and capacity offered by conventional hard disks. [46]
Dell began to offer optional 256 GB solid state drives on select notebook models in January 2009.
In May 2009 Toshiba launched a laptop with a 512 GB SSD [47] [48] .
In December 2009, Micron Technology announced the world's first SSD using a 6Gbps SATA interface. [49]
As of April 13, 2010, Apple's MacBook and MacBook Pro lines offer optional solid-state drives of up to 512 GB at an additional cost.
4. 3. Quality and performance
SSD is a rapidly developing technology. A January 2009 review of the market by technology reviewer Tom's Hardware concluded that comparatively few of the tested devices showed acceptable I/O performance, with several disappointments, [50] and that Intel (which makes its own SSD controller) still produced the best-performing SSD at that time, a view also echoed by AnandTech. [51] In particular, operations that require many small writes, such as updating log files, are particularly badly affected on some devices, potentially causing the entire host system to freeze for periods of up to one second at a time. [52]
According to Anandtech, this is due to controller chip design issues with a widely used set of components, and at least partly arises because most manufacturers are memory manufacturers only, rather than full microchip design and fabrication businesses — they often rebrand others' products, [53] inadvertently replicating their problems. [54] Of the other manufacturers in the market, Memoright, Mtron, OCZ, Samsung and Soliware were also named positively for at least some areas of testing.
The overall conclusion by Tom's Hardware as of early 2009 was that "none of the [non-Intel] drives were really impressive. They all have significant weaknesses: usually either low I/O performance, poor write throughput or unacceptable power consumption". [50]
The performance of flash SSDs is difficult to benchmark. In a test done by Xssist using IOmeter (4 KB random, 70/30 read/write, queue depth 4), the IOPS delivered by the Intel X25-E 64 GB G1 started at around 10,000, dropped sharply to 4,000 after 8 minutes, and continued to decrease gradually for the next 42 minutes. From around the 50th minute onwards, IOPS varied between 3,000 and 4,000 for the rest of the 8+ hour test run. [55]
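For readers who want to reproduce the flavour of such a test, the sketch below runs a greatly simplified 4 KB random 70/30 read/write workload against a scratch file. It is single-threaded, uses queue depth 1 and no O_DIRECT, and the file path, file size and duration are arbitrary choices, so the absolute numbers will not be comparable with an IOmeter run; it only demonstrates the shape of the measurement.

import os, random, time

BLOCK = 4096
FILE_SIZE = 256 * 1024 * 1024        # 256 MiB scratch file
DURATION_S = 10.0

def run(path):
    if not os.path.exists(path) or os.path.getsize(path) < FILE_SIZE:
        with open(path, "wb") as f:
            f.truncate(FILE_SIZE)    # sparse scratch file; reads return zeros
    blocks = FILE_SIZE // BLOCK
    payload = os.urandom(BLOCK)
    ops = 0
    deadline = time.monotonic() + DURATION_S
    with open(path, "r+b") as f:
        while time.monotonic() < deadline:
            f.seek(random.randrange(blocks) * BLOCK)
            if random.random() < 0.7:            # 70% reads
                f.read(BLOCK)
            else:                                # 30% writes
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())             # push the write all the way to the device
            ops += 1
    return ops / DURATION_S

if __name__ == "__main__":
    print(f"~{run('scratch.bin'):.0f} IOPS")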
OCZ has recently unveiled the Vertex 2 Pro, currently the fastest MLC SSD; with a SandForce controller on board, it performs more or less on a par with the Intel X25-E series SSDs. [56]
An April 2010 test of seven SSDs by Logan Harbaugh, which appeared in Network World, identified a performance problem with consumer-grade SSDs. Dubbed the "write cliff" effect, consumer-grade drives showed dramatic variations in response times under sustained write conditions. This drop-off occurred once the drive had been filled for the first time and the drive's internal garbage collection and wear-leveling routines kicked in. [57]
This only affected the write performance of consumer-grade drives. Enterprise-grade drives avoid the problem by overprovisioning and by employing wear-leveling algorithms that only move data around when the drives are not being heavily utilized. [57]
5. Applications
Flash-based solid-state drives can be used to create network appliances from general-purpose PC hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.
5. 1. Hybrid drive
Main article: Hybrid drive
A hybrid disk uses an SSD as a buffer for a larger hard disk drive. The hard disk may be spun down more of the time if data is available in the SSD.
NAND flash-based hybrid drives offer a potential power saving; however, the typical usage pattern of normal operations also produces cache misses in the NAND flash, leading either to continued spinning of the drive platter or to much longer latency when the drive has to spin up.[citation needed] These devices may be slightly more energy efficient but have not proved to be any better in performance.[citation needed]
DRAM-based SSDs may also work as a buffer cache mechanism (see hybrid RAM drive). When data is written to memory, the corresponding block in memory is marked as dirty, and all dirty blocks can be flushed to the actual hard drive based on the following criteria (a minimal sketch of this policy follows the list):
• Time (e.g., every 10 seconds, flush all dirty data);
• Threshold (when the ratio of dirty data to SSD size exceeds some predetermined value, flush the dirty data);
• Loss of power/computer shutdown.
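The sketch below is a minimal model of that write-back policy in Python: dirty blocks accumulate in the cache and are flushed to the backing hard drive on a timer, when the dirty ratio crosses a threshold, or at shutdown. The 10-second interval and 50% threshold are illustrative defaults, not values taken from any particular product, and backing_store stands in for whatever object actually writes to the disk.

import time

class HybridWriteBackCache:
    def __init__(self, backing_store, capacity_blocks,
                 flush_interval_s=10.0, dirty_ratio_limit=0.5):
        self.backing = backing_store                 # hypothetical object with write(block, data)
        self.capacity = capacity_blocks
        self.flush_interval = flush_interval_s
        self.dirty_ratio_limit = dirty_ratio_limit
        self.dirty = {}                              # block number -> data
        self.last_flush = time.monotonic()

    def write(self, block, data):
        self.dirty[block] = data                     # mark the block dirty in the cache
        if (time.monotonic() - self.last_flush >= self.flush_interval
                or len(self.dirty) / self.capacity >= self.dirty_ratio_limit):
            self.flush()                             # time or threshold criterion

    def flush(self):
        for block, data in sorted(self.dirty.items()):
            self.backing.write(block, data)          # push dirty data to the hard drive
        self.dirty.clear()
        self.last_flush = time.monotonic()

    def shutdown(self):
        self.flush()                                 # loss-of-power / shutdown criterion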
5. 2. Microsoft Windows and exFAT
Versions of Windows prior to Windows 7 are optimized for hard disk drives rather than SSDs. [58] [59] Windows Vista includes ReadyBoost to exploit characteristics of USB-connected flash devices. Windows 7 is optimized for SSDs [60] [61] as well as for hard disks. It includes support for the TRIM command.
Microsoft's exFAT file system is optimized for SSDs. [62] According to Microsoft, "The exFAT file system driver adds increased compatibility with flash media. This includes the following capabilities: Alignment of file system metadata on optimal write boundaries of the device; Alignment of the cluster heap on optimal write boundaries of the device." [63] Support for the new file system is included with Vista Service Pack 1 and Windows 7 and is available as an optional update for Windows XP. [63]
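The alignment idea in the quoted passage can be shown with a few lines of arithmetic: on-disk structures are placed at offsets that are multiples of the device's preferred write unit, which for flash is typically the erase-block size. The 1 MiB boundary and the sample offset below are assumptions chosen for illustration, not values taken from the exFAT specification.

ERASE_BLOCK = 1024 * 1024          # assumed optimal write boundary, in bytes

def align_up(offset, boundary=ERASE_BLOCK):
    # Round offset up to the next multiple of boundary.
    return ((offset + boundary - 1) // boundary) * boundary

if __name__ == "__main__":
    metadata_end = 3 * 1024 * 1024 + 17 * 512      # arbitrary, unaligned end of metadata
    print("cluster heap starts at byte", align_up(metadata_end))   # next 1 MiB boundary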
5. 3. ZFS
Solaris, as of 10u6 (released in October 2008), and recent versions of OpenSolaris and Solaris Express Community Edition, on which OpenSolaris is based, can use SSDs as a performance booster for ZFS. There are two available modes: using an SSD for the ZFS Intent Log (ZIL), which is used every time a synchronous write to the disk occurs, or for the L2ARC (Level 2 Adaptive Replacement Cache), which is used to cache data for reading. When used either alone or in combination, large increases in performance are generally seen. [64]
Write protection
[pic]
Write protection is any physical mechanism that prevents modification or erasure of valuable data on a device. Most commercial software, audio and video is sold pre-protected.
Contents:
1. Examples of Write Protection
2. Write Blocking
3. References
1. Examples of Write Protection
• IBM 1/2 inch magnetic tape reels, introduced in the 1950s, had a circular groove on one side of the reel, into which a soft plastic ring had to be placed in order to write on the tape. (“No ring, no write.”)
• Audio cassettes and VHS videocassettes have tabs on the top/rear edge that can be broken off (uncovered = protected).
• 8 and 5¼ inch floppies can have, respectively, write-protect and write-enable notches on the right side (8″: punched = protected; 5¼″: covered/not present = protected). A common practice with single-sided floppies was to punch a second notch on the opposite side of the disk to enable use of both sides of the media, creating a flippy disk, so called because one originally had to flip the disk over to use the other side.
• 3½ inch floppy disks have a sliding tab in a window on the right side (open = protected).
• Iomega Zip disks were write protected using the IomegaWare software.
• Syquest EZ-drive (135 & 250MB) disks were write protected using a small metal switch on the rear of the disk at the bottom.
• 8mm, Hi8, and DV videocassettes have a sliding tab on the rear edge.
• Iomega ditto tape cartridges had a small sliding tab on the top left hand corner on the front face of the cartridge.
• USB flash drives and most other forms of solid state storage sometimes have a small switch.
• Secure Digital (SD) cards have a write-protect tab on the left side.
These mechanisms are intended to prevent only accidental data loss or attacks by computer viruses. A determined user can easily circumvent them either by covering a notch with adhesive tape or by creating one with a punch as appropriate, or sometimes by physically altering the media transport to ignore the write-protect mechanism.
[pic]
IBM tape reel with white write ring in place, and an extra yellow ring.
Write-protection is typically enforced by the hardware. In the case of computer devices, attempting to violate it will return an error to the operating system, while some tape recorders physically lock the record button when a write-protected cassette is present.
[pic]
From top to bottom: an unprotected Type I cassette, an unprotected Type II, an unprotected Type IV, and a protected Type IV.
2. Write Blocking
Write blocking, a subset of write protection, is a technique used in computer forensics to maintain the integrity of data storage devices. Preventing all write operations to the device, e.g. a hard drive, ensures that it remains unaltered by data recovery methods.
[pic]
A sheet of 5-1/4" floppy disk write protect tabs.
Hardware write blocking was invented by Mark Menz and Steve Bress (US patent 6,813,682 and EU patent EP1,342,145).
Both hardware and software write-blocking methods are used; however, software blocking is generally not as reliable.
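As a minimal sketch of the software side, the Python snippet below opens an evidence device (or an image of it) strictly read-only and computes a SHA-256 hash so that later copies can be verified as unaltered. Opening a device read-only at the application level does not stop the operating system itself from touching it, which is one reason hardware write blockers are preferred; the device path shown is only an example, and reading raw devices normally requires administrator privileges.

import hashlib

def hash_device(path, chunk_size=1024 * 1024):
    # Hash the device or image in 1 MiB chunks; the file is opened read-only,
    # so this code has no write path to the evidence at all.
    digest = hashlib.sha256()
    with open(path, "rb") as dev:
        while True:
            chunk = dev.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(hash_device("/dev/sdb"))     # example device node; adjust for your system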