Chapter I

Introduction to Computer Science


Chapter I Topics

1.1  How Do Computers Work?
1.2  Messages with Morse Code
1.3  Electronic Memory
1.4  Memory and Secondary Storage
1.5  Hardware and Software
1.6  A Brief History of Computers
1.7  What Is Programming?
1.8  A Brief History of Programming Languages
1.9  Summary


1.1 How Do Computers Work?

Human beings do not spend money on expensive items unless those items somehow improve human capabilities. Cars are great. They move faster than humans, they do not get tired, and they keep you comfortable in bad weather. They are expensive, but the expense is worth it. Computers process information, and in many areas they do this processing better than human beings. The three areas in which a computer is superior to a human being are shown in figure 1.1.

Figure 1.1

3 Areas Where Computers Are Superior to Human Beings

   Computers are faster
   Computers are more accurate
   Computers do not forget

You may be quick to accept that computers are faster, but you are not so sure about the other two. Too often you have heard the term computer error, and you also remember hearing about data that was lost in the computer.

Well, let us start our computer lesson right now by clearing up some basic myths. Computers do not make errors. Sure, it is possible for a computer to give erroneous information. However, the computer is nothing but a stupid machine that faithfully, and always accurately, follows instructions. If the instructions given by a human being to a computer are faulty, then the computer will produce errors. At the same time, many so-called computer errors are caused by sloppy data entry. A person who receives an outrageous electric bill is told that the computer created an erroneous bill. True, the computer printed the bill, but not until a data-entry clerk had slipped an extra zero into the amount of electricity used for the previous month.

Perhaps you are still not convinced. After all, what about the situation when a computer breaks down? Won't that cause problems? Broken computers will certainly cause problems. However, a broken computer simply does not work at all. Your computer applications will not run and you are stuck, but the computer does not suddenly start adding 2 + 2 = 5.

You may also have heard that people lose their computer information because of problems with disk drives. Once again this happens, but computer users who keep their computers and diskettes in a proper environment, along with a sensible backup system, do not have such problems.

With everything that we see computers do today, it is not surprising that some people think that computers are also more intelligent than human beings. Yes, computers can do amazing things, but what must be understood before we go on is that COMPUTERS ARE STUPID. They have no intelligence. They also have no creativity. All a computer can do is follow your directions.

Well, you give up. No point arguing with a stupid book that cannot hear you. Fine, the computer is faster, the computer is more accurate, and sure, the computer does not forget. But how is this managed electronically? You know that electricity is incredibly fast, and you have every confidence that the flip of a switch turns on a light or a vacuum cleaner. Today's computers are electronic. Just how does electricity store information? How does a computer perform computations? How does a computer translate keyboard strokes into desirable output?
These are all good questions, and an attempt will be made here to explain the answers in a manner that does not become too technical.


1.2 Messages with Morse Code

Unless you are a Boy Scout or a Navy sailor, you probably have little experience with Morse code. Today's communication is much better than Morse code, but there was a time when Morse code was an incredible invention that allowed very rapid electronic communication.

Imagine the following situation. Somehow, you have managed to connect an electric wire between your friend's home and your own. You both have a buzzer and a push button. Each of you is capable of "buzzing" the other person, and the buzzer makes a noise as long as the button is pressed. You have no money for a microphone, you have no amplifier and you have no speakers. Furthermore, your mean parents have grounded you to your room without use of the telephone. But you do have your wires, your buzzers and your buttons. Can you communicate? You certainly can communicate if you know Morse code or develop a similar system. (We are talking Leon Schram in 1958.)

Morse code is based on a series of short and long signals. These signals can be sounds, lights, or other symbols, but you need some system to translate signals into human communication. Morse code assigns a combination of short and long signals to every letter of the alphabet and every number. Usually, a long signal is three times as long as a short signal. In the diagram shown in figure 1.2, a long signal is shown with a bar and a short signal is indicated by a circle.

Figure 1.2
[Morse code chart: the short/long signal combination for each letter and number]

You and your buddy can now send messages back and forth by pressing the buzzer with long and short sounds. Letters and numbers can be created this way. For instance, the word EXPO would be signaled as follows: E (short), X (long short short long), P (short long long short), O (long long long).

The secret of Morse code is the fact that electricity can be turned on and it can be turned off. This means that a flashlight can send long and short beams of light and a buzzer can send long and short buzzing sounds. With an established code, such as Morse code, we can now send combinations of long and short impulses electronically. Very brief pauses occur between the shorts and longs of a letter. Longer pauses indicate the separation between letters. This basically means that electronically we can send human messages by turning electricity on and off in a series of organized pulses. Does this mean that Samuel Morse invented the computer? No, he did not get credit for starting the computer revolution, but his code does serve as a simple example of how electricity can process letters by translating on-and-off signals into letters and numbers.


1.3 Electronic Memory

Fine, Morse code explains how letters can be translated into electronic impulses. This explains electronic communication, but Morse code does not store any letters. Morse code signals are sent and then they are gone, followed by the next signal. If you doze off, you miss the signal and it is too bad. Luckily, somebody clever invented a special device that printed dots (short signals) and dashes (long signals) on a paper tape as the message was received. Now that explains a paper memory, but we still have not achieved electronic memory.

Suppose you line up a series of light bulbs. How about picking eight bulbs? Each light bulb is capable of being turned on and off. With these 8 light bulbs we can create 2^8 or 256 different combinations. Two diagrams are shown in figure 1.3 below. The first diagram shows on and off. The second diagram uses 1 and 0.
In computer science, 1 means on and 0 means off.

Figure 1.3

   off   on   off   off   on   off   off   off

    0    1    0     0     1    0     0     0

In this particular example, the second and fifth bulbs are on, and all the other bulbs are off. This represents only one of the 256 different combinations. Figure 1.6 will show three more combinations. It certainly is not Morse code, but by using the Morse code example, we can imagine that each of the 256 combinations is assigned to a letter, a number, or some other type of character.

Before we go on, we need to truly understand our own number system. The number system that we use is called the decimal number system or base-10. It is called "base-10" because it has 10 digits (0-9). Rumor has it that people developed a base-10 system because of our ten fingers. Aside from 10 digits, there is something else that is significant about base-10 numbers. Every digit in a base-10 number represents a multiple of a power of 10. Consider the base-10 number 2,345,678 as it is shown in figure 1.4:

Figure 1.4

   10^6        10^5      10^4     10^3    10^2   10^1   10^0
   1,000,000   100,000   10,000   1,000   100    10     1
   2           3         4        5       6      7      8

Mathematically speaking, counting and computing are possible in other bases besides base-10. The number system used by computers is the binary number system or base-2. Only the digits 0 and 1 are used. Remember that modern computers use electricity, which is either on or off. This is perfectly represented with a binary 1 or 0. The first 32 base-2 numbers, with their equivalent base-10 values, are shown in figure 1.5.

Figure 1.5

   Base-10   Base-2        Base-10   Base-2
   0         0             16        10000
   1         1             17        10001
   2         10            18        10010
   3         11            19        10011
   4         100           20        10100
   5         101           21        10101
   6         110           22        10110
   7         111           23        10111
   8         1000          24        11000
   9         1001          25        11001
   10        1010          26        11010
   11        1011          27        11011
   12        1100          28        11100
   13        1101          29        11101
   14        1110          30        11110
   15        1111          31        11111

Now consider these three "8-light-bulb" combinations in figure 1.6. Each of these combinations of on and off light bulbs can be viewed as a base-2 number. In the same way that every digit in a base-10 number represents a multiple of a power of 10, every column in a base-2 number represents a power of 2. The math is identical. The only thing that changed is the base.

Figure 1.6

   2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
   128   64    32    16    8     4     2     1
   0     1     0     0     0     0     0     1

   01000001 (base-2) = 65 (base-10)

Figure 1.6 Continued

   2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
   128   64    32    16    8     4     2     1
   0     1     0     0     0     0     1     0

   01000010 (base-2) = 66 (base-10)

   2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
   128   64    32    16    8     4     2     1
   0     1     0     0     0     0     1     1

   01000011 (base-2) = 67 (base-10)

You are looking at A, B, C on the majority of today's personal computers. By convention, at least the convention of the American Standard Code for Information Interchange (ASCII), the number 65 is used to store the letter A. Combinations 0 through 127 are used for the standard set of characters. The second group, from 128 through 255, is used for the extended set of characters.

Now we are finally getting somewhere. We can use eight lights for each character that needs to be stored. All we have to do is place thousands of light bulbs in a container and you can store bunches of information by using this special binary code. There is another big bonus. Mathematically speaking, computations can be performed in any base. With our clever binary system, we now have a means to store information and make electronic calculations possible as well.

We have now learned that information can be stored in base-2 numbers. Base-2 numbers can store characters by using a system that equates numbers, like the base-2 equivalent of 65, to A. At the same time, mathematical operations now become an electronic reality.
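Since this book uses Java, a tiny sketch can make figure 1.6 concrete. The program below is only an illustration (the class name BinaryDemo is made up for this example) and assumes nothing beyond a standard Java installation. It shows that the bit pattern 01000001 really is the number 65, and that 65 really is the letter A.

    // A minimal sketch that mirrors figure 1.6: the bit pattern 01000001
    // is the base-2 form of 65, which is the code used to store 'A'.
    public class BinaryDemo {
        public static void main(String[] args) {
            int value = Integer.parseInt("01000001", 2);   // read the string as base-2
            System.out.println(value);                     // prints 65
            System.out.println((char) value);              // prints A

            // The same idea in reverse: a character is stored as a number.
            char letter = 'B';
            System.out.println((int) letter);                    // prints 66
            System.out.println(Integer.toBinaryString(letter));  // prints 1000010
        }
    }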
In other words, the magic of on/off switches allows both the electronic storing of information and electronic computation.

It should be noted that in a first-year computer science class, students are not required to convert numbers between bases. You will not be expected to figure out that 201 in base-10 converts to 11001001 in base-2, or vice versa. However, if you are planning a career in technology, especially in the area of networking, then it is definitely an essential skill.

We can also add some terminology here. A single bulb can be on or off, and this single light represents a single digit in base-2, called a binary digit, which is abbreviated as bit. We also want to give a special name to the row of eight light bulbs (bits) that make up one character. This row is called a byte. Keep in mind that byte is not the plural of bit.

There is one problem with ASCII's system of storing each character in a single byte. You only have access to 256 different combinations or characters. This may be fine in the United States, but it is very inadequate for the international community. Unicode is now becoming very popular, and this code stores characters in 2 bytes. The result is 65,536 different possible characters. Java has adopted Unicode, as have many technical organizations. The smaller ASCII code is a subset of Unicode.

Bits, Bytes and Codes

   A bit is a binary digit that is either 0 (off) or 1 (on).
   1 byte = 8 bits
   1 nibble = 4 bits (half a byte)
   1 byte has 2^8 or 256 different numerical combinations.
   2 bytes have 2^16 or 65,536 different numerical combinations.
   ASCII uses one byte to store one character.
   Unicode uses two bytes to store one character.

Early computers did in fact use one vacuum tube for each bit. Very large machines contained thousands of vacuum tubes with thousands of switches that could change the status of the tubes. Miles of wires connected different groups of vacuum tubes to organize the instructions that the computer had to follow. Early computer scientists had to walk around giant computers and physically connect wires to different parts of the computer to create a set of computer instructions.

The incredible advances in computer technology revolve around the size of the bit. In the forties, a bit was a single vacuum tube that burned out very rapidly. Soon large vacuum tubes were replaced by smaller, more reliable vacuum tubes. A pattern was set that would continue for decades. Small is not only smaller; it is also better. The small tube gave way to the pea-sized transistor, which was replaced by the integrated circuit. Bits kept getting smaller and smaller. Today, a mind-boggling quantity of bits fits on a single microchip.

This is by no means a complete story of the workings of a computer. Very, very thick books exist that detail the precise job of every component of a computer. Computer hardware is a very complex topic that is constantly changing. Pick up a computer magazine, and you will be amazed by the new gadgets and the new computer terms that keep popping up. The intention of this brief introduction is to help you understand the essence of how a computer works. Everything revolves around the ability to process enormous quantities of binary code, where each bit holds one of two different states: 1 and 0.


1.4 Memory and Secondary Storage

Electronic appliances used to have complex, dusty interiors with cables running everywhere. Repairing such appliances could be very time consuming.
Appliances, computers included, still get dusty on the inside, but all the complex wires and vacuum tubes are gone. You will now see a series of boards that have hundreds and thousands of coppery lines crisscrossing everywhere. If one of these boards is bad, it is pulled out and replaced with an entirely new board. What used to be loose vacuum tubes, transistors, resistors, capacitors and wires all over the place is now neatly organized on one board. Electronic repair has become much faster and cheaper in the process.

In computers, the main board with all the primary computer components is called the motherboard. Attached to the motherboard are important components that store and control information. These components are made out of chips of silicon. Silicon is a semiconductor, which allows precise control of the flow of electrons. Hence we have the names memory chip, processing chip, etc. We are primarily concerned with the RAM chip, the ROM chip and the CPU chip.

I mentioned earlier that information is stored in binary code as a sequence of ones and zeroes. The manner in which this information is stored is not always the same. Suppose now that you create a group of chips and control the bits on these chips in such a way that you cannot change their values. Every bit on the chip is fixed. Such a chip can have a permanent set of instructions encoded on it. These kinds of chips are found in cars, microwaves, cell phones and many electronic appliances that perform a similar task day after day.

Computers also have chips that store permanent information. Such chips are called Read Only Memory chips or ROM chips. There is a bunch of information in the computer that should not disappear when the power is turned off, and this information should also not be altered if the computer programmer makes some mistake. A ROM chip can be compared to a music CD. You can listen to the music on the CD, but you cannot alter or erase any of the recordings.

Another type of chip stores information temporarily. Once again, information is stored in many bytes, each made up of eight bits, but this information requires a continuous electric current. When the power is gone, so is the information in these chips. Computer users can also alter the information on these chips when they use the computer. Such chips can store the data produced by using the computer, such as a research paper, or they can store the current application being used by the computer. The name of this chip is Random Access Memory chip or RAM chip. Personally, I am not happy with that name. I would have preferred something that implies that the chip is Read and Write, but then nobody asked for my opinion when memory chips were named.

Computer terminology has actually borrowed terms from the metric system. We all remember that a kilometer is 1000 meters and a kilogram is 1000 grams. This is because the metric prefix kilo means 1000. In the same way, a kilobyte is about 1000 bytes. Why did I say "about"? Remember that everything in the computer is based on powers of 2. If you are going to be really technical and picky, a kilobyte is exactly 2^10 or 1024 bytes. For our purposes, 1000 bytes is close enough.
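If you are curious about the difference between the "picky" powers of 2 and the rounded metric values, the short Java sketch below prints both. It is only an illustration; the class name MemoryUnits is made up for this example.

    // Contrasting exact binary sizes with the rounded metric values
    // people use in everyday conversation.
    public class MemoryUnits {
        public static void main(String[] args) {
            long kilobyte = 1L << 10;   // 2^10 = 1,024 bytes
            long megabyte = 1L << 20;   // 2^20 = 1,048,576 bytes
            long gigabyte = 1L << 30;   // 2^30 = 1,073,741,824 bytes

            System.out.println("1 KB = " + kilobyte + " bytes (about 1 thousand)");
            System.out.println("1 MB = " + megabyte + " bytes (about 1 million)");
            System.out.println("1 GB = " + gigabyte + " bytes (about 1 billion)");
        }
    }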
Other metric system prefixes are shown in figure 1.7.

Figure 1.7

Measuring Memory

   KB   Kilo Byte    1 thousand bytes       1,000
   MB   Mega Byte    1 million bytes        1,000,000
   GB   Giga Byte    1 billion bytes        1,000,000,000
   TB   Tera Byte    1 trillion bytes       1,000,000,000,000
   PB   Peta Byte    1 thousand terabytes   1,000,000,000,000,000
   EB   Exa Byte     1 million terabytes    1,000,000,000,000,000,000
   ZB   Zetta Byte   1 billion terabytes    1,000,000,000,000,000,000,000
   YB   Yotta Byte   1 trillion terabytes   1,000,000,000,000,000,000,000,000

Modern computers now have memory that is measured in gigabytes and hard drive space that is measured in terabytes. Kilobytes and megabytes are rapidly fading from computer terminology. Your children will probably be working with petabytes and exabytes. Your grandchildren will probably be working with zettabytes and yottabytes.

The most significant chunk of silicon in your computer is the CPU chip. CPU stands for Central Processing Unit, and this chip is the brains of the computer. You cannot call this chip ROM or RAM. On this tiny little chip are lots of permanent instructions that behave like ROM, and there are also many places where information is stored temporarily in the manner of a RAM chip. The CPU is one busy little chip. You name it, the CPU does the job. A long list of operations could follow here, but the key notion is that you understand that all the processing, calculating and information passing is controlled by the Central Processing Unit. The power, the capabilities, and the speed of your computer are based on your CPU chip more than any other computer component.

Secondary Storage

I just know that you are an alert student. ROM made good sense. RAM also made sense, but you are concerned. If the information in RAM is toast when you turn off the computer . . . then what happens to all the stored information, like your research paper? Oh, I underestimated your computer knowledge. You do know that we have hard drives, diskettes, zip diskettes, tapes, CDs and USB jump drives that can store information permanently.

We have stored information with rust for quite some time. Did I say rust? Yes, I did. Perhaps you feel more comfortable with the term iron oxide. Tiny particles of iron oxide on the surface of a tape or floppy disk are magnetically charged positively or negatively. Saving information for later use may be a completely different process from simply storing it in memory, but the logic is still similar. Please do keep in mind that this information will not disappear when the power is turned off, but it can easily be altered. New information can be stored over the previous information. A magnetic field of some type, like a library security gate, heat in a car, dust in a closet, or peanut butter in a lunch bag can do serious damage to your information.

You might be confused about the currently popular CD-ROMs. You can see that they are external to the computer, but ROM implies Read Only Memory. CDs store enormous amounts of information. The information is permanent and thus behaves like ROM. When you use a CD with a computer, it behaves as if you had added extra ROM to your computer internally. CDs do not use rust; they are far too sophisticated for such a crude process. The CD is coded with areas that reflect and absorb laser light. Once again we can create a code system because we have two different states, on and off.

The on/off state is the driving force of the digital computer. What is digital? Look at your watch. You can see digits, and you see the precise time. There is no fractional time.
A clock with hour, minute and second hands is an analog device. It measures in a continuous fashion. A measuring tape is also analog, as is a speedometer with a rotating needle. What is the beauty of digitizing something? With digital information it is possible to always make a precise copy of the original. It is easy to transfer, store and use digitized information. Entire pictures can be converted to a digitized file and used elsewhere. I am sure you have been in movie theaters where "digital" sound is advertised. So digital is the name of the game. Just remember that not all digitizing is equally fast. The internal memory of the computer is digital and it uses electronics. Accessing a hard disk also involves electronics, but the information is read off a disk that rotates, and only one small part of the disk is "readable" at one time. Accessing a disk drive is much slower than accessing internal memory.


1.5 Hardware and Software

Computer science, like all technical fields, has a huge library of technical terms and acronyms. Volumes can be filled with all kinds of technical vocabulary. Have no fear; you will not be exposed to volumes, but you do need some exposure to the more common terms you will encounter in the computer world. Some of these terms will be used in the following section on the history of computers.

For starters, it is important that you understand the difference between hardware and software. Computer hardware refers to any physical piece of computer equipment that can be seen or touched. Essentially, hardware is tangible. Computer software, on the other hand, is intangible. Software refers to the set of computer instructions that make the computer perform a specific task. These computer instructions, or programs, are usually encoded on some storage device like a CD, jump drive or hard drive. While CDs, jump drives and hard drives are examples of tangible hardware, the programs stored on them are examples of intangible software.

Computer Hardware and Peripheral Devices

There are big, visible hardware items that most students know because such items are difficult to miss. This type of hardware includes the main computer box, the monitor, printer, and scanner. There are additional hardware items that are not quite as easy to detect. It helps to start with the most essential computer components. There is the CPU (Central Processing Unit), which controls the computer operations. The CPU together with the primary memory storage represents the actual computer. Frequently, when people say to move the CPU to some desk, they mean the big box that contains the CPU and computer memory. This "box" is actually a piece of hardware called the system unit, and it contains a lot more than just a bunch of memory chips. There are also many peripheral devices.

What does periphery mean? It means an outer boundary. If the computers are located on the periphery of the classroom, then the computers are located against the walls of the classroom. Computer hardware falls into two categories: internal peripheral devices and external peripheral devices.

External peripheral devices are located outside the computer and connected with some interface, which is usually a cable, but it can also be wireless. The first external peripheral device you see is the monitor. In the old days a monitor was called a CRT (Cathode Ray Tube). This was appropriate for the bulky monitors that looked like old televisions. Today many monitors use LCD (Liquid Crystal Display) or plasma screens.
It is common now for monitors to be 17, 24, or even 32 inches. (Right now, I am actually looking at a 60 inch LED screen as I edit the 2015 version of this chapter.) Things have changed considerably since the days of 10 inch monochrome computer monitors. Other external peripheral devices include a printer, keyboard, mouse, scanner, and jump drive.

There are many internal peripheral devices that are connected to the computer inside the system unit. These devices include the disk drive, CD-ROM drive, hard drive, network interface card and video card.

Computer Software

Computer software provides instructions to a computer. The most important aspect of this course is to learn how to give correct and logical instructions to a computer with the help of a programming language. Software falls into two categories: system software and application software. Usually, students entering high school are already familiar with application software.

Application software refers to the instructions that the computer requires to do something specific for you. The whole reason a computer exists is so that it can assist people in some type of application. If you need to write a paper, you load a word processor. If you need to find the totals and averages of several rows and columns of numbers, you load an electronic spreadsheet. If you want to draw a picture, you load a paint program. Word processors and electronic spreadsheets are the two most common applications for a computer. Currently, there are thousands of other applications available which assist people in every possible area, from completing tax returns to designing a dream home to playing video games. NOTE: People often talk about the "apps" on their cell phone. App is just an abbreviation for application software.

System software refers to the instructions that the computer requires to operate properly. A common term is Operating System (OS). The major operating systems are Windows, UNIX, Linux and the Mac OS. It is important that you understand the operation of your operating system. With an OS you can store, move and organize data. You can install new external devices like printers and scanners. You can personalize your computer with a desktop appearance and color selections. You can execute applications. You can install additional applications. You can also install computer protection against losing data and viruses.


1.6 A Brief History of Computers

All of the technology that you take for granted today came from somewhere. There have been many contributions to computer science, some big and some small, spanning many centuries of history. One could easily write an entire textbook just on computer history. I am not that one. Such a textbook would do little to teach computer programming. It would also be a major snooze inducer for most teenagers. Many young people enjoy working with computers, but listening to a stimulating lecture on the history of computers is another story. Still, it does seem odd to plunge into a computer science course without at least some reference to where this technology came from.

The history of computers will be divided into five eras. Each of these eras begins with a monumental invention that radically changed the way things were done and had a lasting effect on the inventions that followed.

The First Era – Counting Tools

A long time ago some caveman must have realized that counting on fingers and toes was very limiting. People needed a way to represent numbers that were larger than 20.
They started making marks on rocks, carving notches in bones and tying knots in rope. Eventually, mankind found more practical ways not only to keep track of large numbers, but also to perform mathematical calculations with them.

The Abacus, 3000 B.C.

The abacus was originally invented in the Middle East. This rather amazing computing device is still very much used in many Asian countries today. Skilled abacus handlers can get basic arithmetic results just about as fast as you might get with a four-function calculator.

Napier's Bones, 1617

There is a method of multiplication called lattice multiplication that some find simpler than traditional multiplication for large numbers. While no one knows who actually developed this form of multiplication, it was mentioned in Arab texts as early as the 13th century. When I was in elementary school, we only learned traditional multiplication. Today, several methods are taught, including lattice multiplication.

Traditional Multiplication

        65
      x 23
      ----
       195
      1300
      ----
      1495

Lattice Multiplication

           6       5
       +-------+-------+
       | 1   / | 1   / |
    1  |   /   |   /   |  2
       | /   2 | /   0 |
       +-------+-------+
       | 1   / | 1   / |
    4  |   /   |   /   |  3
       | /   8 | /   5 |
       +-------+-------+
           9       5

(Each cell holds the product of its column digit and row digit. Adding along the diagonals, starting at the bottom right, gives the digits 1 4 9 5, so 65 x 23 = 1495.)

A few hundred years later, John Napier, the same man who had recently invented logarithms, designed a more efficient way to do lattice multiplication. He marked strips of ivory with the multiples of the digits 0-9. By placing certain strips, or "bones," next to each other, one could do lattice multiplication with less writing. It was also possible to use Napier's Bones to divide and compute square roots.

Slide Rule, 1622

William Oughtred created the slide rule based on the recent invention of logarithms by John Napier. This device allows sophisticated mathematical calculations. The slide rule was used for centuries until it was replaced by the scientific calculator in the 1970s.

The Second Era – Gear-Driven Devices

More complicated calculations and tasks required more complicated devices. These devices have rotating gears. Since they did not use electricity, they required some form of manual cranking in order to function. One problem with devices that have moving parts is that they wear out and break.

Numerical Calculating Machine, 1642

Blaise Pascal, the same mathematician who is known for Pascal's Triangle, built the Pascaline, the first numerical calculating machine. The inner workings of this device are similar to the tumbler odometers found in old cars. It could perform addition and subtraction. Multiplication could be performed with repeated additions. Even in the early to mid-1970s, a plastic version of this device, called a Pocket Adder, was still being used. This was because a four-function calculator could cost $150 or more back then. To put that into perspective, a plate lunch was 55 cents in 1975. By the end of the 1970s, the cost of a four-function pocket calculator had dropped below $10 and the Pocket Adder disappeared.

Jacquard's Loom, 1805

A loom is a device used to make cloth. Making plain cloth was relatively simple. Making cloth with intricate patterns was very complicated and took a great deal of time, because the loom itself had to be recalibrated continuously. Joseph Jacquard invented a special loom that would accept flexible cards punched with information in such a manner that it was possible to program how the cloth would be woven. This did not require continuous manual recalibration. It was even possible to make a duplicate cloth by feeding the loom the same cards again.
It is one of the first examples of programming.

Analytical Engine, 1833

Charles Babbage designed a machine that could read instructions from a sequence of punched cards, similar to those used in Jacquard's Loom. While Jacquard's Loom was a device dedicated to one specific task, Charles Babbage's Analytical Engine was the first general purpose computing machine. Essentially, this was the first computer. For this reason, he is considered "The Father of Computers". In the 1990s many malls had a video game store called Babbage's, which was named after him. You do not see those stores today because Babbage's was bought out by GameStop.

Programming, 1842

Ada Byron, the Countess of Lovelace, was Charles Babbage's assistant. She was the daughter of the poet Lord Byron and held the title of countess through her marriage to the Earl of Lovelace. Title or not, she was a woman very much ahead of her time. Imagine you had the greatest video game console ever made, but you had no video games for it. A video game console with no video games is essentially a very expensive paperweight. In the same way, a computer that has no software is practically useless. Ada Byron understood this more than 170 years ago. She knew that Charles Babbage's device required instructions, what we would today call programs or software. So, over 170 years ago, before modern computers even existed, she started designing computer programs. In so doing she developed certain programming ideas and techniques that are still used in programming languages today. She is known as "The Mother of Programming" and "The World's First Programmer". Today the programming language Ada is named after her.

The Third Era – Electro-Mechanical Devices

The term electro-mechanical device means the device uses electricity, but still has moving parts. These devices are not yet "fully electronic". That is the next era. Since they do use electricity, manual cranking is no longer needed. Since they still have moving parts, they still break down easily.

Tabulating Machine, 1889

The first US Census took place in 1790. Since then, every 10 years the United States Government counts all of the people in the country. This information is used for various things, like determining how many representatives a state gets in the House of Representatives. It also helps determine how many schools need to be built and where. In 1880, the US Government conducted the 10th decennial census, just as it had done every decade for the past 90 years; however, this census was different. 1880 marked the beginning of the "Great Wave of Immigration". There were many more people in the country. As a result, it took much longer to count everyone. The 1880 census was not finished until 1888. The government realized it had a major problem. The combination of normal population growth with the continued surge of immigration meant even more people would need to be counted for the 1890 census. There was serious concern that they would not be able to finish the 1890 census by 1900.

In 1889, Herman Hollerith came to the rescue with his invention of the Tabulating Machine. This machine used punch cards similar to the flexible cards used in Jacquard's Loom, but smaller. While today you might fill in the bubbles on a SCANTRON form with a #2 pencil to answer a survey, back then you would poke a hole through the punch card. Holes in different positions would indicate things like gender, number of children in the household, and so on.

These cards could be fed into the machine at a rate of about one per second.
Once a card was inserted, metal wires would come down on top of it. Wherever the card had holes, the wires would pass through the card, touch the metal on the other side, and complete a circuit, and the information would be tabulated.

Herman Hollerith's machine was a success. The 1890 census was completed in just one year. Building on his success, in 1896 Hollerith founded the Tabulating Machine Company. Hollerith's machines were soon being used all over the world. In 1911, his firm merged with three other companies to form the Computing Tabulating Recording Company. In 1924, the company was renamed International Business Machines Corporation.

Differential Analyzer, 1931

Harold Locke Hazen and Vannevar Bush, a professor at MIT, built a large-scale computing machine capable of solving differential equations. If you have no idea what "differential equations" are, this is the type of math done in calculus, and it makes things like landing a man on the moon possible. In 1934 a model of the Differential Analyzer was made at Manchester University by Douglas Hartree and Arthur Porter. This model made extensive use of parts from a Meccano building set. While you and your parents may have played with Lego when you were young, your grandparents and your great-grandparents may have played with Meccano or some other erector set. These were building sets that had strips of metal with pre-drilled holes, along with gears, screws, nuts and washers. By using Meccano, they were able to make a less expensive model of the Differential Analyzer that was "accurate enough for the solution of many scientific problems".

Z3, 1941

Konrad Zuse built an electro-mechanical computer capable of automatic computations in Germany during World War II. It was the first functional, programmable, fully automatic digital computer. The Z3 was destroyed in 1943 during the Allied bombing of Berlin. Some people credit Konrad Zuse as the "inventor of the computer".

Mark I, 1944

The first IBM Automatic Sequence Controlled Calculator (ASCC) was dubbed the Mark I by Harvard University. Technically, the first version of many devices is called the "Mark I" by its creators, implying that there will be a new and improved "Mark II" at some point in the future. Today, when most people talk about "The Mark I" they are usually referring to Harvard's Mark I.

This electro-mechanical calculator was 51 feet long and 8 feet tall. It was the first machine that could execute long computations automatically, and it could process numbers that were up to 23 digits long.

Grace Hopper, then a Navy lieutenant, was one of the first programmers of the Mark I. She would make many contributions to the world of computer science, so many in fact that the United States Congress allowed her to stay in the Navy past the mandatory retirement age. She finally retired as an admiral in 1986 at the age of 79.

Mark II, 1947

As one might expect, Harvard University eventually replaced its Mark I with a Mark II. While this computer was faster than the Mark I, that alone would not earn it recognition in this chapter. The Mark II is known for something that has nothing to do with any technological milestone. On September 9, 1947 the Mark II simply stopped working. A technician found the problem. There was a moth stuck in one of the relays. In other words, there was a bug in the computer. He then took a pair of tweezers and removed the moth. He debugged the computer.
The actual moth, taped into the operators' logbook, is now preserved at the Smithsonian Institution.

The 4th Era – Fully Electronic Computers with Vacuum Tubes

This is often referred to as "The First Generation of Computers". Fully electronic computers do not rely on moving parts. This makes them faster and more reliable. However, the vacuum tubes used at the time still had their share of drawbacks. First, vacuum tubes are big and bulky, about the size of a normal light bulb. With 8 of these vacuum tubes the computer could process a single character. In order to process anything of consequence, a computer would need thousands and thousands of vacuum tubes. This is why the early computers were so massive. Imagine how big a machine would need to be if it contained about 17,000 normal-size light bulbs. Not only are vacuum tubes about the same size as a light bulb, they also generate heat and burn out like a light bulb. If a vacuum tube burns out, the computer stops working and the tube needs to be replaced. The heat is a bigger issue. A single light bulb can get pretty hot after a while. Imagine the heat produced by 17,000 light bulbs. At the time, workers complained about unsafe working conditions due to the intense heat.

You may notice a little overlap in the dates between my third and fourth eras. This is because people did not stop making electro-mechanical devices the instant that fully electronic computers were invented.

ABC, 1940

The very first electronic digital computer was invented by John Atanasoff and Clifford Berry at Iowa State University. They called it the Atanasoff Berry Computer or ABC. This device was not a "general purpose computer", nor was it programmable. It was specifically designed to solve systems of linear equations.

Colossus, 1943

This was the first electronic digital computer that was somewhat programmable. It was designed by an engineer named Tommy Flowers, based on the work of Max Newman, a mathematician and code breaker. Over the next couple of years a total of 10 Colossus computers were made. They were used by code breakers in England to help decrypt the secret coded messages of the Germans during World War II.

ENIAC, 1946

The ENIAC (Electronic Numerical Integrator And Computer) was the first electronic general purpose computer. It was invented by John Mauchly and J. Presper Eckert. This computer was twice the size of the Mark I, contained 17,468 vacuum tubes, and was programmed by rewiring the machine. The ENIAC was capable of performing 385 multiplication operations per second. In 1949, John von Neumann and various colleagues used the ENIAC to calculate the first 2,037 digits of pi (the beginning of which is shown below). The process took 70 hours.
This was actually the first time a computer had been used to calculate the value of pi.

   3.14159265358979323846264338327950288419716939937510 . . .
   (and so on, for 2,037 digits)

The ENIAC cost $500,000. Adjusted for inflation, that would be almost $6,000,000 in 2013. Unlike earlier computers like the Z3 and the Colossus, which were military secrets, the public actually knew about the ENIAC. The press called it "The Giant Brain."

EDVAC, 1949

The EDVAC (Electronic Discrete Variable Automatic Computer) was the successor to the ENIAC and was also invented by John Mauchly and J. Presper Eckert. The main improvement in the EDVAC was that it was a stored-program computer. This meant it could store a program in electronic memory. (Earlier computers stored programs on punched tape.) The EDVAC could store about 5.5 kilobytes. Like the ENIAC, this computer also cost about half a million dollars.

UNIVAC I, 1951

The UNIVAC I (UNIVersal Automatic Computer) was the world's first commercially available computer. While the Mark I and the ENIAC were not "for sale", any company with enough money could actually purchase a UNIVAC computer. This computer was mass-produced and commercially successful. The UNIVAC I became famous when it correctly predicted the results of the 1952 presidential election.

The 5th Era – Computers with Transistors/Integrated Circuits

The invention of the transistor changed computers forever. This began what is often referred to as "The Second Generation of Computers". The University of Manchester made the first transistor computer in 1953.
Transistors have certain key advantages over vacuum tubes. First, they are much smaller, which allowed computers to become smaller and cheaper. Second, transistors do not get hot and do not burn out like vacuum tubes. This means we no longer have to deal with the issues of intense heat and replacing burned-out vacuum tubes.

Integrated Circuit, 1958

Jack Kilby, of Texas Instruments, in Richardson, Texas, developed the first integrated circuit. Integrated circuits place multiple transistors on a tiny, thin piece of semiconductor material, often called a chip. Jack Kilby used germanium. Six months later Robert Noyce came up with his own idea for an improved integrated circuit which used silicon. He is now known as "The Mayor of Silicon Valley". Both gentlemen are credited as co-inventors of the integrated circuit. This began a period which is often referred to as "The Third Generation of Computers". As technology improved, we developed the ability to put thousands, then millions, and now billions of transistors on what we now call a microchip. Microchips led to microprocessors, which have an entire CPU on a single chip. The first two came out in 1971: the TMS 1000 and the Intel 4004, a 4-bit microprocessor.

Video Games, 1958/1962

Video games have been around for much longer than most people realize. Your parents probably played video games when they were kids. It is even possible that some of your grandparents played them as well. If you ask most people what the first video game was, they would say Pong, which was made by Atari in 1972. That actually was not the first video game. It was the first successful arcade game.

The first video game was called Tennis for Two. It was created by William Higinbotham and played on a Brookhaven National Laboratory oscilloscope. Since this game did not use an actual computer monitor, some give credit for the first video game to SpaceWar, written by Stephen Russell at MIT in 1962.

What some people do not realize is that video games almost disappeared completely long before you were born. In 1982, one of the biggest blockbuster movies of all time, E.T. the Extra-Terrestrial, came out. Atari wanted a video game based on the movie to be released in time for Christmas. The programmer had just over 5 weeks to create the game. The game was a huge flop, causing Atari to lose millions of dollars. It was not only considered the worst video game ever made, it was cited as the reason for the Video Game Industry Crash of 1983. Soon after, some computer literacy textbooks stated that "the video game fad is dead." Luckily, Nintendo revived the market in 1985 with Super Mario Bros.

In 2012, G4 ranked Super Mario Bros. #1 on its Top 100 Video Games of All Time special for "almost single-handedly rescuing the video game industry".

IBM System/360, 1964

In the early days of computers, what exactly constituted a computer was not clearly defined. The fact that different groups of people had different computer needs did not help. The two biggest groups were the mathematicians and the business people. The mathematicians wanted computers that were good at number crunching. Business people wanted computers that were good at record handling. Companies like IBM would actually make different devices to satisfy the needs of each group. IBM's System/360 changed that by creating a series of compatible computers that covered a complete range of applications. They worked for the mathematicians and the business community alike. All of the computers in this series were compatible.
This means that a program created on one System/360 computer could be transported to and used on another System/360 computer. The different computers in this series sold for different prices based on their speed. System/360 essentially standardized computer hardware. It is also responsible for several other standards, including the 8-bit byte.

Apple II Personal Computer, 1977

In 1976 Steve Jobs and Steve Wozniak created a computer in Steve Jobs' parents' garage. This was the original Apple computer. Only 200 or so of these computers were made and sold as kits. With the profit they were able to form Apple Computer Inc. A year later, they released the much improved Apple II computer. It became the first commercially successful personal computer. Several versions of the Apple II were released over the years. There actually was an Apple III computer released in 1980, but it was not successful. The Apple IIe (enhanced), Apple IIc (compact) and the Apple IIgs (graphics & sound) actually came out after the failed Apple III. For a time, the slogan at Apple Computer Inc. was "Apple II Forever!"

   Apple II     1977
   Apple III    1980
   Apple IIe    1983
   Apple IIc    1984
   Apple IIgs   1986

On January 9, 2007 Apple dropped the "Computer" from its name and simply became Apple Inc. This was due to the company's diversification into the home entertainment and cell phone markets.

Computer Applications, 1979

The first two applications or "apps" available for personal computers (not including video games) were electronic spreadsheets and word processors. Dan Bricklin created VisiCalc, a spreadsheet program, which became the first widespread software to be sold. He initially lived in a hut somewhere in the mountains of Montana and received an average of $30,000 a week for his invention. He did not benefit from the tremendous boom in the spreadsheet market, because his software could not get a patent. Later that same year, MicroPro released WordStar, which became the most popular word processing program of the late seventies and eighties. This was before the age of WYSIWYG (What You See Is What You Get) word processors. Features like bold and italics would show up on the printer, but not on the screen. Since word processors did not yet use a mouse, everything was done by typing combinations of keys.

These applications mean you will never actually have to spread a large sheet of paper over a table to look at hundreds of numbers. You will also never know the horror of having to retype a 30 page report because of a simple mistake on the first page.

[Screenshot of an early word processor edit menu. Note: the caret ( ^ ) symbol in the menu refers to the <Control> key.]

IBM PC, 1981

As far as the business world was concerned, these new personal computers were amusing toys. No one would even think of using a personal computer to do any serious business work. That changed when IBM introduced the IBM PC, a computer with a monochrome monitor and two floppy drives. Hard drives were not yet available for personal computers. IBM's entry into the personal computer market gave the personal computer an image as a serious business tool and not some electronic game-playing machine.

MS-DOS, 1981

IBM decided not to create its own operating system for the personal computing market and instead out-sourced the development of the operating system for its seemingly trivial little personal computer division. Many companies rejected IBM's proposal. Microsoft, an unknown little company run by Bill Gates, agreed to produce the operating system for the IBM Personal Computer.
Over the years, Microsoft grew into a company larger than IBM.

Portability and Compatibility, 1982

The Compaq Portable is known for two things. First, it was the first portable computer. By today's standards it was nothing like a modern laptop. The 28 pound computer was the size of a small suitcase, and looked very much like one as well. The removable bottom was the keyboard, and taking it off revealed a 9 inch monitor and a couple of floppy drives. Second, the Compaq Portable was the first computer to be 100% compatible with the IBM PC.

Macintosh, 1984

Apple started to sell the Apple Macintosh computer, which used a mouse with a GUI (Graphical User Interface) environment. The mouse technology had already been developed earlier by Xerox Corporation, and Apple had actually introduced this technology with its Lisa computer in 1983. The Lisa computer cost $10,000 and was a commercial failure. The "Mac" was the first commercially successful computer with mouse/GUI technology. The computer was introduced to the public with the now famous "1984" commercial during Super Bowl XVIII. This started a trend of having the best commercials air during the Super Bowl. While the Lisa was named after Steve Jobs' daughter, the Macintosh was not named after anyone. A macintosh is a type of apple, and the favorite of Jef Raskin, the leader of the team that designed and built the Macintosh; hence its name.

Windows 1.0, 1985

Originally called Interface Manager, the very first version of Windows was technically an operating environment that acted as a front end while MS-DOS ran in the background. While it used a GUI-type interface, it looked somewhat different from the Macintosh.

Keeping track of the many versions of Windows can be confusing. Today many people use either Windows 7 or 8. The original Windows had several versions, finishing in 1992 with Windows 3.1, but that still technically is part of what is now called "Windows 1". To further complicate matters, Microsoft started releasing different versions of Windows for home and professional use. While the first few versions of the home editions were still based somewhat on MS-DOS, the professional operating systems were based on NT (New Technology). Starting with Windows XP (eXPerience), both home and professional editions are based on NT, even though the "NT" is dropped from their names.

   Version   Home Editions                    Professional / Power User Editions
   1         Windows 1.0 - 3.1                Windows NT 3.1
   2         Windows 95                       Windows NT 3.51
   3         Windows 98                       Windows NT 4.0
   4         Windows Millennium               Windows 2000
   5         Windows XP Home Edition,         Windows XP Professional Edition,
             Windows Home Server              Windows Server 2003, Windows Server 2003 R2
   6         Windows Vista                    Windows Server 2008
   7         Windows 7,                       Windows Server 2008 R2
             Windows Home Server 2011
   8         Windows 8, 8.1,                  Windows Server 2012,
             Windows Phone 8, Windows RT      Windows Server 2012 R2

Windows 95, 1995

Microsoft introduced its second Windows operating system. This time, the GUI was very similar to that of the Macintosh. The appearance of the GUI would not radically change again until Windows 8.

Tianhe-2 Supercomputer, 2013

In November 2012, the fastest computer in the world was officially the Titan supercomputer of the US Department of Energy. However, in June 2013 China's Tianhe-2 took the title. It can perform 33,860,000,000,000,000 floating point operations in one second. This is almost twice as fast as the Titan. Remember that the ENIAC could perform 385 multiplication operations in a second.
If we ignore the fact that a floating point operation is more complicated than a simple multiplication operation, that would still mean the Tianhe-2 is about 88 trillion (88,000,000,000,000) times as fast as the ENIAC.


1.7 What Is Programming?

Computer science is a highly complex field with many different branches of specialties. Traditionally, the introductory courses in computer science focus on programming. So what is programming? Let us start by straightening out some programming misconceptions. Frequently, I have heard the phrase: just a second sir, let me finish programming the computer. I decide to be quiet and not play teacher. The person "programming" the computer is using some type of data processing software. In offices everywhere, clerks are using computers for a wide variety of data processing needs. Now these clerks enter data, retrieve data, rearrange data, and sometimes do some very complex computer operations. However, in most cases they are not programming the computer. Touching a computer keyboard is not necessarily programming.

Think about the word program. At a concert, you are given a program. This concert program lists a sequence of performances. A university catalog includes a program of studies, which is a sequence of courses required for different college majors. You may hear the expression, let us stick with our program, which implies that people should stick to their agreed-upon sequence of actions.

In every case, there seem to be two words said or implied: sequence and actions. There exist many programs all around us, and in many cases the word program or programming is not used. A recipe is a program to cook something. A well-organized recipe will give precise quantities of ingredients, along with a sequence of instructions on how to use those ingredients. Any parent who has ever purchased a some-assembly-required toy has had to wrestle with a sequence of instructions required to make the toy functional. So we should be able to summarize all this programming stuff, apply it to computers and place it in the definition diagram below.

Program Definition

   A program is a sequence of instructions that enables a computer to perform a desired task.

   A programmer is a person who writes a program for a computer.

Think of programming as communicating with somebody who has a very limited vocabulary. Also assume that this person cannot handle any word that is mispronounced or misspelled. Furthermore, any attempt to include a new word, not in the known vocabulary, will fail. Your communication buddy cannot determine the meaning of a new word from the context of a sentence. Finally, it is not possible to use any type of sentence that has a special meaning, slang or otherwise. In other words, kicking the bucket means that some bucket somewhere receives a kick.

A very important point is made here. Students often think very logically, write a fine program, and only make some small error. Frequently, such students, and it might be you or your friends, become frustrated and assume some lack of ability. It is far easier to accept that small errors will be made, and that the computer can only function with totally clear, unambiguous instructions. It is your job to learn this special type of communication.


1.8 A Brief History of Programming Languages

I know what you are thinking. "Didn't we just have a history section?" True, we did, but this one is specifically about programming languages.
1.8 A Brief History of Programming Languages

I know what you are thinking: "Didn't we just have a history section?" True, we did, but this one is specifically about programming languages.

In the earlier history section, it was mentioned that computers like the ENIAC were incredibly difficult to program. Programming the ENIAC required rewiring the machine, and machines like the ENIAC had thousands of wires. While the Mark-I was technically a calculator, simply entering the numbers required manipulating its 1,440 switches. Just entering a program into those early computers was hard enough, but what if the program did not work? On the ENIAC, you would be looking at a sea of thousands of wires, trying to find the one that was plugged into the wrong port. On the Mark-I, maybe you flipped switch #721 up when it should actually be down. Spotting that amidst the other 1,439 switches is not easy.

Machine Language / Machine Code

Programming in Machine Language, a.k.a. Machine Code, means you are directly manipulating the 1s and 0s of the computer's binary language. In some cases, this means you are manipulating the wires of the machine. In other cases, you are flipping switches on and off. Even if you had the ability to "type" the 1s and 0s, machine language would still be incredibly tedious.

Assembly Language and the EDSAC, 1949

Assembly Language was first introduced by the British with the EDSAC (Electronic Delay Storage Automatic Computer). This computer was inspired by the EDVAC and was the second stored-program computer. EDSAC had an assembler called Initial Orders, which used single-letter mnemonic symbols to represent different series of bits. While still tedious, entering a program took less time and fewer errors were made.

"Amazing Grace"

Grace Hopper was mentioned a couple of sections ago as one of the first programmers of the Mark-I. She is also credited with popularizing the term debugging after a couple of colleagues pulled the first literal computer bug (a moth) out of the Mark-II. Her biggest accomplishments are in the area of computer languages. In the 1940s, Grace Hopper did not like the way we were programming computers. The issue was not that it was difficult; the issue was that it was tedious. She knew there had to be a better way. The whole reason computers were invented in the first place was to do the repetitive, tedious tasks that human beings do not want to do. It should be possible to program a computer using English words that make sense, rather than using just 1s and 0s.

Imagine the presidents of the United States and Russia want to have a meeting. The American president speaks English. The Russian president speaks Russian. Who else needs to be at this meeting? They need a translator. Grace Hopper understood this. If she wanted to program a computer with English words, she would need some type of translator to translate those words into the machine language of the computer. Grace Hopper wrote the first compiler (a type of translator) in 1952 for the language A-0. The language itself was not successful because people either did not understand or did not believe what a compiler could do. Even so, A-0 paved the way for several other computer languages that followed, many of which were created in part or in whole by Grace Hopper.

Her immeasurable contributions to computer science have earned her the nickname "Amazing Grace". The Cray XE6 Hopper supercomputer and the USS Hopper Navy destroyer are also named after her.

High-Level Languages and Low-Level Languages

Languages like Machine Language and Assembly Language are considered Low-Level Languages because they function at, or very close to, the level of 1s and 0s.
In contrast, a High-Level Language is a language that uses English words as instructions. BASIC, Pascal, FORTRAN, COBOL, LISP, PL/I and Java are all examples of high-level languages. Do realize that this is not the same English that you would use to speak to your friends. A high-level language consists of a set of specific commands. Each of these commands is very exact in its meaning and purpose. Even so, programming in a high-level language is considerably easier than programming in a low-level language.

It may surprise you that some programmers still write programs in a low-level language like assembly. Why would they do that? Comparing high-level languages and low-level languages is like comparing cars with automatic transmissions and cars with manual transmissions. There is no question that a car with an automatic transmission is easier to drive. Why do car companies still make manual transmission cars? Why do professional racecar drivers drive manual transmission cars? The issue is control. You have greater control over the car with a manual transmission. At the same time, if you do not know what you are doing, you can really mess up your car. It is the same with a low-level language. You have greater control over the computer with a low-level language. At the same time, if you do not know what you are doing, you can really mess up your computer.

We are now going to look at a series of programming languages. Some of these languages have a specific strength. Others are listed because they contributed to computer science in some way.

FORTRAN, 1957

FORTRAN (FORmula TRANslator) was invented by John Backus at IBM. It is the first commercially successful programming language. It was designed for mathematicians, scientists and engineers. FORTRAN was very good at "number crunching", but it could not handle the record processing required for the business world.

LISP, 1958

LISP (LISt Processing) was designed by John McCarthy while he was at MIT. It is known for being one of the languages specifically designed to help develop artificial intelligence. LISP introduced several important programming concepts that are still used in modern programming languages today.

COBOL, 1959

COBOL (COmmon Business Oriented Language) was created for the business community. Grace Hopper was the primary designer of the language. Unlike FORTRAN, COBOL was specifically designed to handle record processing. COBOL became extremely successful when the Department of Defense adopted it as its official programming language.

FORTRAN vs. COBOL, Early 1960s

In the early 1960s, computer design was not yet standardized and was strongly influenced by programmers' languages of choice. FORTRAN programmers wanted computers that were suited for number crunching. COBOL programmers wanted computers that were suited for record handling. Companies like IBM would offer different models for "FORTRAN programmers" and "COBOL programmers". In 1964, the IBM System/360 family of computers standardized hardware and was suitable for both.

PL/I, 1964

After IBM standardized hardware with the System/360, they set out to standardize software as well by creating PL/I (Programming Language 1). This language combined all of the number crunching features of FORTRAN with all of the record handling features of COBOL. The intention was that this language would be "everything for everyone".
The reality was that the FORTRAN programmers did not like the COBOL features, the COBOL programmers did not like the FORTRAN features, and new programmers found the language too complex and overwhelming to learn.

BASIC, 1964

Tom Kurtz and John Kemeny created BASIC (Beginners All-purpose Symbolic Instruction Code) at Dartmouth College. Their intention was that a simple, basic, easy-to-learn language would give non-math and non-science majors the ability to use computers.

The use of BASIC became widespread when personal computers hit the market. The first personal computer was the Altair, which came out in 1975. Early PCs like the Altair and the Apple had very little memory and could not handle big languages like FORTRAN or COBOL, but they were able to handle a small language like BASIC. Most personal computers in the late 1970s and early 1980s were shipped with BASIC. The Altair was shipped with Altair BASIC, a.k.a. Microsoft BASIC. This was actually the first product created by Microsoft.

Pascal, 1969

A number of college professors did not like BASIC because it did not teach proper programming structure; instead, it encouraged quick-and-dirty programming. Niklaus Wirth, a Swiss professor, decided to create a language specifically for the purpose of teaching programming. He named this new language Pascal, after Blaise Pascal. Unlike PL/I, Pascal is a very lean language. It has just enough of the math features of FORTRAN and just enough of the record handling features of COBOL to be functional. In 1983, the College Board adopted Pascal as the first official language for the Advanced Placement® Computer Science Examination.

C, 1972

In 1966, BCPL (Basic Combined Programming Language) was designed at the University of Cambridge by Martin Richards. This language was originally intended for writing compilers. In 1969, Ken Thompson, of AT&T Bell Labs, created a slimmed-down version of BCPL, which was simply referred to as B. In 1972, an improved version of B was released. This could have been called B 2.0 or B 1972 or even B Vista. Instead, they simply decided to call the new language C. In 1973, C was used to rewrite the kernel of the UNIX operating system.

C++, 1983

As the demands for sophisticated computer programs grew, so did the demand for ever more sophisticated computer programming languages. A new era was born with a powerful programming technique called Object Oriented Programming (OOP). Bjarne Stroustrup wanted to create a new language that used OOP, but he did not want programmers to have to learn a new language from scratch. He took the existing, very popular language C and added OOP to it. This new language became C++. In 1997, C++ replaced Pascal as the official language for the AP® Computer Science Exam.

C and C++ are sometimes considered to be medium-level languages. This is because they have the English commands of a high-level language as well as the power of a low-level language. This made C, and later C++, very popular with professional programmers.

Java, 1995

Java was released by Sun Microsystems. It is the first platform-independent computer language. "Platform independence" means that a program created on one computer will work, and have the exact same output, on any computer. For example, if you wrote a Java program that worked on a Dell computer, it would also work on a Toshiba. Not only would the program compile and execute, it would produce the exact same output. Like C++, Java uses Object Oriented Programming.
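Looking back at platform independence for a moment, here is a sketch of what it means in practice. Once the small program below is compiled, the same compiled file can be run on any computer that has Java installed, and it produces exactly the same output on each one. The class name is arbitrary and the program itself is only a small illustration.

// SameEverywhere.java
// Compile once:             javac SameEverywhere.java
// Run on any computer with: java SameEverywhere
// Whether the machine runs Windows, Mac OS, or Linux, the
// output below is exactly the same.
public class SameEverywhere
{
   public static void main(String[] args)
   {
      int sum = 0;
      for (int k = 1; k <= 100; k++)
         sum = sum + k;
      System.out.println("1 + 2 + ... + 100 = " + sum);   // always prints 5050
   }
}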
There are many other similarities between Java and C++, but there is one major difference. C++ was designed to be backwardly compatible with the original C. This means that in C++ you have a choice: you can decide to use OOP or not to use OOP. Java does not give you that choice; you must use OOP. The College Board likes OOP and wants computer science students to learn OOP. For this reason, in 2003, Java replaced C++ as the official language for the AP® Computer Science Exam.

In 2010, Oracle acquired Sun Microsystems. This means that to download the latest version of Java, you need to go to Oracle's website. Java has continued to improve in the same manner as when Sun Microsystems owned the language.

Lego NXT, 2006

A new kind of programming has come about that is very high-level. In this point-and-click programming environment, programmers click on different program blocks. Each block performs a different task. By creating a sequence of these blocks, people can program the computer. In 1998, the Lego Corporation created their first point-and-click language for use with their Lego Mindstorms robots. In 2006, they released their next language and decided to call it NXT. In 2009, NXT 2.1 was released.

1.9 Summary

This has been an introductory hodge-podge chapter. It is awkward to jump straight into computer science without any type of introduction. Students arrive at a first computer science course with a wide variety of technology backgrounds. Some students know little more than keyboarding, Internet access, and the basic word processing skills taught in earlier grades. Other students come to computer science with a sophisticated degree of knowledge that can include a thorough understanding of operating systems and one or more programming languages as well.

The secret of computer information storage and calculation is the binary system. Information is stored in a computer with combinations of base-2 ones and zeros. Individual binary digits (bits) store a one or a zero. 1 means true and 0 means false. A set of eight bits forms one byte. A byte can store one character in memory with ASCII, which allows 256 different characters. The newer, international Unicode uses two bytes to store one character, which allows storage for 65,536 different characters. (A small sketch at the end of this summary confirms these two counts.)

Computers use hardware and software. Hardware peripheral devices are the visible computer components. There are external peripheral devices, such as monitors, keyboards, printers and scanners. There are also internal peripheral devices like disk drives, CD-ROM drives, network interface cards and video cards.

There are two types of software: application software and system software. Application software includes the common applications of word processing and spreadsheets, but also tax return software and video games. System software is something like Windows 8 or UNIX. It runs the computer and allows the user to personalize the computer to his or her needs and organize data.

Sun Microsystems created Java to be a programming language that is portable on many computer platforms, a so-called platform-independent language. They also wanted the language to be compatible with web page development.
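The two character counts above follow directly from the number of bits involved: 8 bits allow 2^8 = 256 different patterns, and 16 bits (two bytes) allow 2^16 = 65,536 different patterns. The small Java sketch below simply has the computer confirm that arithmetic; the class name is arbitrary.

// CharacterCounts.java
// Confirms the character counts mentioned in the summary:
// 8 bits  -> 2^8  = 256    possible patterns (ASCII)
// 16 bits -> 2^16 = 65,536 possible patterns (Unicode, two bytes)
public class CharacterCounts
{
   public static void main(String[] args)
   {
      System.out.println("One byte (8 bits):   " + (int) Math.pow(2, 8)  + " characters");
      System.out.println("Two bytes (16 bits): " + (int) Math.pow(2, 16) + " characters");
   }
}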