The New Media and Communications for a Digital World
Chapter 1
Introduction:
The widespread proliferation of digital communication technologies has changed the future of mass communication. The digital age is changing how people send and receive information at every level.
The transmission of multimedia data is made possible by the digitization of visual and audio information. This is the key point: digital conversion translates the information into a binary format, which allows the data to be processed by a computer just like any other binary information. More importantly, the data can be distributed like any other binary file. No matter what the bits represent, they are still just a string of numbers; there is no difference between the coded information of text, sound, or video. The bits can be distributed together and then interpreted and processed by the receiver. These packets of information carry what are called "headers" that tell the receiver what kind of information the packet contains and how to process it. As a result, a video or multimedia presentation can be acquired anywhere a properly equipped computer or receiver can be connected to a telecommunications system.
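To make the idea of a header concrete, the short Python sketch below builds and reads a toy "packet." The type codes and the five-byte header layout are invented purely for illustration; real systems such as TCP/IP and MIME define their own, far more elaborate formats.

    import struct

    # Invented, simplified packet format: a 1-byte content-type code and a
    # 4-byte payload length, followed by the payload itself.
    TYPE_TEXT, TYPE_AUDIO, TYPE_VIDEO = 1, 2, 3

    def make_packet(content_type, payload):
        header = struct.pack(">BI", content_type, len(payload))
        return header + payload

    def read_packet(packet):
        # The receiver reads the header first to learn how to treat the bits.
        content_type, length = struct.unpack(">BI", packet[:5])
        return content_type, packet[5:5 + length]

    pkt = make_packet(TYPE_TEXT, "Hello, world".encode("ascii"))
    kind, data = read_packet(pkt)
    print(kind, data)    # 1 b'Hello, world'
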
Whether it is widespread dissemination or interpersonal communication, there is one telecommunications system that is forcing the world to change the way it communicates: the World Wide Web. As more people are connected to the Web, the popular media machine will have to adapt. This means that information providers will one day have to make the Web a fundamental part of their dissemination strategy. The new incarnation of information providers will materialize first in the United States.
“In a peculiarly American way we have often sought technical solutions to social problems. Indeed, more than anything else, this tendency defines American information culture…. And so we attach an enormous importance to new machines, especially information machines, layering them with all our hopes and dreams” (Lubar, 1993, p. 10).
American society openly accepts and often encourages technological advances in communications. This usually means that society makes incremental adjustments as some new device is unveiled that makes communication easier, better, clearer, or more fun. In some cases the device becomes an unexpected status symbol as with the cellular phone or pager.
The World Wide Web is much more than a new gadget. It is a method of transmission. This makes it more significant, but there are other recently introduced methods of transmission that some heralded as revolutionary and that have not lived up to the billing. Since Direct Broadcast Satellite television was introduced in the United States it has gained a respectable degree of acceptance, but it has not called into question the future of mass communications and the news media the way the Web has. DBS was just a new way to present the same thing. It can offer more channels, but it is still broadcasting, a one-way flow of information.
The World Wide Web is a two-way form of communication that, as stated in its name, is worldwide. It allows individuals to interact from anywhere on earth there is a connection. As a result, the communications industry will never be the same, and the changes that have come about are only the beginning. Fundamental changes are coming that will alter the methods of information providers in every respect. As technology races forward, the limitations and capabilities of the Web change. News media will have to make adjustments to meet the constraints of the medium and the new demands of the audience. Information providers will have to temper the richness of their presentation to accommodate the limitations of the medium.
The purpose of this thesis is to illustrate the changes that are coming for the way people will receive their news as a result of the advance of digital technologies. It will focus primarily on the World Wide Web as the chief catalyst for the new news media in a digital world. Why should we care? This future news media experience is going to shape the way people see the world around them. Their knowledge and understanding of current events will be shaped by it, for better or worse.
Chapter 2
Basic Explanation of Digital Information
Sampling and Coding
In order to discuss how the Web has changed and will continue to change communications, it is necessary to have an understanding of what digital technology is. The basis for digital technologies is ones and zeros, which means that all digital information is binary. How does this happen? At the most basic level the information is coded this way. In computing, text is converted into binary form by assigning a value to each character using a string of ones and zeros. A standard binary code for the representation of text was established called ASCII, the American Standard Code for Information Interchange. The establishment of a standard protocol for text recognition was essential for the creation of person-to-person, text-based communication on networked computer systems. The ability of computers to interchange data regardless of the type of computer system was fundamental in the rise of modern communications technology (Burger, 1993).
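A short Python sketch shows the idea: each character is mapped to its ASCII value, and that value can be written out as a string of ones and zeros.

    message = "News"
    for ch in message:
        code = ord(ch)                 # the character's ASCII value
        bits = format(code, "08b")     # the same value as eight binary digits
        print(ch, code, bits)
    # N 78 01001110
    # e 101 01100101
    # w 119 01110111
    # s 115 01110011
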
The process is a little more difficult when dealing with sound or video. The visual and auditory information found in nature is analog, meaning that the information is made up of continuous signals with smooth fluctuations. The sound or light can be represented by waves: that which we see is made possible by light waves and that which we hear is created by sound waves. These waves can be digitized through a process of analog-to-digital conversion.
Like ASCII text, audio and video information is also coded for digitization. In this case the information is not assigned an arbitrary code; it is sampled. The sampling process involves recording the frequency and amplitude of the waves at a set interval.

Figure 2.1    Figure 2.2

Sound and video are composed of waves, and these waves are sampled at consistent intervals so that they can be reproduced. The rate of sampling needed to create a satisfying visual or audio playback is based on the limits of human perception and the limits of the hardware. The intended use or audience is also taken into account. The faster the sampling, the more storage the data requires and the more elaborate the equipment must be.
To faithfully reproduce a sound or moving image, the information must be sampled at twice the rate of the highest frequency to be digitally represented. This is known as the Nyquist theorem. What this means is that anything sampled at less than twice its highest frequency may suffer perceivable quality loss. In some cases a slight loss of quality is acceptable when considering limited storage or bandwidth (Burger, 1993).
CD audio is sampled 44,100 times a second (44.1 kHz). This creates the illusion of seamless sound waves. It is a fast sampling rate designed for the highest quality with little regard for storage space. Such a rate is often not practical for computer multimedia applications, but satisfactory sound can be achieved at slower rates (Holsinger, 1994). The creation of new data streaming methods is eliminating these limiting factors and will be discussed in a later chapter.
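The arithmetic behind these figures can be sketched in a few lines of Python. The 20 kHz figure for the upper limit of human hearing and the 16-bit stereo format of CD audio are standard values, not figures taken from the sources cited above.

    # Nyquist: the sampling rate must be at least twice the highest frequency.
    highest_audible_frequency = 20_000        # Hz, roughly the limit of human hearing
    nyquist_rate = 2 * highest_audible_frequency
    print(nyquist_rate)                       # 40000 -- so 44,100 samples/second is safely above it

    # Storage cost of one minute of CD-quality audio (16-bit samples, stereo).
    samples_per_second = 44_100
    bytes_per_sample = 2                      # 16 bits
    channels = 2                              # stereo
    bytes_per_minute = samples_per_second * bytes_per_sample * channels * 60
    print(bytes_per_minute / 1_000_000)       # about 10.6 megabytes per minute
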
The Advantages of Digital Communication
Once digitized, the information is binary. Since the information is based on numbers it can be manipulated like any set of numbers. These numbers are what we know as bits. These bits are what computers process, receive, and transmit. These simple bits have already changed communication as we know it.
One of the characteristics that makes a bit so valuable is its ability to travel. Anyone who has used Email has sent ASCII-coded bits to another location. Bits can travel through a variety of media and they can travel quickly, at close to the speed of light depending on the medium. Compared to analog signals this looks pretty good, yet speed is only one small part of the list of advantages of digital data.
Analog signal transmission, as in traditional television, radio, or phone transmission, is subject to signal attenuation. It is the nature of the waves being transmitted to be susceptible to interference. When data is transmitted digitally some of it can still be lost, but the transmission is much more reliable. In addition, information can be added to the signal to correct any errors that result during transmission. “On your audio CD, one-third of the bits are used for error correction. Similar techniques can be applied to existing television so that each home receives studio-quality broadcast—so much clearer than what you get today that you might mistake it for so-called high definition” (Negroponte, 1995, p. 17). Similar error correction exists on the Web to help ensure that information reaches its destination without loss.
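The idea behind error correction can be illustrated with the simplest possible scheme, a single parity bit. This is only a toy: it detects a one-bit error but cannot repair it, whereas the coding used on audio CDs and in digital broadcasting is far more powerful.

    def add_parity(bits):
        # Append one redundant bit so the total number of 1s is even.
        return bits + [sum(bits) % 2]

    def check_parity(bits_with_parity):
        # True if no single-bit error is detected.
        return sum(bits_with_parity) % 2 == 0

    word = [1, 0, 1, 1, 0, 1, 0, 0]
    sent = add_parity(word)
    print(check_parity(sent))      # True -- the word arrived intact

    sent[3] ^= 1                   # flip one bit to simulate interference
    print(check_parity(sent))      # False -- the error is detected
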
The transmission of digital data does not require as much bandwidth as analog signals, which means that information can be sent down a smaller pipe. An example of this is the difference between an analog television signal and a digital one. The spectrum allocation for one channel of analog television can be filled with several digital channels that can offer a better picture. On the Web, the rules of bandwidth are the same. The amount of time necessary to receive a file depends on the size of the file and the available bandwidth. The ability to compress the data helps to make digital information very valuable.
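The relationship between file size, bandwidth, and waiting time is simple arithmetic, sketched below. The file size and the 28.8 kbps modem speed are illustrative assumptions typical of the period, not measurements.

    file_size_bytes = 200_000                 # a modest image file, about 200 KB
    modem_bits_per_second = 28_800            # a 28.8 kbps modem

    transfer_seconds = (file_size_bytes * 8) / modem_bits_per_second
    print(round(transfer_seconds, 1))         # roughly 55.6 seconds

    # Halving the file size through compression halves the wait.
    print(round(transfer_seconds / 2, 1))     # roughly 27.8 seconds
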
Consider this information in a bigger sense. Anything that can be represented by bits can now be distributed reliably around the world at amazing speed. The dissemination or transport of this information is inexpensive. And if the information is traveling via the Web, it never has to leave the atmosphere to find a satellite. This makes digital transfer very attractive for all sorts of applications, especially to those in the information business. The transmission process is limited only by the speed of the processor, the capacity of the transfer media, and the available bandwidth. If the amount of data can be reduced, then faster transmission can be achieved. Hence the creation of data compression, which will be discussed later. First, let's discuss how this grand network we call the World Wide Web got started.
Chapter 3
Dissemination of Information:
The History of the Internet and the World Wide Web
All of these discussions about digital information are important because all of the information on the Web is digital. The World Wide Web is the network through which computers around the world can send or receive these digital signals. It all started in the United States as a military project. In 1968 the Department of Defense needed a communications network that would be able to function even if portions of the network were suddenly eliminated. It was the height of the cold war and the U.S. military needed a communication method that could withstand a nuclear attack. The solution was ARPANET (Advanced Research Projects Agency Network) (Krol, 1992). This network utilized packets of information that could reliably find their way to a destination. The key was that the protocol used by the network would be able to find a path that would get the information there reliably.
The first nodes were set up in 1969. Much of the work on the network was being done by computer scientists and engineers at universities, so these nodes were set up at the universities (Prater, 1994, p. 163). The decentralized architecture of the network made it relatively easy to add more computers and expand the network. The network grew quickly and found its greatest use among researchers and universities. By 1983 the network had grown enough to be split into two separate networks, one for research and education, the other for the military.
The fundamental factor in the explosive growth of the ARPANET was the creation of a standard protocol for the transmission of data. TCP/IP (Transmission Control Protocol/Internet Protocol) set the standard and made it possible for different computers to communicate across vast networks. The value of this communication medium led to the creation of the NSFNET. The National Science Foundation invested money to connect universities and research centers, making TCP/IP the standard for the Internet (Prater, 1994, p. 150).
“The technology and networks were adopted by other government agencies and countries, as well as the private business sector. Today, Internet technology and the Internet have found massive acceptance and use by hundreds of thousands of organizations around the world…. As of 1 Feb 1995, the Internet consisted of more than 50,000 networks in 90 countries. Gateways that allow at least Email connectivity extend this reach to 160 countries. At the end of 1994, 5 million computers were indicated as actually reachable - with an estimated total of 20-40 million users. Network growth continues at around 10 percent per month.”[1]
In the late 1980s the joining of the Internet with other similar networks around the world made the Internet even more valuable. It enabled researchers around the world to share their findings. The problem was that it was still reserved for the computer literate. The interface was not user-friendly and was usually just text, requiring knowledge of assorted computer language commands. For the Internet to realize its full potential a new system would have to be developed. It came through the European Particle Physics Laboratory (CERN). In March of 1989 a scientist named Tim Berners-Lee proposed to CERN a project that would allow researchers to read each other’s work over the Internet. Berners-Lee proposed a new language that would include hypertext. This Hypertext Markup Language (HTML) is the language that web pages are written in today. The Hypertext Transfer Protocol (HTTP) was created as the standard to handle these new documents (Magdid, et al., 1995, p. 9).
By July 1992 the idea and software for this new World Wide Web had been disseminated through CERN. It still lacked the sort of widespread impact that we see today because the software was only designed for expensive computer workstations. The browser was text-based, but the idea of the World Wide Web was gaining acceptance on the Internet. In 1993 the National Center for Supercomputing Applications (NCSA) released a browser that worked with more common computers. Suddenly, the World Wide Web could be utilized by a much larger number of people. The new browsers were more stable, reliable, and relatively user-friendly (Magdid, et al., p. 12). Then in 1993 a man named Marc Andreessen suggested a new HTML tag that would allow a document to include images.[2] This suggestion led the Web to become a truly multimedia medium.
How Multimedia was Introduced
When the first browsers were introduced in 1992 they created a revolution on the web. The web had been a textual domain in which only the "techno geeks" could be comfortable. Browsers that utilized a (relatively) friendlier graphical interface and made the transfer and display of images possible created a broader audience. The natural progression was to try to improve upon this basic display of images by including sound. And while we're at it, why not include the moving image? We can transmit plenty of great visual and audio information through coaxial cable.
The idea seems simple enough to the non-techno nerd, but the differences in transmission methods, media, and protocols make it much more difficult. The first transmission of images and sound was simply the FTP (file transfer protocol) of a file, the method of transferring a data file from one place to another. Once received, the file could be stored and then processed by some application on the client's machine. In 1992 a format for the identification of file type was proposed by Nathaniel S. Borenstein at the ULPAA '92 Conference in Vancouver. Borenstein thought that it would be a great idea for people to be able to do "multimedia email." He proposed the creation of MIME (Multipurpose Internet Mail Extensions) for use on the Internet. Borenstein’s MIME extensions were later incorporated into browsers for the Web.[3] These extensions are now used by browsers to identify what kind of file to interpret.
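The convention amounts to a label that travels with the data. Python's standard mimetypes module, used in the sketch below, guesses such labels from file names and is enough to show the shape of the scheme; the file names themselves are made up for illustration.

    import mimetypes

    for name in ["story.html", "photo.gif", "interview.au", "clip.mpg"]:
        mime_type, _ = mimetypes.guess_type(name)
        print(name, "->", mime_type)
    # story.html -> text/html
    # photo.gif -> image/gif
    # interview.au -> audio/basic
    # clip.mpg -> video/mpeg
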
As browsers became more powerful they could process more of the information within the program. Netscape, currently one of the most popular browsers, now includes a player for .au and .aif audio files. When combined with the ability to display .gif and .jpg images in a document, this makes for a better, but still very basic, multimedia experience.
The addition of the programming language Java has further increased the capability of web browsing. It enables the browsers to update images, trigger sounds, react to user input, and more. And now there are over thirty different programs that enhance the capability of certain browsers.[4] These “plugins” are being created by independent software makers to add functionality. There are plugins that allow users to view pre-formatted documents, view VRML (Virtual Reality Markup Language) documents, view CAD (Computer Aided Design) documents, and more.[5]
All these improvements add versatility to the presentation of materials on the web but file size and bandwidth are still key issues. The more elaborate graphics and sounds a web page has, the larger its file size will be. This puts real constraints on the designer and the methods with which information is presented. This is why digital compression is important. The information provider can include more information in a smaller package.
The Progression of Compression
The technology behind data compression is difficult to follow. The advances in compression capabilities are coming so fast it is hard to keep up. There is great demand for compression techniques that do not noticeably lose information. The demand is driven by the need to transmit and store more digital information. The compression of images has been one of the biggest motivations and challenges facing engineers because images require a great deal of data to accurately reproduce an original.
Engineers began working on compressing video shortly after the widespread dissemination of television. The drive to create a color picture created a need to add more information to the existing black-and-white format. The work continued in the late 1950s as AT&T worked to develop the famous live video and phone combination that permeates science fiction novels (Van Tassel, 1994, p. 12). When the standards for television were set, the push from television broadcasters for compression diminished. Many companies and institutions continued to work on the technology.
Real progress was made throughout the 1960s and 1970s in working to digitize and compress visual data. The Media Lab at the Massachusetts Institute of Technology began its work with electronic imaging in the early 1970s. Other than research interest there was little incentive for work in the field. That changed when there was serious money at stake (Brand, 1988).
Once again, broadcasters created an atmosphere ripe for innovation, though in this case they were prospective broadcasters. The lure of profits and the subsequent investment drove the advance of compression technologies. The creation of their services depended on digital video compression (DVC): Direct Broadcast Satellite systems were going to need to compress many channels into a stream of information that formerly carried one. The engineers knew that a reliable, affordable solution had to be found. The pressure was on because the companies had already received the space allocation for their satellites and invested large sums of money into them.
Video Compression Pushes Forward
Compression techniques and hardware improved quickly. The satellite broadcasters needed fast, reliable systems that could offer choices to the audience. In addition, they wanted to reduce the size of the dish required to receive the signal. Smaller dishes mean more powerful transponders are needed, and there is a limited number of transponders that can fit onto a satellite. The ability to compress video signals helped to solve the problem. The current DBS systems can compress the information of many channels and broadcast them using one transponder. This eliminates the need for one transponder per channel and allows the DBS companies to offer a wide variety of programming. This variety is one of the key selling points for the DBS providers.
Another selling point for the DBS providers has been digital audio. A digitized audio signal can be compressed and distributed like all the other data, and the DBS providers capitalized on this. The public has accepted digital audio and seems to be aware of the enhanced sound quality that it offers. The signal providers knew that this would enhance the richness of the viewing experience and hoped to gain subscribers.
Widespread compression use is not far off for the cable operators. Compression of channels can be used to expand their programming capacity as well. All of these systems require sophisticated decompression cards at the receiver’s end. This technology has only become reliable and affordable to the consumer in recent years. It will become more prevalent as it gets cheaper. In many areas the mechanism is already in place: subscribers have accepted set-top boxes in their homes, and the implementation of decoding devices will be no more intrusive.
The standards for compression are constantly changing. This is a result of constantly improving schemes as well as the distribution of video decompression players. The constant battle is the compromise that must be made between visual quality and available bandwidth. For the entertainment industry the standard being accepted is MPEG-2.[6] The Moving Picture Experts Group created this standard because MPEG-1 did not produce a picture of high enough quality. Direct Broadcast Satellite operators (DBS), Cable Television (CATV), and High Definition Television (HDTV) are or will be using some form of MPEG-2 (Stroud, et al., 1995). Compression can be achieved using hardware or software. Software solutions enable computer users to play back video without expensive upgrades, but hardware solutions are often faster and more effective. The viewing of video on the Web and on LANs (Local Area Networks) reveals the problems of bandwidth limitations. Traditional phone lines cannot yet support the amount of information needed for full-frame video at 30 frames per second. For the LAN, playback is limited by the demands created by multiple users, but these limitations are slowly being overcome (Krill, 1995).
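A back-of-the-envelope calculation shows why compression is unavoidable for full-frame, full-rate video over a phone line. The 640 x 480 frame size and 24-bit color are illustrative assumptions, not figures drawn from the sources above.

    width, height = 640, 480            # one full frame
    bytes_per_pixel = 3                 # 24-bit color
    frames_per_second = 30

    uncompressed_bps = width * height * bytes_per_pixel * 8 * frames_per_second
    print(uncompressed_bps / 1_000_000) # about 221 megabits per second

    modem_bps = 28_800                  # a typical phone-line modem of the period
    print(uncompressed_bps / modem_bps) # the raw stream is roughly 7,700 times too large
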
The solutions for video compression are getting better. The current limitations to frame rates and size will be overcome. In addition, the players that are available for the general population are becoming widespread. These players are being written to support any number of compression formats. This is going to create an atmosphere that will allow the transmission and reception of video through computers regardless of the type of computer. Providers will be able to produce a product and know that the masses will be able to view it.
Chapter 4
Cybercasting: the Future for Video and Audio on the Web:
The developments in cybercasting are of great importance to the future of news on the Web. A vast majority of Americans get their news, at least in part, from viewing or listening to a newscast. Viewing a newscast provides the richest sensory experience for an audience. Seeing and hearing the story adds to an individual’s ability to understand and comprehend the information. Radio broadcasters and print journalists may argue that a beautifully prepared story in their respective media carries as much information, but it just isn’t so. This is why the progress in cybercasting is an important step toward the realization of the future of news. It enables information providers to deliver video and audio via the Web.
Cybercasting is still an ambiguous term. It is most commonly associated with the process of streaming visual or audio information through cyberspace (another overused, ambiguous term). The suffix “casting” insinuates that it is related to the more traditional form of widespread information distribution, broadcasting. In reality, cybercasting is fundamentally different from broadcasting because it is not passive. The client must request the information rather than simply tune in the broadcast.
The request for the information is not a difficult process. In fact, it is getting easier and more passive everyday. It takes little effort to trigger the serving of an information stream and designers are always trying to make it easier. The more customized the user’s browsing environment becomes the easier it will be to initiate whatever requested information-streams have been preset.
The phrase that best describes this process of requesting data streams is broadcatching (Brand, 1988). It was coined by Stewart Brand at MIT to explain the server/client communication necessary on the web. The forms of cybercasting that are becoming popular on the web are beginning to be received in a more traditional, real-time playback format, but they must still be initiated by the client. The fact that the playback and interfaces are getting more traditional is good for the user, because we like things to look like what we know.
There is one fundamental aspect that separates these forms of viewing and listening from cybercasting. Whether it is FTP or a browser that gets the files, they must be received and stored, at least temporarily, by the client. Then they can be processed into something useful to the client. Even Java must load all the information it will need to carry out its duties as an applet (a miniature program).
Cybercasting Proper
What separates cybercasting from the other forms of viewing and listening on the web is the streaming of data. Cybercasting servers serve out signals to client machines that request them. These streams are processed in real time, meaning that the data is processed as it is received; the information never has to be stored. The players usually utilize a caching system in order to allow more seamless playback, but it is not required.
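The sketch below illustrates the caching idea in Python. The "network" and "decoder" are stand-ins, and the three-chunk pre-roll is an arbitrary choice; no actual cybercasting product's behavior is being described.

    import time
    from collections import deque

    def receive_chunks():
        # Stand-in for packets arriving from a streaming server.
        for i in range(10):
            time.sleep(0.05)              # simulated network delay
            yield "chunk-%d" % i

    def play(chunk):
        print("playing", chunk)           # stand-in for decoding audio or video

    buffer = deque()
    PREROLL = 3                           # cache a few chunks before playback begins

    for chunk in receive_chunks():
        buffer.append(chunk)
        if len(buffer) >= PREROLL:        # once the cache is primed...
            play(buffer.popleft())        # ...play while new data keeps arriving

    while buffer:                         # drain whatever is left in the cache
        play(buffer.popleft())
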
The Client
On the client end there needs to be a player. This player can be designed to decode video, audio, or both. Many of the players are free to the user and can be downloaded and installed with relative ease. Many of the companies that support cybercasting ventures are currently giving the players away; their revenues will come from the server end through sales and support of their systems. This is a monumentally important point. The free distribution of these decoders enables a client with a moderately powerful computer system to view or listen to the streaming data without adding hardware to the system. As computer sales continue at a strong pace around the world, all that will be needed is a connection to the Web. What will this mean to international flows of information? How will it influence countries that depend on licensing for television revenue if users can watch video for free on the web?
The Server
The servers stream the data to the client upon request. The data is a digitized and compressed video or audio signal that may be a stored file or a real-time translation. The compression format used is one that each specific player is designed to interpret; the most popular currently in use is a form of MPEG. Stored files are typically served in the case of archived material. The Internet Multicasting Service is a good example of this. They use a player called Real Audio to serve a weekly interview program, and they now support Xing Technology's Streamworks player as well.
The serving of real-time audio or video is a much more complex process than receiving it. It requires the encoding and compression of the information instantaneously, which demands expensive hardware in addition to the serving software. Video encoding requires a great deal of information in comparison to audio, so it suffers more degradation with the current schemes. Real-time audio encoding is getting very good, with little loss of quality and greatly reduced bandwidth requirements.
The Players
Audio
There are several players available and the numbers are growing. The companies that currently have players and servers update them every time they find a more efficient solution, and the updates often show marked improvement.
The Streamworks system made great improvements with its last update.[7] The real-time audio from radio stations like WXYC comes through with relatively few dropouts and at very low bandwidth. If bandwidth isn’t a consideration, it can serve great stereo music sampled at 44.1 kHz (CD sampling rate), again with little dropout.
Real Audio is another player that claims it can play streams back at quality equal to FM radio signals.[8] It is a scalable system that allows the server to adjust for the available bandwidth. The Streamworks server can do this as well.
There are a number of streaming solutions that play audio embedded in web pages. These handle compressed audio files that the page designer links to in the page. Truespeech is one of these players, meant to be used as an alternative to putting traditional audio files in a web page.[9] The page links to the Truespeech file, and the client's browser spawns the player, which decodes the file as it comes in. ToolVox and Internet Wave are two more solutions that work the same way.[10] ToolVox can launch the application immediately upon opening the web page.[11] This means that the page designer can now incorporate unrequested audio information in the basic page. As long as the client has this player installed to work with their browser, they don’t have to do anything besides go to the page. This had been possible using Java applets, but that information had to be loaded onto the client’s machine rather than streamed in real time.
Video
Video streaming on the web still has much room for improvement. There have been tremendous improvements, but picture quality, size, and frame rate are still inadequate. The amount of information that must be encoded from each frame of video creates a great deal of difficulty. The compression process includes a number of tricks in order to reduce the amount of information that is streamed: the image size is reduced and the number of frames per second is reduced. This makes the video small and jumpy. Combined with the blockiness caused by the compression algorithm, the video has a way to go.
There are a number of companies that are currently providing video streaming servers and players on the web. The first, from Xing Technology, was mentioned above; Streamworks will also stream video, though not as nicely as the audio. The second video streaming company is VDOnet with their player called VDOLive.[12] VDOLive looks like the best at the low-bandwidth video transmission game. It has the problems listed above, but improved compression, increased computation speeds, and increased bandwidth will all help this situation improve quickly.
PreVu is a real-time decoder for MPEG files that works like the devices that are designed to interpret data streamed at specific rates.[13] It simply decodes the information as it comes. At present it doesn’t handle audio, but that isn’t far off. This is an effective idea that makes browsing pages with embedded movies or audio more interesting. Now the client doesn’t have to wait for the whole file to arrive before they see or hear something. The limitation is that the rate at which the data is received cannot be controlled effectively, and the playback can be, and usually is, choppy and inconsistent.
The Bottom Line On Cybercasting
In spite of its current shortcomings, the potential for cybercasting is enormous. It already lets parties communicate orally from anywhere around the world. It allows users to listen to high-quality audio from anywhere. And it allows us to see video images, not very good but understandable, from anywhere. Cybercasting has overcome the dilemma of storage limitations through real-time players. The user no longer has to store the file, so the server can retain the quality that is desired. Combined with more effective streaming methods, users can enjoy beautifully clean stereo audio with little dropout.
Bandwidth is now the primary limiting factor, and it is being attacked in a number of ways that have allowed for impressive progress. The new hardware and software compression/decompression schemes are becoming more efficient, reducing the need for bandwidth. The amount of bandwidth that is physically available is constantly growing and will substantially increase for the general public in the near future. All of this makes for an exciting future for cybercasting. Whether it can fulfill that promise will depend on many factors: regulation, bandwidth, dissemination, and the capability for upstream communication. These issues will dictate whether cybercasting is a triumph for world communication and a tool that empowers individuals, or just a new way to catch a “Happy Days” rerun.
Chapter 5
Networked Virtual Reality
Many of the early advances in VR technologies were designed for single-user functionality. In addition, they were designed for a specific platform. Much of this was due to the vast amount of data that has to be processed in virtual environments. This made VR a very limited technology for multiple-user environments. Fortunately, some VR technologies are now being designed to facilitate multiple users in a networked system. The first steps have been made in the sharing of 3D environments over wide area networks. Facilitating this progress is the establishment of standards that allow cross-platform interoperability. These advances will allow VR to play a more crucial role in communications and information dissemination.
The networking of 3D environments has become an important issue for VR developers. Distributed VR will help it achieve its potential as a communications medium. The creation of the World Wide Web focused attention on the need for networked VR on wide area networks. An essential step in this process was the creation of a standard protocol for the dissemination of 3D environments. Once the standard was established many different developers could contribute content and design virtual environments.
VRML
The standard that was adopted is the Virtual Reality Markup Language. VRML was conceived in the spring of 1994 at the first annual World Wide Web Conference in Geneva, Switzerland (Bell, et. al., 1995). A special interest group was set up to discuss the creation of a standard for 3D design on the Web and a mailing list became the forum for discussion.[14] From these humble beginnings the specifications for VRML 1.0 were set and now the specifications for 2.0 are in the works.[15]
The emergence of VRML was accompanied by speculation and enthusiastic predictions about its potential for distributed VR on the Web (Magdid, et al., 1995, p. 502). The immediate reality was that it remained a relatively exclusive technology because of its demands for bandwidth and powerful computers. Now there are many VRML viewers that can easily be incorporated with common web browsers. In addition, the constant increases in processor speed and memory in personal computers are making widespread use of VRML a reality.
VRML is designed to describe a 3D environment that allows users to navigate through a scene. “It is based on the ASCII format of Silicon Graphics' Open Inventor language”.[16] Silicon Graphics was one of the most successful companies in 3D design and its format was well accepted by those suggesting standards. Certain areas within a scene can be clicked on by the user to hyperlink to other World Wide Web documents. These hyperlink areas are preset by the designer and allow the VRML scene to utilize this key benefit of the web protocol in a 3D environment.
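To give a flavor of what such a scene file looks like, the Python sketch below writes out a minimal VRML 1.0 world containing a single cube wrapped in a hyperlink anchor. The URL is a placeholder and the fragment is only illustrative; it is not an example drawn from the VRML 1.0 specification cited in this chapter.

    # Write a tiny VRML 1.0 scene: one cube that acts as a clickable link.
    scene = """#VRML V1.0 ascii
    Separator {
        WWWAnchor {
            name "http://www.example.com/story.html"
            Cube { width 2 height 2 depth 2 }
        }
    }
    """
    with open("scene.wrl", "w") as f:
        f.write(scene)
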
An important feature is VRML's ability to be used on multiple platforms. These features add to the value of VRML and may help it become a widespread form of information dissemination. One of the major shortcomings of VRML is its inability to allow interaction among multiple users. The current level of interactivity is limited to the user's ability to navigate through a scene, but there are other forms of networked VR on the web that can facilitate user interaction. One company is designing software that will enable users to communicate as they move through a virtual environment in what it calls VRML+.[17]
Interactive Networked Environments
User interaction is a major issue for communication researchers because it involves two-way communication among users. An intriguing approach to virtual environment interaction is Worlds Chat.[18] This interactive, distributed VR is based on the early Internet chat sessions called MUDs (Multiple User Dungeons). MUDs are areas where users interact in real time in text-based imaginary scenes. Worlds Chat utilizes a 3D environment in which each user is represented by an object of their choice. These objects, or “avatars,” are available at the site, or personal designs can be imported.
Worlds Chat deals with the limitations of networked VR by keeping the information about the environment on each user’s computer. The host machine then only needs to send information updating the movements of the avatars. The communication between users still has a long way to go. Currently, it is limited to textual exchanges in balloons that appear above each avatar. The VRML+ platform proposed by Worlds, Inc. will work in the same manner but will be based on a VRML standard.
The bandwidth-intensive interactive environments on PCs are made possible because most of the information is on each user's computer. This is the same model the Worlds Chat program uses. Each user must have the software on their computer, and the only information transferred is input changes. When the input is received it is immediately applied to the other users' virtual environments.
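A hypothetical sketch shows why this model is so economical: only a small message describing each user's change of state has to cross the network, never the scene itself. The message format here is invented for illustration and does not describe Worlds Chat's actual protocol.

    import json

    def movement_update(user_id, dx, dy, dz):
        # Everything needed to reposition one avatar fits in a few dozen bytes.
        return json.dumps({"user": user_id, "move": [dx, dy, dz]})

    update = movement_update("avatar42", 0.5, 0.0, -1.0)
    print(update, "->", len(update), "bytes")
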
Future Developments
The future for networked VR applications in communication is bright. At the moment, the biggest limiting factors are bandwidth and processor speed, and both of these obstacles are rapidly being overcome. In the not-too-distant future designers will be able to distribute detailed virtual environments with confidence that they will be viewable by many different users on a variety of systems.
This means that we will see more intuitive interfaces in 3D form. More importantly, people will be able to interact in 3D environments that will be more familiar and easier to understand. VR simulations will help information providers relate information and help users “experience” situations or events.
The current weaknesses of 3D interaction in chat environments will be eliminated. The addition of moving worlds within the standards for future VRML protocols will help to make the interactive environment more fulfilling. The fundamental change will come from the addition of users’ audio input. There will be no need for textual interaction. The addition of audio perception devices will enable users to communicate orally in the environment and maintain their spatial perception.
The fulfillment of the promise of VR will come through its ability to be disseminated through networks. This is what will make VR an integral part of communications. When augmented with sophisticated sensory activating devices, multiple users will be able to experience things that were not possible before. And then they will be able to share the experiences.
Chapter 6
Bandwidth And The New Telecommunications Solutions
The issue of bandwidth is an important one. In the most basic sense, bandwidth can be defined as the amount of information that can be moved at one time. A good analogy is that of a funnel. If the funnel has a wide opening at the bottom, a lot of water can pass through at once. If you fill the funnel at a rate faster than it can pour out the bottom, then you have exceeded the available bandwidth. On the World Wide Web the protocols used to transfer data are able to route the data to its destination even if the bandwidth is exceeded. They do this by holding the data until a route can be found and then checking to ensure that all of the data arrived.
If the bandwidth is limited, then the transmission of data is delayed. This leads to long waits as information is downloaded. In the case of cybercasting this becomes a crucial point because the player is interpreting the data stream as it receives it. If the information is delayed, the playback will skip over the lost data packet. Thus a lower-bandwidth connection limits the capability to present information.
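The sketch below mimics that behavior: packets that miss their playback deadline are simply skipped rather than waited for. The loss rate is simulated with a random draw and is purely illustrative.

    import random

    random.seed(1)
    for packet_number in range(8):
        arrived_in_time = random.random() > 0.25   # roughly a quarter arrive late
        if arrived_in_time:
            print("play packet", packet_number)
        else:
            print("skip packet", packet_number, "(arrived too late)")
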
This highlights the importance of high-bandwidth connections to the Web. Thanks to the passage of new legislation, this is quickly becoming a reality for home users. Traditionally the way home users connected to the Web was via twisted copper phone lines. As modems have gotten faster the speed of transmission has increased, but it will ultimately reach a bandwidth limit in the near future. The next step in the battle for bandwidth was the move toward ISDN. The Integrated Services Digital Network allows phone service providers to offer higher bandwidth connections to those that are willing to pay. It requires special phone lines and modems to operate.[19] ISDN systems transmit data more reliably and efficiently than traditional modems and phone lines (Malkin, 1994). Unfortunately, many systems are just beginning their service to many communities, and informed personnel are often hard to find.
The good news is that cable companies have been experimenting with cable modems. Coaxial cable has a significantly higher bandwidth than twisted copper. Connecting to the Internet using cable modems is a significant step in the progression to a new news media. This high-bandwidth connection will be able to carry a vast amount of information. The initial trials have been promising. It seems to be a natural progression that will be welcomed by the information hungry. Most of the United States is wired for cable, so that part of the equation should be relatively easy (McCrystal, 1994). Many problems will still need to be sorted out because cable systems were originally designed to provide a one-way stream of information. If the promise of the Web is to be realized in full, it will require a two-way stream of communication. These problems will be overcome as consumers and the market dictate.
A major obstacle for cable web connections was recently eliminated by the U.S. government. The 1996 Telecommunications Act is now law and will allow open competition between phone and cable providers. The bill had been floating around Washington since the 1980s as a Republican open-market idea. Before it was passed, it would have been very difficult for the cable operators to enter the Internet service provider market. The result of the law is going to be a renewed focus on new communications markets that were previously stagnant. The mergers and acquisitions are going to fly as the battle to offer high-bandwidth connectivity heats up.
Now there is renewed interest and a great deal of capital going toward the creation of these high-bandwidth home connections. The amount of information and the speed at which it will travel are growing. All of these developments are leading to a new news media: one that can incorporate richer, more detailed information and facilitate two-way communication.
There are already at least nine companies producing cable modems, and many cable operators have committed to purchases.[20] TCI is running one of the biggest trials in Sunnyvale, CA. Its @Home program has received substantial backing and success.[21] Time Warner Cable, Continental Cable, Comcast, and Cox Cable are all upgrading their systems and testing their abilities to handle two-way traffic.[22] By the end of 1997 select communities around the United States will be able to achieve high-bandwidth connections to the web via reliable cable modem service. Once the first systems are established and standards are created, the conversion of all cable systems will be quick to follow.
Conspicuously absent from this idealistic look at the future of media are the Direct Broadcast Satellite operators. Their offering is targeted at those that were not satisfied with cable operators or not yet wired for cable. One of their key selling points is the ability to send more information at better quality, thus providing a richer media experience. DBS providers are also prepared for the release of HDTV. This is a logical move and has proven relatively successful as the number of DBS customers continues to grow. Unfortunately for DBS, the future media may be detrimental to their success. Like cable systems, the DBS system is designed for one-way distribution. The dilemma is that the current architecture cannot facilitate the two-way interaction that is so critical to the future of media. This puts cable operators at a tremendous advantage, as they can more easily update their systems to facilitate phone service and web access in addition to their traditional services.
In the shifting telecommunications markets DBS will have to come up with creative solutions to the two-way communication dilemma. Currently the operators have a very primitive two-way system in place for the Pay Per View offerings: the customer simply calls the order in via traditional phone lines. This system will not be sufficient for the immediacy requirements of the future media offerings. This may be overcome if the DBS providers aggressively seek to create an ambitious, integrated communications system to enhance the upstream capacity. The lifting of the telecommunications restrictions makes this seem more plausible, but there is another major barrier that supersedes all others. The nature of the Webbed World is a networked system connected via routers and served by servers. The routers are an essential tool in routing requested information to its destination. This is individualized, requested information that travels to a client. What makes the DBS system cost-effective for today’s media is what will make it unable to provide the future media experience. The satellite transmissions allow the dissemination of information over a vast area to a mass audience. They cannot, however, provide the personalized, individualized, immediately requested information that will be the culmination of the new information media paradigm.
DBS providers have taken their first steps toward the next level of information providing. DirecTV, the leading high-power direct broadcast satellite (DBS) service in the U.S., and Microsoft signed an agreement to include the necessary software to enable PCs to receive video programming. In addition, computers will be able to receive additional multimedia information. “The high-speed broadcast transmission capability of DirecTV will provide scheduled delivery of multimedia information containing elaborate graphics, audio and digital video as well as information and software code.”[23] The key word here is scheduled, not requested.
Chapter 7
“NetroNews:” New Journalism in the Webbed World:
"Many people think of newspapers as having more depth than television news. Must that be so? Similarly, television is considered a richer sensory experience than what newspapers can deliver. Must that be so?" (Negroponte, 1995, p. 20)
Why are all of these technological advances significant? Haven’t we seen many a new technology introduced with a great deal of hype and fanfare only to be quickly forgotten? These advances are important because they are part of a worldwide revolution in how people are going to disseminate and receive information. The way that people learn about the world around them and the events that are important to them is changing. The role of traditional media is now going to have to give way to the new paradigm. All of the different media sources will be available from one mechanism, and the sensory experience each provides will be overwhelmed by a new, richer experience.
We will call this new phenomenon “Netronews.” Netronews will embody many of the desirable aspects of conventional news and offer much more. There can be little argument that most people would prefer a multimedia experience to a purely textual one when seeking information. The saying “a picture speaks a thousand words” has its roots in this idea. The increasing move to color photographs in newspapers is indicative of the consumer’s attraction to visuals (as well as of better printing techniques).
When augmented with audio, the experience is further enhanced. Why do people turn on their televisions when they learn that there is a major news event happening? They seek the immediacy as well as the richness of the information experience provided by TV. After hearing an announcement of a terrorist bombing, one doesn’t run into the house to hear about it on the radio. People want to see and hear the information so they can better understand it. This is why the Web, in conjunction with new technologies, will change the role of news providers, and cybercasting will become an essential part of the Netronews package.
The line between traditional television news providers and newspapers will become blurred. The number of newspapers that are online is growing rapidly. Once the technology is in place these news providers will be able to provide satellite weather photos, video of the day’s events, movie clips, and more. This blurring of services will fill a need for many people who would like an in-depth and sensory-rich news experience at their convenience. Simply open the location with the Web browser and read or view whatever stories you want. These experiences will not be limited to the basic presentation of little windows of video or choppy sound bites. The new experience is going to be far more satisfying than television or radio can offer.
The Netronews provider is going to need to provide a diverse array of content, including in-depth textual backgrounds for the brief video summaries that the client requests. The video content will be provided by the video news services that are already in the business of creating video. This need for content is going to lead to a great deal of cross-ownership in different media markets. If the current laws that restrict this sort of cross-ownership aren’t eliminated, it will create an unprecedented distribution of media through exclusive offering deals among the conventional media providers. This sort of special dealing is seen in some of the exclusivity rights that many of the online providers tried to establish with different print media, particularly magazines.
The argument for the immediacy of television will be made irrelevant by the Netronews provider that can give constantly updated, in-depth coverage. Data compression will allow these information providers to provide visual and audio information with the stories. With the powerful hardware encoding devices now available, video and audio can be encoded in real time, providing live streaming of a “broadcast.” An example of the value of the immediacy that online newspapers now enjoy is in the event of local school closings. It seems insignificant, but when an area is experiencing inclement weather the online paper can list the school closings. The young masses can click on the list of closings and know immediately if they have school. This makes the information available immediately upon the request of the client, which is one of the great features of Netronews. This principle will be applied to all aspects of the new media.
Negroponte discussed the personalized electronic newspaper of the future. It would know what layout the client wanted and would appear in the desired format upon download to the reading device. This is an important part of the Netronews offering. The client will decide between a passive or an active news-receiving experience. In addition, they must have their news viewer preferences set for video, audio, spoken text, or three-dimensional representation. Of course there will be a default news summary package for those that want to enjoy a passive news experience. This passive news experience may stir the user because of a tragic crime story. Now the Netronews client can, if still connected to the serving source via a traditional cable method or some new airborne device, investigate the issue further. They can rifle through old stories and watch previous video clips on the story, served on demand. Then they can walk through a virtual reality representation of the crime scene to gain a better understanding of the events.
This scenario involves a fundamental capability that has never before been a major part of the news experience in America: immediate two-way communication. The server for the Netronews provider can respond instantly to a request for supplemental information on a particular story. No longer is the story limited to the space available on the newsprint or the time left in the newscast. The viewer/reader can look through the archives of formerly reported information. The Netronews provider can also better serve its clients by keeping track of the type of stories that its viewer/readers read most often. In the “response room” clients can discuss the stories in a situation that resembles an Internet chat for those that prefer textual exchanges. For the more visually inclined, the three-dimensional, audio “response room” would allow anonymous users to voice their opinions while seen by other users as a figure or character of their choosing. For the more confident respondents, the sounding board is the place to look at a video image of all the participants as they debate the day’s news. The possibilities for quick reader/viewer response polls are obvious.
Netronews will be an interactive experience set in a sensory-rich environment: not an interactive clicking of buttons in a two-dimensional environment, but much more. The advent of widely distributed virtual environments will allow users to better understand the news event. News stories about crimes will allow the user to move through a three-dimensional depiction of the crime scene, or the user can navigate a three-dimensional map of rain forest destruction.
A technology that gives a basic idea of these possibilities is VRML, but Netronews will offer much more. Advances in distributed virtual environments will include three-dimensional audio. The user will better understand the environment with the inclusion of audio perception techniques. These realistic representations will help the “designers” of the news story report the story.
This scenario creates a dilemma for the Netronews provider. The role of the editor will be redefined to that of a facilitator or a mediator. There will be no scarcity of space on the pages, so the need to cut stories will be eliminated. The editor will be more of an organizer of the information, deciding what layer of the electronic display mechanism a story will go into. The editor will also need to approve the video feeds that will be provided. This raises an interesting question about the traditional gatekeeping role of the editor. The consumer will become the gatekeeper by deciding which information to download or browse. The editor will be able to decide what to offer on the server, but it would be foolish not to offer as wide a variety as possible so the consumer is pleased with the large selection. The editor will simply offer the "specials of the day" with a carefully organized hypertext list of related video, audio, or text stories.
Amidst all of this information someone will have to maintain quality. This will make the role of the media editor a crucial one. This idea was one of the key points made by William G. Connolly, senior editor for the New York Times. During his address at the 1995 National Newspaper Copy Editors Conference he told the editors gathered that, in light of all of the changes facing newspapers, they were one group that didn’t have to worry about their jobs. It was a wise analysis and a reassuring one for those present. The way that they go about their job may change, but they will remain an integral part of a team. This team of information editors will include specialists in the presentation of other media as well.
The textual news providers, audio providers, video providers, graphics designers, and virtual environment designers are going to have to work in unison, much as a newspaper reporter calls upon the graphics department to create graphics for a story. The reporters and editors will remain an important part of the news team, but they will share their role with a variety of professionals who will augment their work to meet the demands of the Netronews audience.
The Web and its ability to distribute data will create this new era for information providers. The Netronews media source is going to force change upon the news media, but only at the level that the user will accept it. The changes that come will be tempered by our fondness for experiences and interfaces that we recognize and are comfortable with. The reality is that all those involved with communications will have to adapt to the changing expectations of the audience. This Netronews paradigm will demand two-way interaction and the rich sensory experiences that the World Wide Web will make a reality.
Bibliography
Ashworth, Susan. (1996, February 9). ISDN a Delivery Knockout. TV Technology, p.17.
Brand, Stewart. (1987). The Media Lab: Inventing the Future at MIT. New York: Viking Penguin, Inc.
Burger, Jeff. (1993). The Desktop Multimedia Bible. Reading, Massachusetts: Addison-Wesley Publishing. (p.136).
Dyke, Terrence, and Paul Smolen. Interesting Times for Digital ‘TV’. TV Technology, p.32.
Freed, Ken. (1996, February 9). Cable Modems Take a Broad Leap. TV Technology, pp. 1, 8.
Hawkins, Diana Gagnon. (1995) Virtual Reality and Passive Simulators: The Future of Fun. In Biocca, F. & Levy, M. (eds.) Communication in the Age of Virtual Reality. Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 159-189.
Holsinger, Eric. (1994). How Multimedia Works. Emeryville, CA: Ziff-Davis. (p.110).
Krill, Paul (1995, March 6). Networked Video: NetWare Video to Support MPEG. InfoWorld. p.6.
Krol, Ed. (1992). The Whole Internet. Sebastopol, CA: O’Reilly and Associates, Inc.
Magdid, John, Matthews, D., and Paul Jones (1995). The Web Server Book. Chapel Hill, NC: Ventana Press, Inc.
Malkin, Elliot (1994). Integrated Services Digital Network. In Grant, A., E.,(ed). Communication Technology Update (3rd ed.). Boston, MA: Butterworth-Heinemann. (p.336-344)
McCrystal, Skip (ed). (1994). Cable Television. In Grant, A., E.,(ed). Communication Technology Update (3rd ed.). Boston, MA: Butterworth-Heinemann. (p. 18-27)
Nash, Kim S. Interfaces of the Future. Computerworld, p.83-84.
Negroponte, Nicholas. (1995). Being Digital. New York: Alfred A. Knopf.
Paulsen, Karl. Servers Found on the Video Menu. TV Technology, p. 44-45, 65.
Pesce, Mark D. (1995) VRML Hypermail Archive: EVENT: Keynote Address to WWW '95. .
Prater, Scott G. (1994). Local and Wide Area Networks. In Grant, A., E.,(ed). Communication Technology Update (3rd ed.). Boston, MA: Butterworth-Heinemann. (p. 147-157).
Reveaux, Tony. (1996, February 23) SGI VRML Carves a 3D Web Space. TV Technology, p. 13.
Van Tassel, J, Ph.D. (1994). Digital Video Production. In Grant, A., E.,(ed). Communication Technology Update (3rd ed.). Boston, MA: Butterworth-Heinemann. (p. 9-17).
On The Web
About ISDN “”
@Home “”
Bell, Gavin, Parisi, Anthony, and Mark Pesce. (1995) VRML 1.0 Specification. Statement.
Borenstein, Nathaniel S. Internet Multimedia Mail. “:.
Breznick, Alan. (1996, February 26). New Media Rebuilds. Cable World, “”
Browser Watch. “”
Cable Modem Resources on the Web. “”
Cable Modem Trials “”
comp.compression: Frequently Asked Questions (part 2/3) “"
DSP Group, Inc. “”
European Laboratory for Particle Physics. “”
Hardenbergh, Jan C. (1996) VRML FAQ. .
Internet Engineering Task Force. “”
Internet Multicasting Service. “”
Internet Phone. “”
Internet Society. “”
Internet Wave “”
Java Pages at SunSITE. “”
Macromedia. “”
MBONE. “”
MCAST/MBONE FAQ. “”
MIME (Multipurpose Internet Mail Extensions). “”.
MPEG Plaza - Front Page. “”
Netscape “”
Netscape and Insoft. “”
Quicktime. “”
Real Audio. “”
SunSITE. “”
Technologies. “"
ToolVox “”
TrueSpeech Internet Player. “”.
VDOLive Video Player. “”
Video Encoding Standards. “”
Video Webcasting. “”
VRML.
VRML+. (1995) .
VRML Hypermail Archive. (1995) .
VRML 2.0 Information. .
W3. “”
Worlds Chat. (1995) .
WWW-Talk and WWW-HTML Mail Archives “”
WXYC. “”.
Xing Technology Corporation. “”
-----------------------
[1] Internet Society: What is the Internet, FAQ
[2] WWW-Talk and WWW-HTML Mail Archives
[3] MIME: Multi Media Internet Mail Extensions
[4] Browser Watch
[5] Netscape’s Currently Shipping Plugins
[6] Comp. Compression FAQ
[7] Xing Technology Corporation
[8] Real Audio
[9] DSP Group, Inc.
[10] Internet Wave
[11] ToolVox
[12] VDOLive
[13] PreVU
[14] VRML Hypermail Archive. (1995) .
[15] VRML 2.0 Information. .
[16] Pesce, Mark D. (1995) VRML Hypermail Archive: EVENT: Keynote Address to WWW '95. .
[17] VRML+. (1995) .
[18] Worlds Chat. (1995) .
[19] About ISDN
[20] Cable Modem Trials
[21] @Home
[22] Breznick, Alan. (1996, February 26). New Media Rebuilds. Cable World,
[23] DirecTV, Microsoft In Pact (1996, March 11). Yahoo: Reuter’s New Media.