


A PROJECT REPORT ON

NEURAL NETWORKS

Abstract:

Neural networks have become one of the most popular models for distributed computation in particular, and for distributed processing in general. They are used for a diversity of purposes and are especially promising for artificial intelligence. Neural networks are ideally suited to describing the spatial and temporal dependence of tracer-tracer correlations: the network performs well even in regions where the correlations are less compact and where a family of correlation curves would normally be required.

Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise, and the noise level can be adapted during learning to optimize the trade-off between the expected squared error of the network and the amount of information in the weights.

This report is an introduction to Artificial Neural Networks. The various types of neural networks are explained and demonstrated, applications of neural networks like ANNs in medicine are described, and a detailed historical background is provided. The connection between the artificial and the real thing is also investigated and explained. Finally, the mathematical models involved are presented and demonstrated.

DEFINITION:

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.

A SIMPLE NEURAL NETWORK

A simple neuron model:

An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not. A minimal code sketch of such a neuron follows the figure below.

[Figure: A simple neuron]
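
To make this concrete, here is a minimal sketch of such a neuron in Python. The weights, threshold, and input patterns are illustrative assumptions, not values from any particular trained network.

# A toy artificial neuron: it computes a weighted sum of its inputs and
# "fires" (outputs 1) when that sum reaches a threshold.
def neuron_fire(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Using mode: a taught input pattern produces its associated output.
print(neuron_fire([1, 0, 1], weights=[0.5, 0.5, 0.5], threshold=1.0))  # -> 1
print(neuron_fire([0, 0, 1], weights=[0.5, 0.5, 0.5], threshold=1.0))  # -> 0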

HUMAN AND ARTIFICIAL NEURONS:

➢ In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites.

➢ The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.

➢ At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.

➢ When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon.

➢ Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.

[Figure: Components of a neuron]

[Figure: An artificial neuron model]

HISTORICAL DEVELOPMENT:

Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras. Neural networks are also similar to the biological neural networks in the sense that functions are performed collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned.

| Year | Developers                      | Contributions                                                                             |
|------|---------------------------------|-------------------------------------------------------------------------------------------|
| 1943 | McCulloch and Pitts             | Model of the neuron; logic operations; lack of learning                                    |
| 1949 | Hebb                            | Synaptic modifications; Hebb's learning law                                                |
| 1954 | Minsky                          | Learning machines                                                                          |
| 1958 | Rosenblatt                      | Perceptron learning and convergence; pattern classification; linear-separability constraint |
| 1960 | Widrow and Hoff                 | Adaline, LMS learning; adaptive signal processing                                          |
| 1969 | Minsky and Papert               | Perceptron vs. multilayer perceptron (MLP); hard problems; no learning rule for the MLP    |
| 1974 | Werbos                          | Error back-propagation                                                                     |
| 1982 | Hopfield                        | Energy analysis                                                                            |
| 1985 | Ackley, Hinton and Sejnowski    | Boltzmann machine                                                                          |
| 1986 | Rumelhart, Hinton and Williams  | Generalized delta rule                                                                     |

A neural network can be described component by component: simple nodes (variously called "neurons", "PEs" ("processing elements"), or "units") are connected together to form a network of nodes, hence the term "neural network". Such a network of simple processing elements can exhibit complex global behaviour, determined by the connections between the processing elements and by the element parameters.

TYPES:

1. Feedforward neural network:

➢ In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes and to the output nodes.

➢ There are no cycles or loops in the network; a minimal forward-pass sketch follows the figure below.

[Figure: A feedforward neural network]
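
As a concrete illustration, the following is a minimal sketch of a single forward pass through such a network in Python; the layer sizes and random weights are illustrative assumptions.

# One forward pass through a feedforward network with one hidden layer.
# Information moves strictly forward: input -> hidden -> output.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=3)        # 3 input nodes
W1 = rng.normal(size=(4, 3))  # input-to-hidden weights (4 hidden nodes)
W2 = rng.normal(size=(2, 4))  # hidden-to-output weights (2 output nodes)

h = sigmoid(W1 @ x)           # hidden activations
y = sigmoid(W2 @ h)           # output activations
print(y)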

2. Kohonen self-organizing network:

➢ A set of artificial neurons learn to map points in an input space to coordinates in an output space.

➢ The input space can have different dimensions and topology from the output space, which the SOM will attempt to preserve; a one-step training sketch follows this list.
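
Below is a minimal sketch of one SOM training step; the grid size, learning rate, and input point are illustrative assumptions, and a full SOM would also update the neighbours of the winning unit via a neighbourhood function.

# One Kohonen SOM training step: find the best-matching unit (BMU) for an
# input and move its weight vector towards that input.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((5, 5, 2))  # a 5x5 output grid mapping a 2-D input space
x = np.array([0.3, 0.7])         # one input point
lr = 0.5                         # learning rate

dists = np.linalg.norm(weights - x, axis=2)            # distance to every unit
bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
weights[bmu] += lr * (x - weights[bmu])                # pull the BMU towards x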

3. Recurrent network:

Recurrent networks (RNs) propagate data from later processing stages back to earlier stages. Common recurrent architectures include the following; a minimal recurrent update step is sketched after the list.

➢ Simple recurrent network

➢ Hopfield network

➢ Echo state network

➢ Long short-term memory (LSTM) network
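
The sketch below shows the core recurrent update, assuming a plain (simple) recurrent unit with illustrative sizes and random weights: the hidden state computed at one step feeds back into the next step.

# A simple recurrent update over a 5-step input sequence: the new hidden
# state depends on the current input and on the previous hidden state.
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.normal(size=(4, 3))  # input-to-hidden weights
Wh = rng.normal(size=(4, 4))  # hidden-to-hidden (recurrent) weights

h = np.zeros(4)               # initial hidden state
for t in range(5):
    x_t = rng.normal(size=3)        # input at time step t
    h = np.tanh(Wx @ x_t + Wh @ h)  # feedback from the previous state
print(h)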

4. Radial basis function (RBF) network:

➢ Radial basis functions are powerful techniques for interpolation in multidimensional space.

➢ An RBF is a function that has a distance criterion with respect to a centre built into it.

➢ RBF networks have two layers of processing:

➢ Input is mapped onto each RBF in the 'hidden' layer.

➢ In regression problems the output layer is then a linear combination of hidden layer values representing mean predicted output.

➢ In classification problems the output layer is typically a sigmoid function of a linear combination of hidden layer values, representing a posterior probability.

➢ Performance in both cases is often improved by shrinkage techniques, known as ridge regression.

Architecture of a radial basis function network:

The input layer consists of n units, which represent the elements of the vector x. The k components of the sum in the definition of f are represented by the units of the hidden layer. The single output neuron gets its input from all hidden neurons.

The output is given by the following equation:

f(x) = Σ_{i=1}^{k} c_i h(||x − t_i||)

[Figure: Architecture of a radial basis function network]

Working Principle:

The principle of radial basis functions derives from the theory of function approximation. Given N pairs (x_i, y_i), with x_i ∈ R^n and y_i ∈ R, we look for a function f of the form:

f(x) = Σ_{i=1}^{k} c_i h(||x − t_i||)

Here h is the radial basis function (normally a Gaussian function) and the t_i are the k centres, which have to be selected. The coefficients c_i are also unknown at this point and have to be computed. The x_i and t_i are elements of an n-dimensional vector space.
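
The following is a minimal sketch of this working principle in Python: pick k centres t_i, then solve a linear least-squares problem for the coefficients c_i so that f fits the training pairs. The data, the centres, the Gaussian width, and the small ridge term (matching the shrinkage mentioned above) are illustrative assumptions.

# Fit f(x) = sum_i c_i * h(||x - t_i||) to sampled data with a Gaussian h.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))  # N = 40 training inputs x_i
y = np.sin(X[:, 0])                   # N training targets y_i

centers = np.linspace(-3, 3, 7).reshape(-1, 1)  # k = 7 centres t_i
sigma = 1.0                                     # Gaussian width

def h(r):
    # Gaussian radial basis function of the distance r.
    return np.exp(-r**2 / (2 * sigma**2))

# Design matrix: H[i, j] = h(||x_i - t_j||)
H = h(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))

ridge = 1e-6  # small shrinkage (ridge) term for numerical stability
c = np.linalg.solve(H.T @ H + ridge * np.eye(len(centers)), H.T @ y)

def f(x):
    # The fitted function for a scalar input x (the 1-D case).
    return h(np.abs(x - centers[:, 0])) @ c

print(f(0.5), np.sin(0.5))  # prediction vs. true value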


The RBF network uses a Gaussian function as the activation function of each hidden unit; the output layer performs a linear combination of the resulting localized bumps and is thus able to approximate any function.

When the RBF network is used for classification, the hidden layer performs clustering while the output layer performs classification. The hidden units respond most strongly when the input pattern is close to their centres, and progressively more weakly as the input pattern moves away from the centres. The output layer linearly combines all the outputs of the hidden layer; each output node then gives a value representing the probability that the input pattern falls under that class.
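
A minimal sketch of this classification behaviour, with assumed (not learned) centres and output weights, might look as follows: the hidden Gaussian responses are linearly combined and squashed with a sigmoid to give a class probability.

# RBF classification: strong hidden responses near a class centre push the
# output probability towards that class.
import numpy as np

def gaussian(r, sigma=1.0):
    return np.exp(-r**2 / (2 * sigma**2))

centers = np.array([[0.0, 0.0], [2.0, 2.0]])  # one centre per cluster
w = np.array([-3.0, 3.0])                     # output weights (assumed values)
b = 0.0

def p_class1(x):
    hidden = gaussian(np.linalg.norm(x - centers, axis=1))  # hidden responses
    return 1.0 / (1.0 + np.exp(-(hidden @ w + b)))          # sigmoid output

print(p_class1(np.array([2.1, 1.9])))  # near the second centre -> close to 1
print(p_class1(np.array([0.1, 0.0])))  # near the first centre  -> close to 0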

Advantages of Radial Basis Function Network:

➢ These networks train extremely fast and require fewer training samples

➢ Incremental updates to a trained network are faster than retraining it.

➢ Saves significant time compared to retraining from scratch.

➢ Savings in space and possibly in computation.

Disadvantages of Radial Basis Function Network:

➢ Tuning the various parameters (radius, centres, etc.) can get quite complicated.

➢ Choosing the right centers (for the hidden layer) is of critical importance.

➢ Training the supervised layer of the network using gradient descent can get stuck in a local minimum.

Applications of Radial Basis Function Network:

➢ Radial basis functions are used in the area of neural networks where they may be used as a replacement for the sigmoidal hidden layer transfer characteristic in multi-layer perceptrons.

➢ Also used for pattern classification

APPLICATIONS OF NEURAL NETWORKS:

Neural networks are well suited for prediction or forecasting needs including:

➢ sales forecasting

➢ industrial process control

➢ customer research

➢ data validation

➢ risk management

➢ target marketing

PATTERN RECOGNITION:

➢ Pattern recognition can be implemented by using a feed-forward neural network that has been trained accordingly.

➢ The network is trained to associate outputs with input patterns.

➢ When the network is used, it identifies the input pattern and tries to produce the associated output pattern; a minimal training-and-recall sketch follows this list.
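
As a minimal sketch of this train-then-recall cycle, the following Python snippet teaches a single unit to associate the two-input OR patterns with their outputs using the classic perceptron learning rule; the patterns, learning rate, and epoch count are illustrative assumptions.

# Training mode: adjust weights until each input pattern yields its target.
import numpy as np

patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
targets = np.array([0, 1, 1, 1])                       # associated outputs (OR)

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):
    for x, t in zip(patterns, targets):
        y = 1 if w @ x + b >= 0 else 0  # current output for this pattern
        w += lr * (t - y) * x           # move weights towards the target
        b += lr * (t - y)

# Using mode: recall the associated output for each taught pattern.
for x in patterns:
    print(x, "->", 1 if w @ x + b >= 0 else 0)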

MEDICINE:

➢ Neural networks are ideal in recognizing diseases using scans.

➢ They are also used in ECG for noise filtering.

➢ Neural Networks are used experimentally to model the human cardiovascular system.

➢ Electronic noses: Electronic noses have several potential applications in telemedicine.

➢ Instant physician: The net is presented with an input consisting of a set of symptoms; it will then find the full stored pattern that represents the "best" diagnosis and treatment.

BUSINESS:

There is strong potential for using neural networks for database mining, that is, searching for patterns implicit in the explicitly stored information in databases.

➢ Marketing: Used in the Airline Marketing Tactician (a trademark abbreviated as AMT).

➢ Credit evaluation: The HNC neural systems were applied to mortgage screening.

➢ Financial: Used in loan scoring. Also used in delinquency risk assessments.

CONTRIBUTION:

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm to perform a specific task, i.e. there is no need to understand the internal mechanisms of that task. Neural networks are also very well suited to real-time systems because of their fast response and computation times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. There are a number of scientists arguing that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

CONCLUSION:

Finally, we would like to state that even though neural networks have huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.
