Principles of Soft Computing — S. N. Sivanandam and S. N. Deepa

The McCulloch-Pitts neuron is usually called the M-P neuron. The notation mentioned in this section is used throughout this textbook for explaining each network:

s : training input vector
t : training output (target) vector
x_i : activation of input unit X_i
y_j : activation of output unit Y_j
w_ij : weight on the connection from unit X_i to unit Y_j
b_j : bias acting on unit Y_j
theta_j : threshold for activation of neuron Y_j
alpha : learning rate, which controls the amount of weight adjustment at each step of training
y_inj = b_j + sum_i x_i w_ij : net input to unit Y_j

The bias can be of two types: a positive bias, which increases the net input, and a negative bias, which decreases it. The bias acts like an adjustable weight on a connection whose activation is always 1; in this respect it plays the same role as the intercept in the equation of a straight line. The learning rate, denoted by alpha, lies between 0 and 1 and has to be chosen afresh for each and every application. The threshold value is used in the activation function: the output neuron fires only when its net input exceeds the threshold. If momentum has to be used, a momentum factor is added to the weight updation to speed up convergence; this is generally done in the back-propagation network.

The necessity of the linear separability concept was felt in order to classify patterns based upon their output responses. The linear separability of a network is based on its decision-boundary line; the net input calculated for a pattern determines on which side of this line the pattern falls. The decision line may also be called the decision-making line, decision-support line or linearly separable line. The threshold plays a major role in the M-P neuron: there is a fixed threshold for each neuron, and the neuron fires only when its net input reaches that threshold.

In the McCulloch-Pitts neuron model, the weights of the neuron are set along with the threshold to make the neuron perform a simple logic function. All excitatory connections entering a particular neuron have the same weight, and all inhibitory connections likewise; for inhibition to be absolute, the threshold has to be chosen so that a single active inhibitory input prevents the neuron from firing. Since the firing of the output neuron is based purely upon the threshold, an analysis has to be performed to determine the values of the weights and the threshold for each function.

Linear separability: if there exist weights (with bias) for which the training input vectors having a positive (correct) response lie on one side of the decision boundary and all the remaining vectors lie on the other side, the problem is said to be linearly separable. A decision line is drawn to separate the positive and negative responses. For the network shown in Figure 2-19 the net input is given as

    y_in = b + sum_i x_i w_i

and the decision boundary is determined by the relation

    b + sum_i x_i w_i = 0

The region on one side of this boundary gives a positive response and the region on the other side a negative response. On the basis of the number of input units in the network, the boundary is a line, a plane or a hyperplane. Consider, for example, a network having a positive response region in the first quadrant and a negative response region in all other quadrants; the AND function, with either binary or bipolar data, is of this kind.
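To make the threshold idea concrete, the following is a minimal sketch (not code from the book) of an M-P neuron with binary inputs, a common weight w on every excitatory connection, a common weight -p on every inhibitory connection and a fixed threshold theta; the function and parameter names are illustrative assumptions.

```python
def mp_neuron(excitatory, inhibitory, w=1, p=1, theta=1):
    """McCulloch-Pitts neuron: fires (returns 1) only when the net input
    reaches the fixed threshold theta.  All inputs are binary (0/1)."""
    net = w * sum(excitatory) - p * sum(inhibitory)
    return 1 if net >= theta else 0

# one excitatory and one inhibitory input with theta = 1 behaves like "x1 AND NOT x2"
print(mp_neuron([1], [0]))   # 1 -> fires
print(mp_neuron([1], [1]))   # 0 -> inhibited
```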

Hebb learning. Donald Hebb stated in 1949 that, in the brain, learning is performed by changes in the synaptic gap: when one cell repeatedly assists in firing another, the strength of the connection between them is increased. According to the Hebb rule, the weight vector increases in proportion to the product of the input and the learning signal, where the learning signal is equal to the neuron's output. The weight update is

    w_i(new) = w_i(old) + x_i * y

It may be noted that the bipolar representation is better than the binary one for Hebb learning. If binary data are used, a training pair in which an input unit is "on" (1) and the target value is "off" (0), or in which both the input unit and the target value are "off", produces no weight change at all; with bipolar data (+1/-1) every training pair contributes to the weight change.

The flowchart for the training algorithm of the Hebb network is given in the corresponding figure; the notations used in the flowchart have already been discussed in Section 2. Details for the effective training of a Hebb network are as follows. First initialize the weights and bias to zero. Then, for each training pair s : t, set the activations of the input units to the training input vector, set the activation of the output unit to the target value, and adjust the weights and bias according to the Hebb rule. Training continues till every pair of training input and target output has been presented, after which the process is stopped.
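As a small illustration (a sketch, not the book's program), one Hebb-rule update for a single neuron with bias can be written as follows; the function name is a hypothetical choice.

```python
def hebb_update(weights, bias, x, t):
    """One Hebb-rule step: w_i(new) = w_i(old) + x_i * t,  b(new) = b(old) + t."""
    new_w = [wi + xi * t for wi, xi in zip(weights, x)]
    return new_w, bias + t

# one training pair with bipolar data
print(hebb_update([0, 0], 0, [1, -1], 1))   # -> ([1, -1], 1)
```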

Summary. An ANN is constructed with a few basic building blocks; these building blocks are based on the models of artificial neurons and the topology of a few basic structures, which form single-layer and multilayer networks. The basic terminologies of ANN are discussed with their typical values, along with the concepts of supervised learning, various activation functions and different types of layered connections. A brief description of the McCulloch-Pitts neuron model is provided, and a detailed comparison between the biological neuron and the artificial neuron has been included to enable the reader to understand the basic difference between them. The concept of linear separability is discussed and illustrated with suitable examples.

Solved problems.

1. Calculate the net input for the network shown in Figure 2, with bias included in the network. The given net consists of two input neurons, a bias and one output neuron; the inputs and weights are as indicated in the figure, and the net input is obtained as y_in = b + x1 w1 + x2 w2.

2. For the network shown in Figure 1, the given neural net consists of three input neurons and one output neuron (a similar problem considers three input neurons with bias and one output neuron); the net input is calculated in the same way by summing the weighted inputs.

3. Obtain the output of neuron Y for the network shown in Figure 3, using binary and bipolar sigmoidal activation functions. The input unit activations are set, the net input is calculated, and the chosen activation function is applied over the net input to obtain the output.

4. Implement the AND function using a McCulloch-Pitts neuron (take binary data). Consider the truth table for the AND function (given below). The architecture has two input neurons and one output neuron, forming a single-layer network, and the net can be represented as shown in Figure 5. Assume that both weights are excitatory, say w1 = w2 = 1; with these assumed weights the net input is calculated for each input pair, and the threshold is fixed so that the neuron fires for the input (1, 1) and does not fire for all other input variations.

The truth table for the AND function is:

x1   x2   y
1    1    1
1    0    0
0    1    0
0    0    0

In the McCulloch-Pitts neuron only analysis is performed, not learning: the weights have to be decided only after analysing the problem. With both weights excitatory (w1 = w2 = 1), the net input for the inputs (1, 1) is 2 and for all other input pairs it is 0 or 1, so a threshold theta = 2 makes the neuron fire only for (1, 1).
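A quick way to check this analysis is to code the assumed weights and threshold directly; this is an illustrative sketch using the values w1 = w2 = 1 and theta = 2 derived above (the function name is an assumption, not the book's code).

```python
def and_mp(x1, x2, w1=1, w2=1, theta=2):
    """M-P neuron for AND: fires only when both binary inputs are 1."""
    return 1 if x1 * w1 + x2 * w2 >= theta else 0

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, and_mp(x1, x2))   # reproduces the AND truth table
```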

Implement the XOR function using a McCulloch-Pitts neuron (consider binary data). The truth table for this function is shown in Table 6. A single M-P neuron cannot realise XOR, so an intermediate layer is necessary: the network of Figure 6 (neural net for the XOR function) uses two intermediate units Z1 and Z2, whose weights, shown in the figure, are obtained after analysis. The truth table for the function Z2 is shown in Table 5.


The truth table for the XOR function is given in Table 9. A single-layer net is not sufficient to represent this function; the output y is therefore expressed through two intermediate functions,

    z1 = x1 AND NOT x2,   z2 = NOT x1 AND x2,   y = z1 OR z2

Case 1: for the z1 neuron (Figure 7, neural net for Z1), assume both weights to be excitatory; on the basis of the calculated net inputs it is found that the neuron cannot be made to fire only for the input (1, 0). Case 2: assume one weight as excitatory and the other as inhibitory (w11 = 1, w21 = -1). Now calculate the net inputs: for the input (1, 0) the net input is 1, so with threshold 1 the z1 neuron is "ON" for this input, and for the rest it is "OFF". A similar analysis gives the weights for the z2 neuron (w12 = -1, w22 = 1), and the output neuron y, with both weights from z1 and z2 excitatory and threshold 1, realises z1 OR z2. The overall net representation is given in Figure 6.
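The two-layer construction can likewise be verified with a short sketch (an illustration of the analysis above, using the weights w11 = 1, w21 = -1, w12 = -1, w22 = 1, v1 = v2 = 1 and threshold 1; not code from the book).

```python
def fire(net, theta=1):
    return 1 if net >= theta else 0

def xor_mp(x1, x2):
    """XOR via two intermediate M-P units: z1 = x1 AND NOT x2, z2 = x2 AND NOT x1."""
    z1 = fire(1 * x1 - 1 * x2)    # fires only for (1, 0)
    z2 = fire(-1 * x1 + 1 * x2)   # fires only for (0, 1)
    return fire(z1 + z2)          # output unit realises z1 OR z2

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, xor_mp(x1, x2))   # prints 0, 1, 1, 0
```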


Thus, for the XOR function, using the linear separability concept, two decision lines (one for z1 and one for z2) are required, since a single line cannot separate the responses.

Design a Hebb net to implement the logical AND function (use bipolar inputs and targets). The training data for the AND function are given in Table 9. Initially the weights and bias are set to zero. Presenting the first input pattern [1 1 1] (x1, x2 and bias input) with target +1, the Hebb rule gives the weight changes [dw1 dw2 db] = [1 1 1]; these weights are used as the initial weights when the second input pattern is presented. Proceeding in the same way for the remaining patterns, Table 10 shows the values of the weights for all inputs. For each set of weights the separating line equation is

    x2 = -(w1/w2) x1 - b/w2

and the weights are considered final weights if the boundary line obtained from them separates the positive response region from the negative response region: the input patterns for which the output response is "+1" lie on one side of the boundary and the remaining patterns lie on the other side. The boundary line turns out to be the same for both the third and fourth training pairs, so presenting all the input patterns completes the training, and the network can be represented as shown in the corresponding figure.

Design a Hebb net to implement the OR function (consider bipolar inputs and targets). Table 7 is the truth table for the OR function with bipolar inputs and targets. Setting the initial weights and bias to zero, presenting each pattern in turn and applying the Hebb rule with the old weights as the initial weights for the next pattern, Tables 11 and 12 show the weights calculated for all the inputs; finally the weights are w1 = 2, w2 = 2 with bias b = 2. Calculating the net input and output of the OR function on the basis of these weights and bias confirms that the boundary line obtained from the final weights separates the positive and negative response regions.
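The OR example can be reproduced with a few lines of code; this is a sketch of the hand calculation above (bipolar patterns, zero initial weights), not the book's program.

```python
# Hebb training of a single neuron (with bias) for OR, bipolar inputs and targets
patterns = [([1, 1], 1), ([1, -1], 1), ([-1, 1], 1), ([-1, -1], -1)]

w, b = [0, 0], 0
for x, t in patterns:
    w = [wi + xi * t for wi, xi in zip(w, x)]   # w_i(new) = w_i(old) + x_i * t
    b += t                                       # b(new) = b(old) + t
    print("after", x, "->", w, b)
# final weights [2, 2] and bias 2 separate the OR responses
```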

Design a Hebb net for the XOR function (bipolar inputs and targets). The training patterns for the XOR function are shown in the corresponding table. Setting the initial weights and bias to zero, presenting the first input pattern, then setting the old weights as the initial weights when the second input pattern is presented, and so on, the weight changes cancel one another, so when all four input patterns have been presented the weights and bias return to zero. The graph in Figure 15 (decision boundary for the XOR function) shows why: the four input pairs cannot be divided by a single line so as to separate them into two regions, i.e., the input patterns are linearly non-separable. Thus the XOR function cannot be realised by this single-layer net; solving it instead through the intermediate functions z1 and z2 results in two decision boundary lines for separating the positive and negative regions of the XOR function.

Find the weights required to perform the following classifications of the given input patterns using the Hebb rule. Each pattern is shown in 3 x 3 matrix form in the squares; the training input patterns for the nets of Figures 16 and 18 are indicated in the corresponding tables. In this case also, the initial weights and bias are set to zero and the Hebb rule is applied for each training pair in turn.
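Running the same few lines on the bipolar XOR patterns shows the failure numerically: every weight change is cancelled by a later one (again only an illustrative sketch).

```python
xor_patterns = [([1, 1], -1), ([1, -1], 1), ([-1, 1], 1), ([-1, -1], -1)]

w, b = [0, 0], 0
for x, t in xor_patterns:
    w = [wi + xi * t for wi, xi in zip(w, x)]
    b += t
print(w, b)   # -> [0, 0] 0 : the Hebb net learns nothing for XOR
```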

Review questions

1. Define an artificial neural network.
2. State the characteristics of an artificial neural network.
3. Compare and contrast the biological neuron and the artificial neuron; list the main components of the biological neuron.
4. State the properties of the processing element of an artificial neural network.
5. What are the basic models of an artificial neural network?
6. Define learning. What is the necessity of an activation function? List the commonly used activation functions.
7. Indicate the difference between excitatory and inhibitory weighted interconnections.
8. What is the other name for weight? What is the impact of weight in an artificial neural network?
9. Define bias and threshold. What is a learning rate parameter?
10. How does a momentum factor make convergence faster?
11. State the role of the vigilance parameter.
12. How many signals can be sent by a neuron at a particular time instant?
13. How can the equation of a straight line be formed from the net input calculation? What is the influence of a linear equation over the net input calculation?
14. In what ways is bipolar representation better than binary representation?
15. Define linear separability.
16. Why is the McCulloch-Pitts neuron widely used for logic functions?
17. State the training algorithm used for the Hebb network.
18. Draw a simple artificial neuron and discuss its operation; discuss in detail the historical development of artificial neural networks.

Exercise problems

1. Design neural networks with only one M-P neuron each that implement the three basic logic operations.
2. Calculate the output of neuron Y for the net shown in Figure 21, using binary and bipolar sigmoidal activation functions (the net has excitatory and inhibitory weighted interconnections, e.g. X2 is connected through an inhibitory weight).
3. Design a Hebb net to implement the logical AND function with (a) binary inputs and targets and (b) binary inputs and bipolar targets.
4. Classify the input patterns shown in Figure 22 using the Hebb training algorithm; set the initial weights and bias to zero (Table 17) and perform the classification using bipolar data as well as binary data.
5. Using the Hebb rule, find the weights required so that the vectors [1 -1 1 -1] and [...] belong to the given class (target); also, using each of the training vectors as input, test the response of the net.
6. Take a pair of letters or numerals of your own and, using the Hebb rule, find the weights required to classify them; write a computer program to classify the letters and numerals using the Hebb learning rule.
7. Write a computer program to train a Madaline to perform the AND function.

Supervised Learning Networks

The following topics are discussed in detail in this chapter: the basic networks used in supervised learning; the original perceptron layer description and the perceptron learning rule; the delta rule with a single output unit; the back-propagation network (BPN) and the various learning factors used in it; the difference between back-propagation and RBF networks; and functional link, wavelet and tree neural networks. (A typical exercise later in the chapter obtains input-output data by varying the input variables x1 and x2 within a given interval, normalises the output data, and applies training to find proper weights in the network.)

Perceptron networks. The key points to be noted in a perceptron network are as follows. The original perceptron network consists of three units: the sensory unit (input unit), the associator unit (hidden unit) and the response unit (output unit). The sensory units are connected to the associator units by fixed weights; these detectors provide a binary output, and the associator unit is found to consist of a set of subcircuits called feature predicates.

The sensory units are connected to the associator units with fixed weights having values 1, 0 or -1, assigned at random. The feature predicates are hard-wired to detect a specific feature of a pattern and are equivalent to the feature detectors; it can be found that the results from the predicate units are also binary (0 or 1). The last unit, the response unit, has an activation of 1, 0 or -1. The binary activation function is used in the sensory unit and the associator unit; a binary step with fixed threshold theta is used as the activation for the associator. The output y of the response unit is obtained on the basis of the net input calculated and the activation function applied over that net input.

The perceptron learning rule is used in the weight updation between the associator unit and the response unit. The weights in the input layer (between the sensory and associator units) are all fixed; only the weights between the associator and response units are adjusted. For each training input, the net input is calculated and the activation is applied to obtain the output; this output is compared with the target, and the weights on the connections from the units that send a nonzero signal are adjusted suitably. The weight updation in the case of perceptron learning is as follows: if the output y is not equal to the target t, then

    w_i(new) = w_i(old) + alpha * t * x_i,     b(new) = b(old) + alpha * t

otherwise the weights are left unchanged. The goal of the perceptron net is to classify the input patterns as belonging, or not belonging, to a particular class, and the network has to be suitably trained to obtain this response. The input-layer and output-layer neurons are connected through directed communication links. The flowchart depicted here presents the flow of the training process: as depicted in the flowchart, the entire loop of the training process continues until every training input pair has been presented to the network and there is no change in weight, at which point the loop is terminated.


The training algorithm of a perceptron network for a single output class is as follows:

Step 0: Initialize the weights and the bias (for easy calculation they may be set to zero). Also initialize the learning rate alpha (0 < alpha <= 1); for simplicity alpha is set to 1.
Step 1: Perform Steps 2-6 until the final stopping condition is false.
Step 2: Perform Steps 3-5 for each bipolar or binary training vector pair s : t.
Step 3: Set the activations of the input units, x_i = s_i. The input layer containing the input units uses the identity activation function.
Step 4: Calculate the output of the network: obtain the net input and then apply the activation function over the net input to obtain the output y.
Step 5: Weight and bias adjustment: compare the value of the actual (calculated) output with the desired (target) output. If y is not equal to t, then

    w_i(new) = w_i(old) + alpha * t * x_i,     b(new) = b(old) + alpha * t

else the weights and bias are left unchanged.
Step 6: Train the network until there is no weight change; this is the stopping condition for the network. If this condition is not met, start again from Step 2. The entire network is trained based on this stopping criterion.

For the testing algorithm, for each input vector X to be classified, the weights obtained from training are used: the input activations are set, the net input is calculated and the activation is applied over it to obtain the classification.
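A compact sketch of the training algorithm above, applied to the AND function with bipolar data (alpha = 1, theta = 0; the function and variable names are illustrative, not the book's program):

```python
def step(net, theta=0.0):
    """Perceptron activation: +1 above theta, -1 below -theta, 0 in between."""
    return 1 if net > theta else (-1 if net < -theta else 0)

def train_perceptron(patterns, alpha=1.0, theta=0.0, max_epochs=25):
    w, b = [0.0, 0.0], 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in patterns:
            y = step(b + sum(wi * xi for wi, xi in zip(w, x)), theta)
            if y != t:                                   # adjust only on error
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                                  # stopping condition: no weight change
            break
    return w, b

and_patterns = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
print(train_perceptron(and_patterns))   # e.g. ([1.0, 1.0], -1.0)
```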

Adaptive Linear Neuron (Adaline). The basic Adaline model consists of trainable weights; the bias in Adaline acts like an adjustable weight on a connection whose activation is always 1. Adaline uses bipolar activation for its input signals and its target output. The Adaline network may be trained using the delta rule, which updates the weights on the connections so as to minimize the difference between the net input to the output unit and the target value. The major aim is to minimize the error over all training patterns, and this is done by reducing the error for each pattern, one at a time: the Adaline model compares the actual output with the target output and adjusts the weights on the basis of this error factor, as given by the training algorithm. The conditions necessary for weight adjustment have to be checked carefully; the flowchart gives a pictorial representation of the network training.

The training algorithm for the Adaline network is as follows:

Step 0: The weights and the bias are set to some small random values (but not zero). Set the learning rate parameter alpha; the range of the learning rate can be between 0.1 and 1.0.
Step 1: Perform Steps 2-6 when the stopping condition is false.
Step 2: Perform Steps 3-5 for each bipolar training pair s : t.
Step 3: Set the activations of the input units, x_i = s_i.
Step 4: Calculate the net input to the output unit, y_in = b + sum_i x_i w_i.
Step 5: Weight updation:

    w_i(new) = w_i(old) + alpha * (t - y_in) * x_i,     b(new) = b(old) + alpha * (t - y_in)

Step 6: Test for the stopping condition: if the highest weight change that occurred during training is smaller than a specified tolerance, then stop the training process; otherwise continue. This is the test for the stopping condition of the network.
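The delta rule can be sketched as follows for the AND function with bipolar data. The learning rate, epoch count and initial values below are illustrative assumptions, and a fixed number of epochs is used instead of the tolerance-based stop described above, purely to keep the sketch short.

```python
# Adaline trained with the delta rule: dw_i = alpha * (t - y_in) * x_i
patterns = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
alpha = 0.1
w, b = [0.1, 0.1], 0.1                       # small non-zero initial values

for epoch in range(20):
    total_error = 0.0
    for x, t in patterns:
        y_in = b + sum(wi * xi for wi, xi in zip(w, x))   # net input to the output unit
        err = t - y_in
        for i, xi in enumerate(x):
            w[i] += alpha * err * xi                      # delta-rule weight update
        b += alpha * err                                  # delta-rule bias update
        total_error += err ** 2
    print(epoch, round(total_error, 4), [round(v, 3) for v in w], round(b, 3))

# the weights settle near [0.5, 0.5] with bias near -0.5, the least-squares solution
```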


Many Adaptive Linear Neurons (Madaline). The Adaline layer is present between the input layer and the Madaline (output) layer, so it can be regarded as a hidden layer. The weights v1, v2, ... connected from the Adaline layer to the Madaline output unit are fixed, while the weights between the input layer and the Adaline layer are adjusted during the training process. For training, set initial small random values for the Adaline weights, set the initial learning rate alpha, and then, for each bipolar input vector x, activate the input layer units, calculate the net inputs and update the adjustable weights using the delta-rule-based adjustments; on using this rule repeatedly, the final weights are obtained from the training algorithm. The Adaline and Madaline models can be applied effectively in communication systems, for example in adaptive equalizers, adaptive noise cancellation and other cancellation circuits.

Rough sets in data mining. Rough set theory describes a concept by a lower and an upper approximation, so rough sets can be used as a framework for data mining, especially in the areas of soft computing where exact data are not required and where approximate data can be of great help. Rough set theory can be used in different steps of data processing, such as computing the lower and upper approximations of a target set of objects.
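As an illustration of computing lower and upper approximations, here is a toy sketch with made-up records (not taken from the book or the paper); objects with identical attribute tuples form the equivalence classes of the indiscernibility relation.

```python
from collections import defaultdict

def approximations(objects, target):
    """Lower and upper approximations of the set `target` under the
    indiscernibility relation 'identical attribute tuple'."""
    classes = defaultdict(set)
    for name, attrs in objects.items():
        classes[attrs].add(name)            # equivalence classes
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:                    # wholly inside -> certainly in the concept
            lower |= eq
        if eq & target:                     # overlaps -> possibly in the concept
            upper |= eq
    return lower, upper

# four records described by (temperature, headache); concept = "has flu"
records = {"p1": ("high", "yes"), "p2": ("high", "yes"),
           "p3": ("normal", "no"), "p4": ("high", "no")}
print(approximations(records, {"p1", "p3"}))
# -> lower {'p3'}, upper {'p1', 'p2', 'p3'} (set order may vary); p1, p2 form the boundary region
```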

Neuro-Fuzzy Computing:- Neuro-fuzzy computation is one of the most popular hybridizations widely reported in the literature. It comprises a judicious integration of the merits of the neural and fuzzy approaches, enabling one to build more intelligent decision-making systems.

This incorporates the generic advantages of artificial neural networks like massive parallelism, robustness, and learning in data-rich environments into the system.

Besides these generic advantages, the neuro-fuzzy approach also provides the corresponding application-specific merits as highlighted earlier.

The rule generation aspect of neural networks is utilized to extract more natural rules from fuzzy neural networks.

The fuzzy MLP and fuzzy Kohonen network have been used for linguistic rule generation and inferencing. Here the input, besides being in quantitative, linguistic, or set forms, or a combination of these, can also be incomplete.

The components of the input vector consist of membership values in the overlapping partitions of the linguistic properties low, medium and high corresponding to each input feature. The output decision is provided in terms of class membership values.
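A small sketch of such an overlapping low/medium/high partition built from triangular membership functions; the break-points and value range below are illustrative assumptions, not values from the text.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x, lo=0.0, hi=10.0):
    """Membership of x in overlapping 'low', 'medium', 'high' partitions."""
    mid = (lo + hi) / 2.0
    low    = 1.0 if x <= lo else tri(x, lo - (mid - lo), lo, mid)   # left shoulder
    medium = tri(x, lo, mid, hi)
    high   = 1.0 if x >= hi else tri(x, mid, hi, hi + (hi - mid))   # right shoulder
    return low, medium, high

print(fuzzify(3.0))   # -> (0.4, 0.6, 0.0): mostly "medium", partly "low"
```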

The inferencing system can also query the user for unknown input variables that are key to reaching a decision.

Conclusion:- The synergistic combination of data mining methods and soft computing tools like fuzzy logic, genetic algorithms, neural networks, rough sets and their hybridizations can greatly improve the efficiency of data mining methods. Soft computing tools are suitable for solving data mining problems because of their characteristics of good robustness, self-organization, adaptivity, parallel processing, distributed storage and a high degree of fault tolerance.

Fuzzy sets provide a natural framework for the process of dealing with uncertainty. Neural networks and rough sets are widely used for classification and rule generation. Genetic algorithms are involved in various optimization and search processes, like query optimization and template selection.

Other approaches like case-based reasoning and decision trees are also widely used to solve data mining problems. Hence it may be concluded that both paradigms have their own merits, and by combining these merits synergistically the paradigms can be used in a complementary way for knowledge discovery in databases.



