With the development of computer technology, mathematical and statistical methods can now be applied with software tools. Applying the computational and decision-making capabilities of the human brain to engineering analysis through a modelling approach began with the development of Artificial Neural Networks. Artificial Neural Networks make it possible to create trainable, self-learning, adaptive, decision-making structures. They can be applied to almost any discipline and offer very meaningful results. An artificial neural network is a machine learning method inspired by the biological neural network, and its mathematical expression is modelled on the biological neuron.
Since artificial neural networks are based on biological models, the structure of a natural nerve cell must be understood before artificial neural networks themselves. Figure 1 below shows a biological neuron. The biological neuron consists of a nucleus, a soma and two types of appendages. One is the dendrite, which is short and branched and receives input information from other cells; the other is the axon, the long structure that transmits the cell's information, that is, its outputs, to other cells. The junction between an axon and a dendrite is called a synapse.
In an artificial neural network, the data to be learned are entered as inputs. These input data can consist of sound, images, voltage or pictures. The artificial neural network learns by imitating the process that takes place in nerve cells and reveals the relationships between the cases. In the mathematical model of an artificial neural network, the inputs are shown as X(n). Information processing is carried out when the inputs reach the middle layer.
The values referred to as weights in the mathematical model are symbolized as W(n); each weight expresses the effect of an input, or of a processing element in the previous layer, on the current processing element. Weights can be positive or negative; each input value is multiplied by its weight and summed with the bias value. The weights are initialized randomly and change as the chosen learning method is applied.
The inputs are multiplied by the randomly assigned weight values and summed according to the chosen learning method, as shown in equation 3.1. This summation is symbolized by NET. Adding NET to the bias, symbolized by b, gives the value symbolized by V; the bias determines the neuron's response threshold. V passes through the activation function and yields the output, indicated by y; the activation function is symbolized as in equation 3.5. In the matrix representation, the input values form a column and the weight values form the rows of the weight matrix, as seen in equations 3.3 and 3.4. An epoch is one complete pass over all training inputs during which the weights are updated.
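The single-neuron computation described above can be sketched in a few lines of code. This is an illustrative sketch, not the thesis's implementation; the function name and the choice of a sigmoid activation are assumptions for the example.

```python
import math

def neuron_output(inputs, weights, bias):
    # NET: weighted sum of the inputs (equation 3.1 in the text)
    net = sum(x * w for x, w in zip(inputs, weights))
    # V: NET plus the bias b
    v = net + bias
    # y: V passed through an activation function (sigmoid chosen here)
    return 1.0 / (1.0 + math.exp(-v))

# Example with two inputs and hand-picked weights and bias
y = neuron_output([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(round(y, 4))
```

During training, the same computation is repeated for every input pattern in every epoch, with only the weights and bias changing between passes.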
Many different activation functions can be used in artificial neural networks. Although none of these properties is strictly required, an activation function is generally expected to be nonlinear, differentiable, monotonically increasing or decreasing, and to converge at the origin; bounds below and above are not required. Some examples of activation functions are as follows.
Linear (Purelin) function
This function passes the net input through exactly as the output. It is linear and differentiable, has no upper or lower bound, is monotonically increasing, and converges at the origin. Figure 3 below shows the linear function together with its mathematical equation.
Logsig (Sigmoid) function
Looking at the literature, the sigmoid function is one of the most widely applied activation functions. The sigmoid function is not linear, but it produces balanced outputs that allow both linear and nonlinear relationships to be modelled; it is differentiable, has both a lower and an upper limit, and is monotonically increasing. The sigmoid function can be seen in figure 4 below, and equation 3.7 is its mathematical representation.
Hyperbolic tangent (Tansig) function
This is another of the most frequently used activation functions in ANN. To use this function, the input values are first normalized to the range -1 to 1; the output values also lie in this range. The function is monotonically increasing and converges at the origin. The hyperbolic tangent (tansig) function can be seen in figure 5 below, and equation 3.8 is its mathematical representation.
Hard sigmoid function
This function is a piecewise-linear approximation of the sigmoid; it has both a lower and an upper limit.
Soft-sign function
This function is not linear but is differentiable; it is bounded between -1 and 1, is monotonically increasing and converges at the origin, where |x| denotes the absolute value of the input. The soft-sign function is usually used for regression problems. It can be seen in figure 6 below, and equation 3.9 is its mathematical representation.
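The activation functions discussed above can be sketched as follows. The formulas are the standard textbook definitions (the hard-sigmoid slope of 0.2 is one common convention, not taken from this text), so this is an illustrative sketch rather than the thesis's exact equations 3.6 to 3.9.

```python
import math

def purelin(x):
    # Linear: the net input is produced exactly as the output
    return x

def logsig(x):
    # Sigmoid: nonlinear, differentiable, bounded in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    # Hyperbolic tangent: bounded in (-1, 1), converges at the origin
    return math.tanh(x)

def hard_sigmoid(x):
    # Piecewise-linear approximation of the sigmoid, clipped to [0, 1]
    return max(0.0, min(1.0, 0.2 * x + 0.5))

def softsign(x):
    # Soft sign: x / (1 + |x|), bounded in (-1, 1)
    return x / (1.0 + abs(x))

print(purelin(2.0), tansig(0.0), softsign(-1.0))
```

Comparing their outputs for a few values makes the bounds visible: purelin grows without limit, while logsig, tansig, hard sigmoid and soft sign saturate at their respective limits.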
Unlike the single-layer system, the multi-layer system has hidden layers. Each hidden layer's outputs become the inputs of the next hidden layer. In both single-layer and multi-layer networks, the input data are shown by Xn and the output data by Yn; between the inputs and outputs, a multi-layer neural network has hidden layers, and the number of neurons in the hidden layers can be varied experimentally according to the chosen method until the most accurate learning is achieved.
Neurons within a layer are not connected to each other; they transfer the information in the system to the next layer or to the output. Neurons in two consecutive layers affect each other through different activation values and perform a transfer that determines the learning level of the model. Figure 11 shows a multi-layer network.
Artificial Neural Network Models
Depending on the direction of the signal flow, neural networks are of two types: feed-forward and feedback networks.
Feed Forward Neural Networks
In feed-forward ANNs, cells are arranged in layers and the outputs of the cells in one layer are given as inputs to the next layer via weights. An ANN used to solve a problem is only as precise as its number of layers and the number of cells in its middle layers allow. Besides handling incomplete information in areas such as object recognition and signal processing, feed-forward ANNs are also widely used in the diagnosis and control of systems. Figure 12 shows a feed-forward neural network.
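A forward pass through such a layered network can be sketched as below: each layer's outputs feed the next layer's inputs through the weights. The weights here are fixed, hand-picked values for illustration; in practice they would be learned during training.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # One neuron per row of `weights`: each neuron sums its weighted
    # inputs, adds its bias, and applies the activation function.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, layers):
    # `layers` is a list of (weights, biases) pairs, one per layer;
    # the output of each layer becomes the input of the next.
    for weights, biases in layers:
        x = layer_forward(x, weights, biases)
    return x

# 2 inputs -> hidden layer of 3 neurons -> 1 output neuron
hidden = ([[0.1, 0.4], [-0.2, 0.3], [0.5, -0.1]], [0.0, 0.1, -0.1])
output = ([[0.3, -0.6, 0.2]], [0.05])
print(feed_forward([1.0, 0.5], [hidden, output]))
```

Note that the signal only ever moves from one layer to the next; no output is returned to an earlier layer, which is what distinguishes this architecture from the feedback networks below.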
Feedback Neural Networks
In a feedback ANN, the output of at least one cell is fed back as input to itself or to other cells, and this feedback is usually realized through a delay element. Feedback can occur between cells within a layer as well as between cells of different layers. Figure 13 shows a feedback neural network.
Learning Rules in Artificial Neural Networks
The learning rules used in artificial neural networks are described below.
Error Correction Learning
This method trains the network using the error. With an algorithm such as the back-propagation algorithm, the error values are used to adjust the weights. If the actual system output is y and the desired output is k, the error can be written as e = k - y. Error-correction learning algorithms try to minimize this error signal in each training iteration by adjusting the weight values. The most popular algorithm for this kind of learning is the gradient descent algorithm.
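A minimal sketch of error-correction learning with gradient descent on a single linear neuron follows: the error e = k - y drives each weight update. The learning rate, data and iteration count are illustrative assumptions; a full back-propagation implementation over multiple layers works on the same principle.

```python
def train_step(x, k, weights, bias, lr=0.1):
    # Current output of a linear neuron
    y = sum(xi * wi for xi, wi in zip(x, weights)) + bias
    e = k - y  # error signal, e = k - y
    # Gradient-descent update: move each weight in the direction
    # that reduces the squared error
    weights = [wi + lr * e * xi for wi, xi in zip(weights, x)]
    bias += lr * e
    return weights, bias, e

w, b = [0.0, 0.0], 0.0
for _ in range(50):                # repeated training iterations
    w, b, e = train_step([1.0, 2.0], 1.0, w, b)
print(round(e, 6))                 # the error shrinks toward zero
```

Each iteration reduces the error by a constant factor for this fixed input, so after a few dozen steps the output matches the target to machine precision.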
Self (Unsupervised) Learning
In this learning style, only sample inputs are given; no sample outputs are provided. The system is expected to learn the relationships between the parameters in the examples by itself. This learning method is mostly used for classification problems. Based on the given inputs, the network creates its own rules so that the samples are classified among themselves.
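One way to sketch this self-organizing behaviour is competitive learning: only inputs are given, and each unit's weight vector moves toward the inputs it "wins", so the network forms its own classes without any target outputs. The data, unit count and learning rate are illustrative assumptions.

```python
def closest(units, x):
    # Index of the unit whose weight vector is nearest to the input
    return min(range(len(units)),
               key=lambda i: sum((w - xi) ** 2 for w, xi in zip(units[i], x)))

def competitive_train(samples, units, lr=0.5, epochs=10):
    for _ in range(epochs):
        for x in samples:
            i = closest(units, x)              # winning unit
            units[i] = [w + lr * (xi - w)      # move winner toward the input
                        for w, xi in zip(units[i], x)]
    return units

# Two obvious clusters; the two units settle near one cluster each
samples = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
units = competitive_train(samples, [[0.2, 0.2], [0.8, 0.8]])
print([closest(units, s) for s in samples])  # -> [0, 0, 1, 1]
```

No labels were supplied, yet the network ends up assigning each sample to the cluster it belongs to, which is exactly the classification-by-self-organization described above.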
Supervised Learning
During training, an input vector and a target output vector are given to the system in pairs, and the weight values in the system are updated and changed accordingly.
Reinforcement Learning
This learning rule is close to supervised learning. However, instead of giving the target output to the artificial neural network, a criterion that evaluates the goodness of the output obtained for the given input is used. The Boltzmann rule developed by Hinton and Sejnowski, and genetic algorithms, are examples of reinforcement learning used to solve optimization problems.
According to the learning time
The distinction according to learning time is described below.
In the first case, the Artificial Neural Network is trained with the training data and the resulting structure of the network is recorded. From then on, the network always works with this same structure; nothing changes during its use.
In the second case, after being trained with the training data, the Artificial Neural Network continues to adjust itself during its use, thus yielding a constantly learning ANN.
Artificial Neural Networks Advantages and Disadvantages
Over the years, researchers have tried many different methods to detect ferroresonance, such as power spectral density, the wavelet transform of the current, the short-time Fourier transform and the continuous wavelet transform. While these methods essentially rely on statistical values of the frequencies, the neural network method is a more dynamic detection approach for nonlinear problems.
Artificial Neural Networks have gained popularity over alternative techniques because they excel at discovering relationships among large sets of data and at learning the particular status or operating condition of the target systems. Artificial Neural Networks can work with incomplete knowledge and have fault tolerance: the corruption of one or more cells of an Artificial Neural Network does not prevent it from generating output.
On the other hand, an ANN is created as the result of an efficient algorithm and accumulated experience. There is no specific formula, and the training duration of a network cannot be predicted in advance; for example, in this thesis, training NN27 took forty-one minutes while NN18 took one hour and forty minutes.
It should not be ignored that the disadvantages of Artificial Neural Networks, an ever-evolving branch of science, are being eliminated one by one while their advantages widen day by day. This means that artificial neural networks will become an increasingly essential part of our lives.