
# Radial Basis Function Network *

In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.

## Network architecture

Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^n$. The output of the network is then a scalar function of the input vector, $\varphi : \mathbb{R}^n \to \mathbb{R}$, and is given by

$$\varphi(\mathbf{x}) = \sum_{i=1}^N a_i \, \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)$$

where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_i$ is the center vector for neuron $i$, and $a_i$ is the weight of neuron $i$ in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better in general) and the radial basis function is commonly taken to be Gaussian:

$$\rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert) = \exp\left(-\beta_i \lVert \mathbf{x} - \mathbf{c}_i \rVert^2\right).$$

The Gaussian basis functions are local to the center vector in the sense that

$$\lim_{\lVert \mathbf{x} \rVert \to \infty} \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert) = 0,$$

i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.

Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of $\mathbb{R}^n$. This means that an RBF network with enough hidden neurons can approximate any continuous function on that subset with arbitrary precision.

The parameters $a_i$, $\mathbf{c}_i$, and $\beta_i$ are determined in a manner that optimizes the fit between $\varphi$ and the data.
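
As a concrete illustration, the following minimal NumPy sketch evaluates the unnormalized network output defined above for a batch of inputs; all names are illustrative.

```python
import numpy as np

def rbf_forward(X, centers, betas, weights):
    """phi(x) = sum_i a_i * exp(-beta_i * ||x - c_i||^2) for each row x of X.

    X: (m, n) inputs; centers: (N, n); betas: (N,); weights: (N,).
    """
    # Squared Euclidean distances ||x - c_i||^2, shape (m, N).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    rho = np.exp(-betas * d2)   # hidden-layer activations
    return rho @ weights        # linear output neuron
```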

### Normalized

#### Normalized architecture

In addition to the above unnormalized architecture, RBF networks can be normalized. In this case the mapping is

$$\varphi(\mathbf{x}) \stackrel{\mathrm{def}}{=} \frac{\sum_{i=1}^N a_i \, \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)}{\sum_{i=1}^N \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)} = \sum_{i=1}^N a_i \, u(\lVert \mathbf{x} - \mathbf{c}_i \rVert),$$

where

$$u(\lVert \mathbf{x} - \mathbf{c}_i \rVert) \stackrel{\mathrm{def}}{=} \frac{\rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)}{\sum_{j=1}^N \rho(\lVert \mathbf{x} - \mathbf{c}_j \rVert)}$$

is known as a "normalized radial basis function".
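
The normalized mapping differs only in the division by the summed activations; a minimal sketch, reusing the imports and conventions of the previous snippet:

```python
def normalized_rbf_forward(X, centers, betas, weights):
    """phi(x) = sum_i a_i * u_i(x), with u_i(x) = rho_i(x) / sum_j rho_j(x)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    rho = np.exp(-betas * d2)
    u = rho / rho.sum(axis=1, keepdims=True)  # normalized basis functions
    return u @ weights
```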

#### Theoretical motivation for normalization

There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density

$$P(\mathbf{x} \wedge y) = \frac{1}{N} \sum_{i=1}^N \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert) \, \sigma(|y - e_i|)$$

where the weights $\mathbf{c}_i$ and $e_i$ are exemplars from the data and we require the kernels to be normalized,

$$\int \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert) \, d^n\mathbf{x} = 1$$

and

$$\int \sigma(|y - e_i|) \, dy = 1.$$

The probability densities in the input and output spaces are

$$P(\mathbf{x}) = \int P(\mathbf{x} \wedge y) \, dy = \frac{1}{N} \sum_{i=1}^N \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)$$

and

$$P(y) = \int P(\mathbf{x} \wedge y) \, d^n\mathbf{x} = \frac{1}{N} \sum_{i=1}^N \sigma(|y - e_i|).$$

The expectation of $y$ given an input $\mathbf{x}$ is

$$\varphi(\mathbf{x}) \stackrel{\mathrm{def}}{=} E(y \mid \mathbf{x}) = \int y \, P(y \mid \mathbf{x}) \, dy$$

where $P(y \mid \mathbf{x})$ is the conditional probability of $y$ given $\mathbf{x}$. The conditional probability is related to the joint probability through Bayes' theorem,

$$P(y \mid \mathbf{x}) = \frac{P(\mathbf{x} \wedge y)}{P(\mathbf{x})},$$

which yields

$$\varphi(\mathbf{x}) = \int y \, \frac{P(\mathbf{x} \wedge y)}{P(\mathbf{x})} \, dy.$$

This becomes

$$\varphi(\mathbf{x}) = \frac{\sum_{i=1}^N e_i \, \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)}{\sum_{j=1}^N \rho(\lVert \mathbf{x} - \mathbf{c}_j \rVert)} = \sum_{i=1}^N e_i \, u(\lVert \mathbf{x} - \mathbf{c}_i \rVert)$$

when the integrations are performed.

### Local linear models

It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,

$$\varphi(\mathbf{x}) = \sum_{i=1}^N \left( a_i + \mathbf{b}_i \cdot (\mathbf{x} - \mathbf{c}_i) \right) \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert)$$

and

$$\varphi(\mathbf{x}) = \sum_{i=1}^N \left( a_i + \mathbf{b}_i \cdot (\mathbf{x} - \mathbf{c}_i) \right) u(\lVert \mathbf{x} - \mathbf{c}_i \rVert)$$

in the unnormalized and normalized cases, respectively. Here $\mathbf{b}_i$ are weights to be determined. Higher order linear terms are also possible.

This result can be written

$$\varphi(\mathbf{x}) = \sum_{i=1}^{2N} \sum_{j=1}^{n} e_{ij} \, v_{ij}(\mathbf{x} - \mathbf{c}_i)$$

where

$$e_{ij} = \begin{cases} a_i, & \text{if } i \in [1, N] \\ b_{ij}, & \text{if } i \in [N+1, 2N] \end{cases}$$

and

$$v_{ij}(\mathbf{x} - \mathbf{c}_i) \stackrel{\mathrm{def}}{=} \begin{cases} \delta_{ij} \, \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert), & \text{if } i \in [1, N] \\ (x_j - c_{ij}) \, \rho(\lVert \mathbf{x} - \mathbf{c}_i \rVert), & \text{if } i \in [N+1, 2N] \end{cases}$$

in the unnormalized case and

$$v_{ij}(\mathbf{x} - \mathbf{c}_i) \stackrel{\mathrm{def}}{=} \begin{cases} \delta_{ij} \, u(\lVert \mathbf{x} - \mathbf{c}_i \rVert), & \text{if } i \in [1, N] \\ (x_j - c_{ij}) \, u(\lVert \mathbf{x} - \mathbf{c}_i \rVert), & \text{if } i \in [N+1, 2N] \end{cases}$$

in the normalized case.

Here $\delta_{ij}$ is a Kronecker delta function defined as

$$\delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j. \end{cases}$$
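
Under the same conventions as the earlier snippets, a sketch of the first-order local linear forward pass in the unnormalized case (the names `a` and `B` for the constant terms and slope vectors are illustrative):

```python
def local_linear_rbf_forward(X, centers, betas, a, B):
    """phi(x) = sum_i (a_i + b_i . (x - c_i)) * rho_i(x).

    a: (N,) constant terms a_i; B: (N, n) slope vectors b_i.
    """
    diff = X[:, None, :] - centers[None, :, :]        # (m, N, n)
    rho = np.exp(-betas * (diff ** 2).sum(axis=2))    # (m, N)
    local = a + (diff * B[None, :, :]).sum(axis=2)    # a_i + b_i . (x - c_i)
    return (local * rho).sum(axis=1)
```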

## Training

RBF networks are typically trained by a two-step algorithm. In the first step, the center vectors $\mathbf{c}_i$ of the RBF functions in the hidden layer are chosen. This step can be performed in several ways: centers can be randomly sampled from some set of examples, or they can be determined using k-means clustering. Note that this step is unsupervised. A third backpropagation step can be performed to fine-tune all of the RBF net's parameters.

The second step simply fits a linear model with coefficients $w_i$ to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:

$$K(\mathbf{w}) \stackrel{\mathrm{def}}{=} \sum_{t=1}^{\infty} K_t(\mathbf{w})$$

where

$$K_t(\mathbf{w}) \stackrel{\mathrm{def}}{=} \left[ y(t) - \varphi(\mathbf{x}(t), \mathbf{w}) \right]^2.$$

We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.
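
The two steps can be sketched as follows, assuming Gaussian basis functions with a single shared width $\beta$; centers are chosen here by random sampling of the training inputs (one of the options named above; k-means works as well), and the weights are then obtained by least squares. All names are illustrative.

```python
import numpy as np

def train_rbf(X, y, N, beta, seed=0):
    """Two-step training sketch for an unnormalized Gaussian RBF network."""
    rng = np.random.default_rng(seed)
    # Step 1 (unsupervised): pick N centers among the training inputs.
    centers = X[rng.choice(len(X), size=N, replace=False)]
    # Hidden-layer design matrix, g_ti = rho(||x_t - c_i||).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-beta * d2)
    # Step 2 (supervised): minimize K(w) = sum_t [y_t - phi(x_t, w)]^2.
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    return centers, w
```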

There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as

$$H(\mathbf{w}) \stackrel{\mathrm{def}}{=} K(\mathbf{w}) + \lambda S(\mathbf{w}) \stackrel{\mathrm{def}}{=} \sum_{t=1}^{\infty} H_t(\mathbf{w})$$

where

$$S(\mathbf{w}) \stackrel{\mathrm{def}}{=} \sum_{t=1}^{\infty} S_t(\mathbf{w})$$

and

$$H_t(\mathbf{w}) \stackrel{\mathrm{def}}{=} K_t(\mathbf{w}) + \lambda S_t(\mathbf{w}),$$

where optimization of $S$ maximizes smoothness and $\lambda$ is known as a regularization parameter.
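
As one concrete special case, sketched below, taking the smoothness penalty $S$ to be the squared weight norm reduces the problem to ridge (Tikhonov) regression with a closed-form solution; this particular choice of $S$ is an assumption made for illustration, not the only possibility.

```python
def fit_weights_regularized(G, y, lam):
    """Minimize ||y - G w||^2 + lam * ||w||^2 (ridge regression, one
    concrete stand-in for the smoothness penalty S; lam >= 0)."""
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ y)
```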

### Interpolation

RBF networks can be used to interpolate a function $y : \mathbb{R}^n \to \mathbb{R}$ when the values of that function are known on a finite number of points: $y(\mathbf{x}_i) = b_i$, $i = 1, \ldots, N$. Taking the known points $\mathbf{x}_i$ to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, $g_{ji} = \rho(\lVert \mathbf{x}_j - \mathbf{x}_i \rVert)$, the weights can be solved from the equation

$$\begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1N} \\ g_{21} & g_{22} & \cdots & g_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ g_{N1} & g_{N2} & \cdots & g_{NN} \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{bmatrix}.$$

It can be shown that the interpolation matrix in the above equation is non-singular if the points $\mathbf{x}_i$ are distinct, and thus the weights $\mathbf{w}$ can be solved by simple linear algebra:

$$\mathbf{w} = \mathbf{G}^{-1} \mathbf{b}.$$
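
A sketch of exact interpolation with Gaussian basis functions, following the equation above: the data points themselves serve as centers, so $\mathbf{G}$ is square and, for distinct points, invertible.

```python
def rbf_interpolate(X, b, beta):
    """Solve G w = b with g_ji = rho(||x_j - x_i||) and centers = data points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-beta * d2)        # interpolation matrix
    return np.linalg.solve(G, b)  # w = G^{-1} b
```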

### Function approximation

If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the width and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.

#### Training the basis function centers

Basis function centers can be randomly sampled among the input instances, obtained by an orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers.

The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
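
One conventional concrete version of this rule, sketched below, sets a shared width $\sigma = d_{\max} / \sqrt{2N}$ from the maximum distance $d_{\max}$ between the chosen centers; the particular proportionality constant is a common heuristic choice assumed here for illustration, not mandated by the text.

```python
def common_beta(centers):
    """Fix all widths from the maximum inter-center distance d_max,
    using the heuristic sigma = d_max / sqrt(2 N) (one common choice)."""
    d = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
    sigma = d.max() / np.sqrt(2 * len(centers))
    return 1.0 / (2 * sigma ** 2)   # beta in exp(-beta * ||x - c||^2)
```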

#### Pseudoinverse solution for the linear weights

After the centers $\mathbf{c}_i$ have been fixed, the weights that minimize the error at the output are computed with a linear pseudoinverse solution:

$$\mathbf{w} = \mathbf{G}^{+} \mathbf{b},$$

where the entries of $\mathbf{G}$ are the values of the radial basis functions evaluated at the points $\mathbf{x}_j$: $g_{ji} = \rho(\lVert \mathbf{x}_j - \mathbf{c}_i \rVert)$.
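
In NumPy terms, with $\mathbf{G}$ built as in the earlier training sketch, the pseudoinverse solution is a single call:

```python
# G: (m, N) matrix of hidden activations g_ji = rho(||x_j - c_i||); y: (m,) targets
w = np.linalg.pinv(G) @ y   # w = G^+ y, the minimum-norm least-squares solution
```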

The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have a unique local minimum (when the centers are fixed).

#### Gradient descent training of the linear weights

Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found):

$$\mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac{d}{d\mathbf{w}} H_t(\mathbf{w}),$$

where $\nu$ is a "learning parameter."

For the case of training the linear weights, $a_i$, the algorithm becomes

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi(\mathbf{x}(t), \mathbf{w}) \right] \rho(\lVert \mathbf{x}(t) - \mathbf{c}_i \rVert)$$

in the unnormalized case and

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi(\mathbf{x}(t), \mathbf{w}) \right] u(\lVert \mathbf{x}(t) - \mathbf{c}_i \rVert)$$

in the normalized case.
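
A sketch of a single stochastic gradient step on the linear weights of the unnormalized network, under the same conventions as the earlier snippets (for the normalized case, replace the activations `rho` by the normalized basis functions `u`):

```python
def sgd_step_linear(a, x_t, y_t, centers, betas, nu):
    """One gradient step on the linear weights a_i of the unnormalized net:
    a_i <- a_i + nu * [y(t) - phi(x(t), w)] * rho_i(x(t))."""
    rho = np.exp(-betas * ((x_t - centers) ** 2).sum(axis=1))
    err = y_t - rho @ a            # instantaneous error y(t) - phi(x(t), w)
    return a + nu * err * rho
```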

#### Projection operator training of the linear weights

For the case of training the linear weights, $a_i$, the algorithm becomes

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi(\mathbf{x}(t), \mathbf{w}) \right] \frac{\rho(\lVert \mathbf{x}(t) - \mathbf{c}_i \rVert)}{\sum_{j=1}^N \rho^2(\lVert \mathbf{x}(t) - \mathbf{c}_j \rVert)}$$

in the unnormalized case and

$$a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi(\mathbf{x}(t), \mathbf{w}) \right] \frac{u(\lVert \mathbf{x}(t) - \mathbf{c}_i \rVert)}{\sum_{j=1}^N u^2(\lVert \mathbf{x}(t) - \mathbf{c}_j \rVert)}$$

in the normalized case.

## References

1. Broomhead, D. S.; Lowe, David (1988). Radial basis functions, multi-variable functional interpolation and adaptive networks (Technical report). RSRE. 4148.
2. Broomhead, D. S.; Lowe, David (1988). "Multivariable functional interpolation and adaptive networks". Complex Systems 2: 321–355.
3. Schwenker, Friedhelm; Kestler, Hans A.; Palm, Günther (2001). "Three learning phases for radial-basis-function networks". Neural Networks 14: 439–458. doi:10.1016/s0893-6080(01)00027-2. CiteSeerX: 10.1.1.109.312
4. Park, J.; I. W. Sandberg (Summer 1991). "Universal Approximation Using Radial-Basis-Function Networks". Neural Computation 3 (2): 246–257. doi:10.1162/neco.1991.3.2.246. Retrieved 26 March 2013.
• J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281-294 (1989). Also see Radial basis function networks according to Moody and Darken
• T. Poggio and F. Girosi, "Networks for approximation and learning," Proc. IEEE 78(9), 1484-1487 (1990).
• Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, "Function approximation and time series prediction with neural networks," Proceedings of the International Joint Conference on Neural Networks, June 17–21, p. I-649 (1990).
• Martin D. Buhmann (2003). Radial Basis Functions: Theory and Implementations. Cambridge University Press. ISBN 0-521-63338-9.
• Yee, Paul V. and Haykin, Simon (2001). Regularized Radial Basis Function Networks: Theory and Applications. John Wiley. ISBN 0-471-35349-3.
• John R. Davies, Stephen V. Coggeshall, Roger D. Jones, and Daniel Schutzer, "Intelligent Security Systems," in Freedman, Roy S., Flein, Robert A., and Lederman, Jess, Editors (1995). Artificial Intelligence in the Capital Markets. Chicago: Irwin. ISBN 1-55738-811-3.
• Simon Haykin (1999). Neural Networks: A Comprehensive Foundation (2nd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-908385-5.
• S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks", IEEE Transactions on Neural Networks, Vol 2, No 2 (Mar) 1991.

* https://en.wikipedia.org/