Self-Organizing Maps

A self-organizing map (SOM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map.


The Self-Organizing Map (SOM), with its variants, is the most popular artificial neural network algorithm in the unsupervised learning category. A great many research articles on it have appeared in the open literature, and many industrial projects use the SOM as a tool for solving hard real-world problems.

Many fields of science have adopted the SOM as a standard analytical tool, including statistics, signal processing, control theory, financial analysis, experimental physics, chemistry, and medicine. The SOM solves difficult high-dimensional and nonlinear problems such as feature extraction and classification of images and acoustic patterns, adaptive control of robots, and equalization, demodulation, and error-tolerant transmission of signals in telecommunications.

A newer application area is the organization of very large document collections. Like most artificial neural networks, SOMs operate in two modes: training, which builds the map from input examples, and mapping, which classifies new input vectors using the trained map. The visible part of a self-organizing map is the map space, which consists of components called nodes or neurons.


The map space is defined beforehand, usually as a finite two-dimensional region where nodes are arranged in a regular hexagonal or rectangular grid. While the nodes in the map space stay fixed, training consists of moving the weight vectors toward the input data (reducing a distance metric) without spoiling the topology induced from the map space. Thus, the self-organizing map describes a mapping from a higher-dimensional input space to a lower-dimensional map space. Once trained, the map can classify a vector from the input space by finding the node whose weight vector is closest (has the smallest distance metric) to that input vector.
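To make this structure concrete, here is a minimal sketch (the function name, array layout, and random initialization scale are my own assumptions, and a rectangular grid is used for simplicity): a SOM is just an array of fixed node positions in the two-dimensional map space plus an array of trainable weight vectors in the input space.

```python
import numpy as np

def make_som(n_rows, n_cols, input_dim, rng=None):
    """Create a rectangular SOM: fixed node positions plus trainable weights.

    positions : (n_rows * n_cols, 2) fixed grid coordinates in map space
    weights   : (n_rows * n_cols, input_dim) weight vectors in input space,
                here initialized to small random values
    """
    rng = np.random.default_rng() if rng is None else rng
    # Node positions in the 2-D map space never move during training.
    positions = np.array([(r, c) for r in range(n_rows) for c in range(n_cols)],
                         dtype=float)
    # One weight vector per node, living in the (higher-dimensional) input space.
    weights = rng.random((n_rows * n_cols, input_dim)) * 0.1
    return positions, weights
```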

The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain. The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights.



The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually presented several times, as iterations. The training utilizes competitive learning. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU).
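As a hedged illustration of this BMU search (the function name and array layout follow the sketch above, not the source), the winner is simply the argmin of the Euclidean distances from the input to all weight vectors:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Index of the neuron whose weight vector is closest (Euclidean) to input x.

    weights : (n_nodes, input_dim) array of weight vectors
    x       : (input_dim,) input vector
    """
    # Squared Euclidean distance is sufficient for an argmin.
    d2 = np.sum((weights - x) ** 2, axis=1)
    return int(np.argmin(d2))
```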

The weights of the BMU and of the neurons close to it in the SOM grid are adjusted towards the input vector. The magnitude of the change decreases with time and with the grid-distance from the BMU.


The update formula for a neuron v with weight vector W_v(s) is

W_v(s + 1) = W_v(s) + θ(u, v, s) · α(s) · (D(t) − W_v(s)),

where s is the step index, t is an index into the training sample, u is the index of the BMU for the input vector D(t), and α(s) is a monotonically decreasing learning coefficient. Depending on the implementation, t can scan the training data set systematically (t is 0, 1, 2, ..., T − 1, then repeats, T being the training sample's size), be randomly drawn from the data set (bootstrap sampling), or follow some other sampling method (such as jackknifing). The neighborhood function θ(u, v, s) depends on the grid distance between the BMU (neuron u) and neuron v. In the simplest form it is 1 for all neurons close enough to the BMU and 0 for others, but a Gaussian function is a common choice, too. Regardless of the functional form, the neighborhood function shrinks with time.
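One plausible reading of this update rule, sketched below (the exponential decay schedules and the Gaussian neighborhood are common choices assumed here, not prescribed by the text): every weight vector is pulled toward the current sample D(t), scaled by the learning rate α(s) and the neighborhood function θ(u, v, s) centered on the BMU u.

```python
import numpy as np

def som_update(weights, positions, x, bmu, step, n_steps,
               alpha0=0.5, sigma0=3.0):
    """One training step: pull all weight vectors toward sample x, weighted by
    a Gaussian neighborhood around the BMU. Both the learning rate and the
    neighborhood radius shrink with the step index (exponential decay is an
    assumed schedule; any monotonically decreasing one works)."""
    frac = step / n_steps
    alpha = alpha0 * np.exp(-frac)                  # learning rate alpha(s)
    sigma = sigma0 * np.exp(-frac)                  # neighborhood radius
    # Grid distance (in map space) from every node to the BMU node.
    grid_d2 = np.sum((positions - positions[bmu]) ** 2, axis=1)
    theta = np.exp(-grid_d2 / (2.0 * sigma ** 2))   # neighborhood theta(u, v, s)
    # W_v(s + 1) = W_v(s) + theta(u, v, s) * alpha(s) * (D(t) - W_v(s))
    weights += (alpha * theta)[:, None] * (x - weights)
    return weights
```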

When the neighborhood has shrunk to just a couple of neurons, the weights converge to local estimates. The network winds up associating output nodes with groups or patterns in the input data set. If these patterns can be named, the names can be attached to the associated nodes in the trained net. During mapping, there will be one single winning neuron: the neuron whose weight vector lies closest to the input vector. This can be determined simply by calculating the Euclidean distance between the input vector and each weight vector.
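A small sketch of this mapping-and-labeling step (the dictionary-of-names scheme is only an illustration of attaching names to nodes, not something the source specifies):

```python
import numpy as np

def label_nodes(weights, labeled_data):
    """Attach names to trained nodes: each (vector, name) pair is assigned
    to the node whose weight vector is closest to it."""
    node_labels = {}
    for x, name in labeled_data:
        bmu = int(np.argmin(np.sum((weights - x) ** 2, axis=1)))  # winning neuron
        node_labels.setdefault(bmu, set()).add(name)
    return node_labels

def classify(weights, node_labels, x):
    """Map a new vector to its single winning neuron and return any names
    previously attached to that node."""
    bmu = int(np.argmin(np.sum((weights - x) ** 2, axis=1)))
    return node_labels.get(bmu, set())
```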

While representing input data as vectors has been emphasized in this article, any kind of object which can be represented digitally, which has an appropriate distance measure associated with it, and in which the necessary operations for training are possible can be used to construct a self-organizing map. This includes matrices, continuous functions, or even other self-organizing maps. Selection of a good initial approximation is a well-known problem for all iterative methods of learning neural networks. Kohonen [12] used random initialization of SOM weights.


Recently, principal component initialization, in which initial map weights are chosen from the space of the first principal components, has become popular due to the exact reproducibility of the results. Careful comparison of the random initialization approach and principal component initialization for one-dimensional SOM models of principal curves demonstrated that the advantages of principal component initialization are not universal.

The best initialization method depends on the geometry of the specific dataset.

Principal component initialization is preferable (in dimension one) if the principal curve approximating the dataset can be univalently and linearly projected onto the first principal component (quasilinear sets). For nonlinear datasets, however, random initialization performs better.
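A hedged sketch of principal component initialization (the SVD route and the grid scaling are my assumptions; the point is only that the initial weights lie on a regular grid spanned by the two leading principal components, so the result is deterministic and reproducible):

```python
import numpy as np

def pca_init(data, n_rows, n_cols):
    """Initialize SOM weights on a regular grid spanned by the first two
    principal components of the data."""
    mean = data.mean(axis=0)
    # Principal directions via SVD of the centered data.
    _, s, vt = np.linalg.svd(data - mean, full_matrices=False)
    pc = vt[:2]                              # two leading principal directions
    scale = s[:2] / np.sqrt(len(data))       # their standard deviations
    # Grid coordinates in [-1, 1] along each principal direction.
    r = np.linspace(-1.0, 1.0, n_rows)
    c = np.linspace(-1.0, 1.0, n_cols)
    rr, cc = np.meshgrid(r, c, indexing="ij")
    return (mean
            + rr.reshape(-1, 1) * scale[0] * pc[0]
            + cc.reshape(-1, 1) * scale[1] * pc[1])
```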

Each weight vector is of the same dimension as the node's input vector. The weights may initially be set to random values. Now we need input to feed the map. In this example, colors can be represented by their red, green, and blue components, so each input is a three-dimensional vector. There are two ways to interpret a SOM. Because in the training phase the weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons, so the SOM forms a semantic map where similar samples are mapped close together and dissimilar ones further apart. The other way is to think of the neuronal weights as pointers into the input space, forming a discrete approximation of the distribution of the training samples.
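Putting the earlier sketches together on this color example (a toy run with assumed parameter values, reusing make_som, best_matching_unit, and som_update from above): train a small map on random RGB triples, then look up the winning node for pure red.

```python
import numpy as np

rng = np.random.default_rng(0)
colors = rng.random((500, 3))             # inputs: (red, green, blue) triples in [0, 1]

positions, weights = make_som(10, 10, input_dim=3, rng=rng)

n_steps = 2000
for step in range(n_steps):
    x = colors[rng.integers(len(colors))]  # bootstrap sampling from the data set
    bmu = best_matching_unit(weights, x)
    som_update(weights, positions, x, bmu, step, n_steps)

# After training, similar colors excite adjacent neurons on the 10x10 grid.
red_node = best_matching_unit(weights, np.array([1.0, 0.0, 0.0]))
print("grid position of the node that wins for pure red:", positions[red_node])
```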

