3.2 Mapping

3.2.3 Self-organizing maps

Self-organizing maps [Kohonen, 1995][Müller and Wyler, 1994][Blayo, 1995] are inspired by biology. They are designed to behave, for example, like the somatotopic map of the motor nerves and the tonotopic map of the auditory region. The self-organizing map algorithm, first introduced by Kohonen, is an unsupervised (self-organizing) neural network composed of an input layer and a competitive neural layer. For our purpose, the most interesting property of this network is that the feature map preserves the topology of the stimuli according to their similarity.

We can use self-organizing maps to lower the dimensionality of the data while preserving its topological features. To complete this task, we present to the input layer the distance vectors, i.e. the coordinates of each resource as a point in an $n$-dimensional space,

$x = [\xi_1, \xi_2, \ldots, \xi_n]^T \in \mathbb{R}^n$.

The neurons in the competitive layer are in fact the pixels, and a weight (reference) vector is associated with each of them:

$m_i = [\mu_{i1}, \mu_{i2}, \ldots, \mu_{in}]^T \in \mathbb{R}^n$

with $i = 1, 2, \ldots, k$, where $k$ is the number of pixels.
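To make the setting concrete, here is a minimal sketch in Python/NumPy of these two structures; the array names, the number of resources, the dimension $n$, and the $p \times q$ grid size are illustrative assumptions, not values from the original model:

    import numpy as np

    n = 16            # dimension of the distance vectors (assumed)
    num_resources = 1000
    p, q = 32, 32     # size of the competitive layer, i.e. the pixel grid
    k = p * q         # number of neurons, one per pixel

    # Input layer: one n-dimensional distance vector per resource.
    X = np.random.rand(num_resources, n)

    # Weight (reference) vectors m_i: a (k, n) array, one row per neuron,
    # initialized at random (step 1 of the algorithm below).
    M = np.random.rand(k, n)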

The self-organizing map algorithm in the learning process stage[7] can be summarized as follows:

  1. Initialization of the reference vectors;

  2. Presentation of a vector to the input layer;

  3. Detection of the neuron having the closest reference vector;

  4. Modification of the reference vectors of the neurons surrounding the winner;

  5. Repetition of steps 2-4 until the required number of iterations has been reached.

According to the Euclidean distance[8] between an input vector $x$ and the weight vector $m_i$ of the neuron $i$:

$\|x - m_i\| = \sqrt{\sum_{j=1}^{n} (\xi_j - \mu_{ij})^2}$

a winning neuron $c$ is defined as the one which has the closest weight vector:

$\|x - m_c\| = \min_i \|x - m_i\|$
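As a sketch, the winner search translates directly into the arrays assumed above (a hypothetical helper, not the author's implementation):

    import numpy as np

    def find_winner(x, M):
        # Euclidean distance from x to every reference vector, then argmin.
        return int(np.argmin(np.linalg.norm(M - x, axis=1)))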

During the learning period, a neighborhood $N_c(t)$ is defined to activate neurons that are topographically close to the winning neuron. Its radius is usually a monotonically decreasing function of time, for example

$N_c(t) = N_c(0)\left(1 - \frac{t}{T}\right)$

where $N_c(0)$ is the initial neighborhood (radius)[9] and $T$ the total number of training operations.
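A sketch of such a shrinking neighborhood on the grid assumed above; delimiting the neighborhood with the city-block distance between pixels is our own assumption:

    import numpy as np

    def neighborhood(c, t, T, r0, p, q):
        # Linearly shrinking radius N_c(t) = r0 * (1 - t/T).
        radius = r0 * (1.0 - t / T)
        # Grid positions of all neurons on the p x q map, and their
        # city-block distance to the winner c.
        rows, cols = np.divmod(np.arange(p * q), q)
        dist = np.abs(rows - rows[c]) + np.abs(cols - cols[c])
        return np.nonzero(dist <= radius)[0]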

The reference vectors of the neurons surrounding the winner are modified as follows:

$m_i(t+1) = m_i(t) + \alpha(t)\,[x(t) - m_i(t)]$ for $i \in N_c(t)$, and $m_i(t+1) = m_i(t)$ otherwise,

with $\alpha(t)$ a monotonically decreasing function, for example:

$\alpha(t) = \alpha(0)\left(1 - \frac{t}{T}\right)$

$\alpha(0)$ being the initial learning rate[10] and $T$ the number of training operations.[11]
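For illustration, steps 1-5 can be combined into a single training function. This is a minimal sketch under the assumptions above (one pass with linear schedules); an actual run would chain the coarse- and fine-adjustment passes of footnote 7 with the parameter values of footnotes 9-11:

    import numpy as np

    def train_som(X, p, q, T, r0, alpha0, seed=0):
        rng = np.random.default_rng(seed)
        k, n = p * q, X.shape[1]
        M = rng.random((k, n))                            # step 1: initialization
        rows, cols = np.divmod(np.arange(k), q)
        for t in range(T):
            x = X[rng.integers(len(X))]                   # step 2: present a vector
            c = np.argmin(np.linalg.norm(M - x, axis=1))  # step 3: winner detection
            radius = r0 * (1.0 - t / T)                   # shrinking neighborhood
            alpha = alpha0 * (1.0 - t / T)                # decreasing learning rate
            hood = np.abs(rows - rows[c]) + np.abs(cols - cols[c]) <= radius
            M[hood] += alpha * (x - M[hood])              # step 4: update neighbors
        return M                                          # step 5: T iterations done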

Note that self-organizing maps operate in a way similar to the k-means algorithm [Belaïd and Belaïd, 1992] used in statistics. However, the latter has been shown to perform differently and less satisfactorily than the former [Ultsch, 1995].

It must be noted that no proof of convergence of the self-organizing map algorithm, except for the one-dimensional case, has yet been presented [Kohonen, 1995]. Nevertheless, it is important to evaluate the complexity of the algorithm. Since convergence has not been formally proved, we must rely on empirical experiments to determine the number of training operations $T$. Each training step compares the input vector with all $k$ reference vectors, so the complexity of the algorithm is of the order $O(T \cdot k \cdot n)$ [Wyler, 1994].

Having completed the learning process, the mapping can be produced by calculating for each input distance vector the winning neuron. We then have a mapping function which for each resource returns a pixel:

$x \mapsto c = \arg\min_i \|x - m_i\|$.
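A corresponding sketch of this mapping step, again with the assumed grid layout: given the trained reference vectors, each resource is assigned the (row, column) pixel of its winning neuron.

    import numpy as np

    def map_resources(X, M, q):
        # Winning neuron for every distance vector at once.
        dists = np.linalg.norm(X[:, None, :] - M[None, :, :], axis=2)
        winners = np.argmin(dists, axis=1)
        # Convert the flat neuron index into (row, column) pixel coordinates.
        return np.divmod(winners, q)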


[7] The learning process usually consists of two stages: a coarse-adjustment pass and a fine-adjustment pass.
[8] Note that to respect our model, the so-called city-block (Manhattan) distance should be used. However, the Euclidean distance gives better visual results in a two-dimensional space. Therefore, to fully respect our model, the distance should be defined by
$\|x - m_i\| = \sum_{j=1}^{n} |\xi_j - \mu_{ij}|$.
[9] This is usually about half the diameter of the network during the coarse-adjustment pass and substantially smaller during the fine-adjustment pass.
[10] Typically 0.5 during the coarse-adjustment pass and 0.02 during the fine-adjustment pass.
[11] For statistical accuracy, $T$ should be at least 500 times the number of neurons during the final convergence phase.