Neural computing

Neural computing originated in the 1940s with the neurophysiologist Warren McCulloch and the mathematician Walter Pitts, of the University of Illinois, who, in the spirit of cybernetics, drew an analogy between living nerve cells and electronic processes in a paper on "formal neurons." Their work described a model of variable resistors and amplifiers representing the connections of a biological neuron. Since then, and more emphatically from the 1980s onward, several neural network models have appeared with the purpose of improving and applying this technology. Some of these proposals aim to improve the internal mechanisms of neural networks for application in industry and business; others try to bring them even closer to the original biological models.
The artificial neuron is a logical-mathematical structure that tries to simulate the form, the behavior, and the functions of a biological neuron. The dendrites are replaced by inputs, whose connections to the artificial cell body are made through elements called weights (simulating the synapses). The stimuli captured by the inputs are processed by the summation function, and the firing threshold of the biological neuron is replaced by the transfer function.
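The summation and transfer steps described above can be sketched in a few lines of Python. This is a minimal illustration, not the article's own implementation; the function name, weight values, and step threshold are invented for the example:

```python
def artificial_neuron(inputs, weights, threshold):
    """Weighted sum of the inputs (summation function) followed by a
    step transfer function standing in for the firing threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two inputs playing the role of dendrites; the weights play the synapses.
print(artificial_neuron([1, 0], [0.6, 0.4], threshold=0.5))  # fires: 1
```

A smooth transfer function (e.g. a sigmoid) could replace the step without changing the overall structure.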
By combining several artificial neurons we form what is called an artificial neural network. The inputs, simulating an area of stimulus reception, can be connected to many neurons, resulting in a series of outputs, where each neuron represents one output. These connections, by analogy with the biological system, represent the contact of the dendrites with other neurons, forming the synapses. The role of a connection is to turn the output signal of one neuron into the input signal of another, or even to carry the output signal to the external (real) world. The different possible connections among the layers of neurons can generate any number of different structures.
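A layer of such neurons can be sketched as a set of thresholded weighted sums, here with 4 inputs fully connected to 2 output neurons as in the article's example network. The weight values are invented for illustration:

```python
def layer_output(inputs, weight_matrix, threshold=0.5):
    """One layer: each output neuron computes a thresholded weighted sum
    of all the inputs (full connectivity)."""
    return [1 if sum(x * w for x, w in zip(inputs, row)) >= threshold else 0
            for row in weight_matrix]

# 4 inputs, 2 output neurons (one weight row per output neuron).
weights = [[0.5, 0.5, 0.0, 0.0],   # neuron 0 responds to the first two inputs
           [0.0, 0.0, 0.5, 0.5]]   # neuron 1 responds to the last two inputs
print(layer_output([1, 1, 0, 0], weights))  # [1, 0]
```

Feeding one layer's output list into another call of the same function would give the two-layer structure of the figure.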
Illustration 007 - Example of an artificial neural network of 2 layers with 4 inputs and 2 outputs.

The variants of a neural network are many, and by combining them we can change the architecture according to the needs of the application, or even according to the designer's taste. Basically, the items that compose a neural network, and are therefore subject to modification, are the following:

- connections among layers
- intermediate (hidden) layers
- number of neurons
- transfer function
- learning algorithm
The process of cortical plasticity was implemented in a Kohonen-type neural network, chosen because it has certain functional similarities with biological neural networks, such as self-organization, a fundamental process in living organic systems. The basic Kohonen model consists of an unsupervised-training network with just two layers. This type of network is said to follow a topological paradigm, since its output layer can take any two-dimensional geometric shape, such as hexagonal, rectangular, or triangular.

Illustration 008 - All inputs X0..XN are fully connected to all output neurons in the two-dimensional map.

After the network is chosen and its architecture defined, there follows a phase called training, whose task is to "exercise" the network with a collection of stimuli (complex signals, voice, images, etc.) that we want it to recognize when in operation. In the training phase, the neurons of the output layer compete to be the winner at each new iteration over the training set: whenever any input is presented to the network, there is a competition among the output-layer neurons to represent the input presented at that moment. This learning is nothing more than successive modifications of the neurons' weights so that they classify the presented inputs. We say that the network has "learned" when it starts to recognize all the inputs presented during the training phase.

This is how the learning of the network is expressed: since there is at least one neuron that represents a given piece of information (a stimulus presented at the input), whenever this stimulus is presented to the network, the neuron trained to represent it will automatically fire, indicating which stimulus was presented.
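The competitive training described above can be sketched as follows. This is a simplified illustration assuming Euclidean distance as the similarity measure; it keeps only the winner-take-all update and omits the neighborhood function of the full Kohonen algorithm:

```python
import math

def train_kohonen(samples, num_neurons, epochs=50, lr=0.5):
    """Winner-take-all training: at each presentation the closest output
    neuron wins, and only its weights move toward the input."""
    # Initialize each output neuron's weight vector from the first samples.
    weights = [list(s) for s in samples[:num_neurons]]
    for _ in range(epochs):
        for x in samples:
            # Competition: the neuron whose weights are closest wins.
            winner = min(range(num_neurons),
                         key=lambda i: math.dist(weights[i], x))
            # Learning: successive modification of the winner's weights.
            weights[winner] = [w + lr * (xi - w)
                               for w, xi in zip(weights[winner], x)]
    return weights

def recognize(x, weights):
    """In operation, the neuron that 'fires' identifies the stimulus."""
    return min(range(len(weights)), key=lambda i: math.dist(weights[i], x))
```

In the full model, neurons near the winner on the two-dimensional map are also updated, which is what produces the topological ordering of the output layer.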
We also note that a strong characteristic of neural networks is the capacity to recognize variations of the trained stimuli. This means, for example, that when presenting some stimulus X similar to a stimulus Y that was part of the training set, there is a high probability that stimulus X will be recognized as the trained stimulus Y, thus revealing the network's capacity for generalization.
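This generalization can be illustrated with a toy nearest-prototype example, where the prototypes stand in for the weight vectors a trained network would hold (the vectors and labels are invented for the example):

```python
import math

# Stand-ins for trained weight vectors, one per recognized stimulus.
prototypes = {"Y": [1.0, 1.0, 0.0], "Z": [0.0, 0.0, 1.0]}

def classify(stimulus):
    """A distorted stimulus is recognized as the nearest trained one."""
    return min(prototypes, key=lambda k: math.dist(prototypes[k], stimulus))

# X is a noisy variation of the trained stimulus Y,
# yet it is still recognized as Y.
print(classify([0.9, 1.1, 0.1]))  # Y
```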
From: Redes Neurais Artificiais: Aprendizado e Plasticidade
By: Malcon Anderson Tafner
In: Revista "Cérebro & Mente" 2(5), March/May 1998.
Copyright 1998 Universidade Estadual de Campinas
Produced by: Núcleo de Informática Biomédica