
1. AU2001295591 - A method for supervised teaching of a recurrent artificial neural network

Office Australia
Application Number 2001295591
Filing Date 05.10.2001
Publication Number 2001295591
Publication Date 20.12.2001
Publication Kind A
IPC
G06N 3/08
G PHYSICS
06 COMPUTING; COUNTING
N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
3 Computer systems based on biological models
02 using neural network models
08 Learning methods
G06N 3/04
G PHYSICS
06 COMPUTING; COUNTING
N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
3 Computer systems based on biological models
02 using neural network models
04 Architecture, e.g. interconnection topology
Applicants Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V.
Agents Davies Collison Cave Pty Ltd
Priority Data 00122415 13.10.2000 EP
Title
(EN) A method for supervised teaching of a recurrent artificial neural network
Abstract
(EN)
A method for the supervised teaching of a recurrent neural network (RNN) is disclosed. A typical embodiment of the method utilizes a large (50 units or more), randomly initialized RNN with a globally stable dynamics. During the training period, the output units of this RNN are teacher-forced to follow the desired output signal. During this period, activations from all hidden units are recorded. At the end of the teaching period, these recorded data are used as input for a method which computes new weights of those connections that feed into the output units. The method is distinguished from existing training methods for RNNs through the following characteristics: (1) Only the weights of connections to output units are changed by learning - existing methods for teaching recurrent networks adjust all network weights. (2) The internal dynamics of large networks are used as a 'reservoir' of dynamical components which are not changed, but only newly combined by the learning procedure - existing methods use small networks, whose internal dynamics are themselves completely re-shaped through learning.
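The training scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the reservoir size, the spectral-radius scaling used as a stability heuristic, the sine-wave target signal, and the washout length are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (the abstract calls for 50 or more reservoir units)
n_reservoir, n_steps, washout = 100, 500, 100

# Random, fixed internal weights, scaled so the dynamics stay stable
# (spectral radius below 1 is a common heuristic, assumed here)
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Feedback weights from the (teacher-forced) output back into the reservoir
W_fb = rng.uniform(-1.0, 1.0, size=n_reservoir)

# Desired output signal: a simple sine wave, purely for illustration
d = np.sin(np.arange(n_steps) * 0.2)

# Run the network with the output unit teacher-forced to follow d,
# recording the activations of all hidden units at every step
x = np.zeros(n_reservoir)
states = np.zeros((n_steps, n_reservoir))
for t in range(n_steps):
    x = np.tanh(W @ x + W_fb * (d[t - 1] if t > 0 else 0.0))
    states[t] = x

# Only the connections feeding into the output unit are learned:
# a linear least-squares fit of the recorded states to the desired
# output (an initial "washout" of transient states is discarded)
W_out, *_ = np.linalg.lstsq(states[washout:], d[washout:], rcond=None)

# Training error of the fitted readout
err = np.sqrt(np.mean((states[washout:] @ W_out - d[washout:]) ** 2))
print(f"training RMSE: {err:.4f}")
```

The internal weights `W` and feedback weights `W_fb` are never modified; learning only recombines the reservoir's recorded dynamics through `W_out`, which is the distinguishing feature the abstract claims over earlier RNN training methods.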

Also published as
IN529/CHENP/2003