WO2020108472 - CODING METHOD AND APPARATUS, DECODING METHOD AND APPARATUS

Publication Number WO/2020/108472
Publication Date 04.06.2020
International Application No. PCT/CN2019/120898
International Filing Date 26.11.2019
IPC
H03M 13/13 (2006.01)
  H    ELECTRICITY
  03   BASIC ELECTRONIC CIRCUITRY
  M    CODING, DECODING OR CODE CONVERSION, IN GENERAL
  13   Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
  03   Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
  05   using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
  13   Linear codes
H04L 25/02 (2006.01)
  H    ELECTRICITY
  04   ELECTRIC COMMUNICATION TECHNIQUE
  L    TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
  25   Baseband systems
  02   Details
G06N 3/04 (2006.01)
  G    PHYSICS
  06   COMPUTING; CALCULATING OR COUNTING
  N    COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
  3    Computer systems based on biological models
  02   using neural network models
  04   Architecture, e.g. interconnection topology
G06N 3/08 (2006.01)
  G    PHYSICS
  06   COMPUTING; CALCULATING OR COUNTING
  N    COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
  3    Computer systems based on biological models
  02   using neural network models
  08   Learning methods
CPC
G06N 3/04
  G    PHYSICS
  06   COMPUTING; CALCULATING; COUNTING
  N    COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
  3    Computer systems based on biological models
  02   using neural network models
  04   Architectures, e.g. interconnection topology
G06N 3/08
  G    PHYSICS
  06   COMPUTING; CALCULATING; COUNTING
  N    COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
  3    Computer systems based on biological models
  02   using neural network models
  08   Learning methods
H03M 13/13
  H    ELECTRICITY
  03   BASIC ELECTRONIC CIRCUITRY
  M    CODING; DECODING; CODE CONVERSION IN GENERAL
  13   Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
  03   Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
  05   using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
  13   Linear codes
H04L 25/02
  H    ELECTRICITY
  04   ELECTRIC COMMUNICATION TECHNIQUE
  L    TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
  25   Baseband systems
  02   Details
Applicants
  • 华为技术有限公司 HUAWEI TECHNOLOGIES CO., LTD. [CN]/[CN]
Inventors
  • 徐晨 XU, Chen
  • 李榕 LI, Rong
  • 于天航 YU, Tianhang
  • 乔云飞 QIAO, Yunfei
  • 杜颖钢 DU, Yinggang
  • 黄凌晨 HUANG, Lingchen
  • 王俊 WANG, Jun
Priority Data
201811428115.5    27.11.2018    CN
Publication Language Chinese (ZH)
Filing Language Chinese (ZH)
Designated States
Title
(EN) CODING METHOD AND APPARATUS, DECODING METHOD AND APPARATUS
(FR) PROCÉDÉ ET APPAREIL DE CODAGE, PROCÉDÉ ET APPAREIL DE DÉCODAGE
(ZH) 编码方法、译码方法及装置
Abstract
(EN)
The embodiments of the present application relate to the field of communications and provide a coding method and apparatus, and a decoding method and apparatus. In these methods, neural network units corresponding to a kernel matrix are generated and then connected to form a coding neural network or a decoding neural network, so that the full coding or decoding network is obtained by composing small neural network units. As a result, during training of the coder/decoder, the network can generalize to the entire codeword space from a small set of training samples, and the effect of long codewords on the complexity and training difficulty of the neural network is reduced.
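
The following is a minimal sketch of the idea described in the abstract, not the patent's actual implementation: it assumes the kernel matrix is the 2x2 polar (Arikan) kernel F = [[1, 0], [1, 1]], and the names KernelUnit, train_unit and encode_with_units are hypothetical illustration choices. A small neural network unit is trained only on the four possible input pairs of the kernel, and copies of that unit are then connected into an encoding network for a longer block.

    # Sketch only: 2x2 kernel assumed; all class/function names are hypothetical.
    import torch
    import torch.nn as nn

    class KernelUnit(nn.Module):
        """Small neural network unit learning the kernel map (u1, u2) -> (u1 XOR u2, u2)."""
        def __init__(self, hidden=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, 2), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    def train_unit(unit, epochs=3000, lr=0.05):
        """Fit the unit on the four possible kernel input pairs (the 'small learning sample')."""
        u = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        target = torch.stack([(u[:, 0] + u[:, 1]) % 2, u[:, 1]], dim=1)  # (u1 XOR u2, u2)
        opt = torch.optim.Adam(unit.parameters(), lr=lr)
        loss_fn = nn.BCELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(unit(u), target).backward()
            opt.step()
        return unit

    def encode_with_units(unit, u):
        """Connect copies of the shared unit into a coding network for block length N = 2^n:
        each stage pairs bit i with bit i + N/2, mirroring the Kronecker structure of F."""
        n = u.shape[-1]
        if n == 1:
            return u
        half = n // 2
        pairs = torch.stack([u[..., :half], u[..., half:]], dim=-1)  # (..., half, 2)
        out = unit(pairs)                        # one unit per pair, weights shared
        left = encode_with_units(unit, out[..., 0])   # (u1 XOR u2) branch
        right = encode_with_units(unit, out[..., 1])  # u2 branch
        return torch.cat([left, right], dim=-1)

    if __name__ == "__main__":
        unit = train_unit(KernelUnit())
        u = torch.tensor([[1., 0., 1., 1.]])     # length-4 block, never seen during training
        x = encode_with_units(unit, u)
        print(x.round())                         # should round to the polar encoding of u: [1, 1, 0, 1]

Because the same trained unit is reused at every position of every stage, training only ever sees length-2 input pairs, yet the composed network encodes longer blocks; this is one way to read the abstract's claim that small training samples can generalize to the whole codeword space while keeping network complexity low for long codewords.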