WO2019193149 - SUPPORT FOR GENERATION OF COMFORT NOISE, AND GENERATION OF COMFORT NOISE

Publication Number WO/2019/193149
Publication Date 10.10.2019
International Application No. PCT/EP2019/058629
International Filing Date 05.04.2019
IPC
G10L 19/012 (2013.01) - Comfort noise or silence coding
G10L 19/008 (2013.01) - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
(Both fall under G10L 19: speech or audio signal analysis-synthesis techniques for redundancy reduction; coding or decoding of speech or audio signals using source filter models or psychoacoustic analysis.)
CPC
G10L 19/0017 - Lossless audio signal coding; Perfect reconstruction of coded audio signal by transmission of coding error
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/012 - Comfort noise or silence coding
G10L 19/03 - Spectral prediction for preventing pre-echo; Temporary noise shaping [TNS], e.g. in MPEG2 or MPEG4
G10L 19/04 - using predictive techniques
G10L 19/24 - Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
Applicants
  • TELEFONAKTIEBOLAGET LM ERICSSON (PUBL) [SE]/[SE]
Inventors
  • NORVELL, Erik
  • JANSSON, Fredrik
Agents
  • ERICSSON
Priority Data
62/652,941  05.04.2018  US
62/652,949  05.04.2018  US
62/653,078  05.04.2018  US
Publication Language English (EN)
Filing Language English (EN)
Designated States
Title
(EN) SUPPORT FOR GENERATION OF COMFORT NOISE, AND GENERATION OF COMFORT NOISE
Abstract
(EN)
A method for generation of comfort noise for at least two audio channels. The method comprises determining a spatial coherence between audio signals on the respective audio channels, wherein at least one spatial coherence value per frame and frequency band is determined to form a vector of spatial coherence values. A vector of predicted spatial coherence values is formed by a weighted combination of a first coherence prediction and a second coherence prediction that are combined using a weight factor a. The method comprises signaling information about the weight factor a to the receiving node, for enabling the generation of the comfort noise for the at least two audio channels at the receiving node.
Also published as
ZA2020/05926
TH2001005637
EP2019716874
Latest bibliographic data on file with the International Bureau