(WO2017093146) METHOD AND APPARATUS FOR AUDIO OBJECT CODING BASED ON INFORMED SOURCE SEPARATION
Latest bibliographic data on file with the International Bureau

Pub. No.: WO/2017/093146
International Application No.: PCT/EP2016/078886
Publication Date: 08.06.2017
International Filing Date: 25.11.2016
IPC:
G10L 19/26 (2013.01), G10L 19/008 (2013.01)
G: PHYSICS
G10: MUSICAL INSTRUMENTS; ACOUSTICS
G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
G10L 19: Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
G10L 19/04: using predictive techniques
G10L 19/26: Pre-filtering or post-filtering
G10L 19/08: Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
Applicants:
THOMSON LICENSING [FR/FR]; 1-5, rue Jeanne d'Arc 92130 Issy-les-Moulineaux, FR
Inventors:
DUONG, Quang Khanh Ngoc; FR
OZEROV, Alexey; FR
Agent:
TARQUIS-GUILLOU, Anne; FR
ROLLAND, Sophie; FR
MORAIN, David; FR
AUMONIER, Sébastien; FR
LABELLE, Lilian; FR
MERLET, Hugues; FR
LORETTE, Anne; FR
HUCHET, Anne; FR
PERROT, Sébastien; FR
LE DANTEC, Claude; FR
Priority Data:
15306899.4    01.12.2015    EP
Title (EN) METHOD AND APPARATUS FOR AUDIO OBJECT CODING BASED ON INFORMED SOURCE SEPARATION
(FR) PROCÉDÉ ET APPAREIL POUR CODAGE D'OBJET AUDIO EN FONCTION DE SÉPARATION DE SOURCE INFORMÉE
Abstract:
(EN) To represent and recover the constituent sources present in an audio mixture, informed source separation techniques are used. In particular, a universal spectral model (USM) is used to obtain a sparse time activation matrix for an individual audio source in the audio mixture. The indices of non-zero groups in the time activation matrix are encoded as the side information into a bitstream. The non-zero coefficients of the time activation matrix may also be encoded into the bitstream. At the decoder side, when the coefficients of the time activation matrix are included in the bitstream, the matrix can be decoded from the bitstream. Otherwise, the time activation matrix can be estimated from the audio mixture, the non-zero indices included in the bitstream, and the USM model. Given the time activation matrix, the constituent audio sources can be recovered based on the audio mixture and the USM model.
(FR) L'invention concerne, pour représenter et récupérer les sources constituantes présentes dans un mixage audio, des techniques de séparation de source informée. En particulier, un modèle spectral universel (USM) est utilisé pour obtenir une matrice éparse d'activation temporelle pour une source audio individuelle dans le mixage audio. Les indices de groupes non-nuls dans la matrice d'activation temporelle sont codés sous forme d'informations annexes dans un train de bits. Les coefficients non-nuls de la matrice d'activation temporelle peuvent également être codés dans le train de bits. Du côté décodeur, lorsque les coefficients de la matrice d'activation temporelle sont inclus dans le train de bits, la matrice peut être décodée à partir du train de bits. Sinon, la matrice d'activation temporelle peut être estimée à partir du mixage audio, des indices non-nuls inclus dans le train de bits et du modèle USM. Compte tenu de la matrice d'activation temporelle, les sources audio constituantes peuvent être récupérées sur la base du mixage audio et du modèle USM.
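The processing chain summarized in the abstract (group-sparse time activations over a universal spectral model, side information carrying the indices of the non-zero groups, and decoder-side recovery of each constituent source from the mixture) can be illustrated with a short sketch. The snippet below is a minimal, NMF-style illustration in Python that assumes a pre-trained USM dictionary and Wiener-filter reconstruction; the function names, group layout, update rule, and all parameters are illustrative assumptions, not details taken from the patent description.

```python
# Sketch of decoder-side recovery with a universal spectral model (USM).
# All names and parameters are illustrative assumptions, not the patented method.
import numpy as np

def estimate_activations(V, W, active_groups, group_size, n_iter=100, eps=1e-12):
    """Estimate the time activation matrix H for the mixture magnitude
    spectrogram V, restricted to the USM groups whose indices were received
    as side information (the sparsity pattern stays fixed)."""
    K, N = W.shape[1], V.shape[1]
    H = np.random.rand(K, N) + eps
    mask = np.zeros(K, dtype=bool)
    for g in active_groups:
        mask[g * group_size:(g + 1) * group_size] = True
    H[~mask, :] = 0.0
    for _ in range(n_iter):
        WH = W @ H + eps
        # Standard multiplicative update for the Kullback-Leibler NMF cost.
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
        H[~mask, :] = 0.0
    return H

def recover_source(V_complex, W, H, source_groups, group_size, eps=1e-12):
    """Wiener-filter the complex mixture spectrogram to extract the source
    modelled by the selected USM groups."""
    K = W.shape[1]
    sel = np.zeros(K, dtype=bool)
    for g in source_groups:
        sel[g * group_size:(g + 1) * group_size] = True
    full_model = W @ H + eps
    source_model = W[:, sel] @ H[sel, :]
    return (source_model / full_model) * V_complex

# Toy usage: a random USM with 4 groups of 8 spectral bases each.
rng = np.random.default_rng(0)
W = np.abs(rng.standard_normal((257, 32)))
V_complex = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
H = estimate_activations(np.abs(V_complex), W, active_groups=[0, 2], group_size=8)
source_hat = recover_source(V_complex, W, H, source_groups=[0], group_size=8)
```

In this sketch the list of active groups stands in for the transmitted side information; the actual encoding of the group indices (and, optionally, of the activation coefficients themselves) into the bitstream is not shown.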
Designated States: AE, AG, AL, AM, AO, AT, AU, AZ, BA, BB, BG, BH, BN, BR, BW, BY, BZ, CA, CH, CL, CN, CO, CR, CU, CZ, DE, DJ, DK, DM, DO, DZ, EC, EE, EG, ES, FI, GB, GD, GE, GH, GM, GT, HN, HR, HU, ID, IL, IN, IR, IS, JP, KE, KG, KN, KP, KR, KW, KZ, LA, LC, LK, LR, LS, LU, LY, MA, MD, ME, MG, MK, MN, MW, MX, MY, MZ, NA, NG, NI, NO, NZ, OM, PA, PE, PG, PH, PL, PT, QA, RO, RS, RU, RW, SA, SC, SD, SE, SG, SK, SL, SM, ST, SV, SY, TH, TJ, TM, TN, TR, TT, TZ, UA, UG, US, UZ, VC, VN, ZA, ZM, ZW
African Regional Intellectual Property Organization (ARIPO) (BW, GH, GM, KE, LR, LS, MW, MZ, NA, RW, SD, SL, ST, SZ, TZ, UG, ZM, ZW)
Eurasian Patent Organization (AM, AZ, BY, KG, KZ, RU, TJ, TM)
European Patent Office (AL, AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV, MC, MK, MT, NL, NO, PL, PT, RO, RS, SE, SI, SK, SM, TR)
African Intellectual Property Organization (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW, KM, ML, MR, NE, SN, TD, TG)
Publication Language: English (EN)
Filing Language: English (EN)
Also published as:
CN108431891, EP3384492, US20180358025, BR112018011005