WO2017048730 - SYSTEMS AND METHODS FOR IDENTIFYING HUMAN EMOTIONS AND/OR MENTAL HEALTH STATES BASED ON ANALYSES OF AUDIO INPUTS AND/OR BEHAVIORAL DATA COLLECTED FROM COMPUTING DEVICES

Publication Number WO/2017/048730
Publication Date 23.03.2017
International Application No. PCT/US2016/051549
International Filing Date 13.09.2016
IPC
G10L 25/63 (2013.01)
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  25 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
  48 - specially adapted for particular use
  51 - for comparison or discrimination
  63 - for estimating an emotional state
G06F 1/16 (2006.01)
  G - PHYSICS
  06 - COMPUTING; CALCULATING OR COUNTING
  F - ELECTRIC DIGITAL DATA PROCESSING
  1 - Details not covered by groups G06F3/00-G06F13/00
  16 - Constructional details or arrangements
G10L 19/04 (2013.01)
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  19 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  04 - using predictive techniques
G10L 21/003 (2013.01)
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  21 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
  003 - Changing voice quality, e.g. pitch or formants
G10L 25/66 (2013.01)
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  25 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
  48 - specially adapted for particular use
  51 - for comparison or discrimination
  66 - for extracting parameters related to health condition
H04M 1/725 (2006.01)
  H - ELECTRICITY
  04 - ELECTRIC COMMUNICATION TECHNIQUE
  M - TELEPHONIC COMMUNICATION
  1 - Substation equipment, e.g. for use by subscribers
  72 - Substation extension arrangements; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selecting
  725 - Cordless telephones
CPC
G06F 17/00
  G - PHYSICS
  06 - COMPUTING; CALCULATING; COUNTING
  F - ELECTRIC DIGITAL DATA PROCESSING
  17 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G06F 40/20
  G - PHYSICS
  06 - COMPUTING; CALCULATING; COUNTING
  F - ELECTRIC DIGITAL DATA PROCESSING
  40 - Handling natural language data
  20 - Natural language analysis
G06Q 30/0269
  G - PHYSICS
  06 - COMPUTING; CALCULATING; COUNTING
  Q - DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
  30 - Commerce, e.g. shopping or e-commerce
  02 - Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
  0241 - Advertisement
  0251 - Targeted advertisement
  0269 - based on user profile or attribute
G10L 15/02
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  15 - Speech recognition
  02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 15/187
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  15 - Speech recognition
  08 - Speech classification or search
  18 - using natural language modelling
  183 - using context dependencies, e.g. language models
  187 - Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
G10L 15/28
  G - PHYSICS
  10 - MUSICAL INSTRUMENTS; ACOUSTICS
  L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  15 - Speech recognition
  28 - Constructional details of speech recognition systems
Applicants
  • COGITO CORPORATION [US]/[US]
Inventors
  • FEAST, Joshua
  • AZARBAYEJANI, Ali
  • PLACE, Skyler
Agents
  • AUGST, Alexander D.
  • HAULBROOK, William R.
  • MONROE, Margo R.
  • PELLIGRINO, Jeffrey S.
  • SUH, Su Kyung
  • BUTEAU, Kristen C.
  • CAHILL, John J.
  • JARRELL, Brenda H.
  • LI, Xiaodong
  • LYON, Charles E.
  • MEDINA, Rolando
  • NGUYEN, Suzanne P.
  • NIHAN, Danielle M.
  • PACE, Nicholas J.
  • PYSHER, Paul A.
  • REARICK, John P.
  • REESE, Brian E.
  • ROHLFS, Elizabeth M.
  • SAHR, Robert N.
  • SCHONEWALD, Stephanie L.
  • SHAIKH, Nishat A.
  • SHINALL, Michael A.
  • VETTER, Michael L.
  • VRABLIK, Tracy L.
  • WANG, Gang
Priority Data
  • 62/218,490  14.09.2015  US
  • 62/218,494  14.09.2015  US
Publication Language English (en)
Filing Language English (en)
Designated States
Title
(EN) SYSTEMS AND METHODS FOR IDENTIFYING HUMAN EMOTIONS AND/OR MENTAL HEALTH STATES BASED ON ANALYSES OF AUDIO INPUTS AND/OR BEHAVIORAL DATA COLLECTED FROM COMPUTING DEVICES
Abstract
(EN) Systems and methods are provided for analyzing voice-based audio inputs. A voice-based audio input associated with a user (e.g., wherein the voice-based audio input is a prompt or a command) is received and measures of one or more features are extracted. One or more parameters are calculated based on the measures of the one or more features. The occurrence of one or more mistriggers is identified by inputting the one or more parameters into a predictive model. Further, systems and methods are provided for identifying human mental health states using mobile device data. Mobile device data (including sensor data) associated with a mobile device corresponding to a user is received. Measurements are derived from the mobile device data and input into a predictive model. The predictive model is executed and outputs probability values of one or more symptoms associated with the user.
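The abstract describes a generic flow: receive a voice-based input, extract feature measures, compute parameters from them, and feed the parameters to a predictive model that outputs a probability (of a mistrigger, or of a symptom). As an illustration only, not the patented implementation, the sketch below instantiates that flow with made-up features (mean absolute amplitude and zero-crossing rate) and an arbitrary logistic model; all function names and weights are hypothetical:

```python
import math

def extract_features(samples):
    """Compute toy feature measures from a sequence of audio samples:
    mean absolute amplitude (a crude energy proxy) and zero-crossing rate."""
    n = len(samples)
    energy = sum(abs(s) for s in samples) / n
    # Fraction of adjacent sample pairs whose signs differ.
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return [energy, zcr]

def predict_mistrigger(features, weights, bias):
    """Logistic model: map feature parameters to a probability in (0, 1)
    that the input was a mistrigger (an unintended activation)."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

A usage example under these assumptions: `predict_mistrigger(extract_features([0.0, 0.5, -0.5, 0.2]), weights=[1.2, -0.8], bias=-0.1)` returns a probability that could be thresholded to suppress a false activation. A real system would use many more features and trained weights.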
Related patent documents
Latest bibliographic data on file with the International Bureau