
Search settings: Offices: all | Language: en | Stemming: true | Single Family Member: false | Include NPL: false
Full Query

AI application field > Personal Devices, Computing and HCI > Affective Computing



1. 20120290521 | Discovering and classifying situations that influence affective response
US, 15.11.2012 | Int. Class G06F 17/00 (G Physics > 06 Computing; Calculating or Counting > F Electric Digital Data Processing > 17 Digital computing or data processing equipment or methods, specially adapted for specific functions)
Appl. No. 13168973 | Applicant: Frank Ari M. | Inventor: Frank Ari M.

Described herein are systems for identifying situations. The system receives samples, each comprising a temporal window of token instances to which a user was exposed and an affective response annotation. One embodiment uses a clustering algorithm to cluster the samples into a plurality of clusters utilizing a distance function that computes a distance between a pair comprising first and second samples. Another embodiment utilizes an Expectation-Maximization approach to assign situation identifiers. A third embodiment involves training, utilizing the samples, a machine learning-based classifier to assign situation identifiers.
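The clustering embodiment can be sketched in a few lines. The particulars below are illustrative assumptions, not the patent's formulation: the distance function blends Jaccard distance over the token window with the gap between affective annotations (assumed normalized to [0, 1]), and the grouping is a greedy single-link pass.

```python
def sample_distance(a, b, w=0.5):
    """Distance between two samples, each a (token_set, affect) pair.
    Token dissimilarity is Jaccard distance on the token windows;
    affect dissimilarity is the absolute annotation difference.
    Both choices are illustrative, not taken from the patent."""
    tokens_a, affect_a = a
    tokens_b, affect_b = b
    union = tokens_a | tokens_b
    jaccard = 1.0 - len(tokens_a & tokens_b) / len(union) if union else 0.0
    return w * jaccard + (1 - w) * abs(affect_a - affect_b)

def cluster_situations(samples, threshold=0.5):
    """Greedy single-link clustering: a sample joins the first cluster
    containing a member within `threshold`, else starts a new cluster.
    Returns one situation identifier (cluster index) per sample."""
    clusters = []  # list of lists of sample indices
    labels = []
    for i, s in enumerate(samples):
        for cid, members in enumerate(clusters):
            if any(sample_distance(s, samples[j]) <= threshold for j in members):
                members.append(i)
                labels.append(cid)
                break
        else:
            clusters.append([i])
            labels.append(len(clusters) - 1)
    return labels
```

For example, two rainy-commute windows with similar annotations land in one situation while a high-arousal party window starts another.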

2. 20120290516 | Methods for discovering and classifying situations that influence affective response
US, 15.11.2012 | Int. Class G06N 99/00 (G Physics > 06 Computing; Calculating or Counting > N Computing Arrangements Based on Specific Computational Models > 99 Subject matter not provided for in other groups of this subclass)
Appl. No. 13168968 | Applicant: Ari M. Frank | Inventor: Ari M. Frank

Described herein are methods for identifying situations. The methods receive samples, each comprising a temporal window of token instances to which a user was exposed and an affective response annotation. One embodiment clusters the samples into a plurality of clusters utilizing a distance function that computes a distance between a pair comprising first and second samples. Another embodiment utilizes an Expectation-Maximization approach to assign situation identifiers. Still another embodiment involves training, utilizing the samples, a machine learning-based classifier to assign situation identifiers.

3. 20230320642 | Systems and Methods for Techniques to Process, Analyze and Model Interactive Verbal Data for Multiple Individuals
US, 12.10.2023 | Int. Class A61B 5/16 (A Human Necessities > 61 Medical or Veterinary Science; Hygiene > B Diagnosis; Surgery; Identification > 5 Measuring for diagnostic purposes; Identification of persons > 16 Devices for psychotechnics; Testing reaction times)
Appl. No. 18130947 | Applicant: The Trustees of Columbia University in the City of New York | Inventor: Baihan Lin

Disclosed are methods, systems, and other implementations for processing, analyzing, and modelling psychotherapy data. The implementations include a method for analyzing psychotherapy data that includes obtaining transcript data representative of spoken dialog in one or more psychotherapy sessions conducted between a patient and a therapist, extracting speech segments from the transcript data related to one or more of the patient or the therapist, applying a trained machine learning topic model process to the extracted speech segments to determine weighted topic labels representative of semantic psychiatric content of the extracted speech segments, and processing the weighted topic labels to derive a psychiatric assessment for the patient.
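The weighted-topic-label step can be illustrated with a toy scorer. The keyword lexicon below is a hypothetical stand-in for the trained machine learning topic model the abstract describes; the topic names and keywords are invented for illustration.

```python
# Hypothetical topic lexicon standing in for a trained topic model.
TOPIC_KEYWORDS = {
    "anxiety":    {"worried", "nervous", "panic"},
    "depression": {"hopeless", "tired", "empty"},
    "sleep":      {"insomnia", "awake", "nightmares"},
}

def weight_topics(segments):
    """Assign each topic a weight proportional to keyword hits across
    the extracted speech segments, so the result is a normalized
    weighted topic labeling of the session."""
    counts = {topic: 0 for topic in TOPIC_KEYWORDS}
    for segment in segments:
        words = set(segment.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            counts[topic] += len(words & keywords)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {topic: c / total for topic, c in counts.items()}
```

A downstream assessment step would then map these weights to a clinical score; that mapping is the part the patent leaves to the trained model.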

4. EP 3937170 | Speech Analysis for Monitoring or Diagnosis of a Health Condition
EP, 12.01.2022 | Int. Class G10L 25/66 (G Physics > 10 Musical Instruments; Acoustics > L Speech Analysis Techniques or Speech Synthesis; Speech Recognition; Speech or Voice Processing Techniques; Speech or Audio Coding or Decoding > 25 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 > 48 specially adapted for particular use > 51 for comparison or discrimination > 66 for extracting parameters related to health condition)
Appl. No. 20185364 | Applicant: NOVOIC LTD | Inventor: WESTON JACK
The invention relates to a computer-implemented method of training a machine learning model for performing speech analysis for monitoring or diagnosis of a health condition. The method uses training data comprising audio speech data and comprises obtaining one or more linguistic representations that each encode a sub-word, word, or multiple word sequence, of the audio speech data; obtaining one or more audio representations that each encode audio content of a segment of the audio speech data; combining the linguistic representations and audio representations into an input sequence comprising: linguistic representations of a sequence of one or more words or sub-words of the audio speech data; and audio representations of segments of the audio speech data, where the segments together contain the sequence of the one or more words or sub-words. The method further includes training a machine learning model using unsupervised learning to map the input sequence to a target output to learn combined audio-linguistic representations of the audio speech data for use in speech analysis for monitoring or diagnosis of a health condition.
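The sequence-building step can be sketched as follows. The modality tags and the masked-prediction target are illustrative assumptions: the abstract only requires that linguistic and audio representations be combined into one input sequence mapped to a target output under unsupervised learning.

```python
def build_input_sequence(word_reprs, audio_reprs):
    """Combine per-word linguistic representations and per-segment
    audio representations into a single modality-tagged sequence."""
    return ([("linguistic", r) for r in word_reprs]
            + [("audio", r) for r in audio_reprs])

def mask_one(sequence, index, mask_token=("mask", None)):
    """Masked-prediction setup for unsupervised training: hide one
    element and return (masked_sequence, target). A model would be
    trained to map the masked sequence back to the hidden
    representation, learning joint audio-linguistic structure."""
    masked = list(sequence)
    target = masked[index]
    masked[index] = mask_token
    return masked, target
```

In a real system the representations would be embedding vectors from text and audio encoders; lists of floats stand in for them here.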
5. 20210397824 | Sentiment analysis of content using expression recognition
US, 23.12.2021 | Int. Class G06V 40/16 (G Physics > 06 Computing; Calculating or Counting > V Image or Video Recognition or Understanding > 40 Recognition of biometric, human-related or animal-related patterns in image or video data > 10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands > 16 Human faces, e.g. facial parts, sketches or expressions)
Appl. No. 16907247 | Applicant: Actimize LTD. | Inventor: Vaibhav Mishra

A computerized method for providing a sentiment score by evaluating expressions of participants during a video meeting is provided herein. The computerized method comprises a Sentiment Analysis (SA) module that: (i) retrieves one or more recordings of a video meeting from the database of video meeting recordings of each participant in the video meeting and associates each recording with a participant; (ii) divides each retrieved recording into segments; (iii) processes the segments in a Facial Expression Recognition (FER) system to associate each segment with a timestamped sequence of expressions for each participant in the video meeting; and (iv) processes each segment in an Artificial Neural Network (ANN) having a dense layer, applying a prebuilt and pretrained deep learning model, to yield a sentiment score for each statement by each participant.
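Steps (ii) through (iv) can be sketched with stubs. The valence table below stands in for the FER system and the pretrained ANN; the expression labels and weights are invented for illustration, not taken from the patent.

```python
# Illustrative expression-to-valence table; a real FER system and a
# trained ANN would replace both stubs below.
EXPRESSION_VALENCE = {"happy": 1.0, "neutral": 0.0, "angry": -1.0, "sad": -0.5}

def segment_recording(frames, segment_len):
    """Step (ii): split a recording (a list of frames) into
    fixed-length segments."""
    return [frames[i:i + segment_len] for i in range(0, len(frames), segment_len)]

def sentiment_score(expression_sequence):
    """Steps (iii)-(iv), collapsed: average the valence of the
    timestamped expression sequence recognized for one participant."""
    if not expression_sequence:
        return 0.0
    return sum(EXPRESSION_VALENCE.get(e, 0.0)
               for e in expression_sequence) / len(expression_sequence)
```

Running this per participant and per statement yields the per-statement sentiment scores the abstract describes.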

6. WO/2022/008739 | Speech Analysis for Monitoring or Diagnosis of a Health Condition
WO, 13.01.2022 | Int. Class G10L 25/66 (G Physics > 10 Musical Instruments; Acoustics > L Speech Analysis Techniques or Speech Synthesis; Speech Recognition; Speech or Voice Processing Techniques; Speech or Audio Coding or Decoding > 25 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 > 48 specially adapted for particular use > 51 for comparison or discrimination > 66 for extracting parameters related to health condition)
Appl. No. PCT/EP2021/069221 | Applicant: NOVOIC LTD. | Inventor: WESTON, Jack
The invention relates to a computer-implemented method of training a machine learning model for performing speech analysis for monitoring or diagnosis of a health condition. The method uses training data comprising audio speech data and comprises obtaining one or more linguistic representations that each encode a sub-word, word, or multiple word sequence, of the audio speech data; obtaining one or more audio representations that each encode audio content of a segment of the audio speech data; combining the linguistic representations and audio representations into an input sequence comprising: linguistic representations of a sequence of one or more words or sub-words of the audio speech data; and audio representations of segments of the audio speech data, where the segments together contain the sequence of the one or more words or sub-words. The method further includes training a machine learning model using unsupervised learning to map the input sequence to a target output to learn combined audio-linguistic representations of the audio speech data for use in speech analysis for monitoring or diagnosis of a health condition.
7. 20220358789 | Sentiment analysis of content using expression recognition
US, 10.11.2022 | Int. Class G06V 40/16 (G Physics > 06 Computing; Calculating or Counting > V Image or Video Recognition or Understanding > 40 Recognition of biometric, human-related or animal-related patterns in image or video data > 10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands > 16 Human faces, e.g. facial parts, sketches or expressions)
Appl. No. 17838316 | Applicant: Actimize LTD. | Inventor: Vaibhav Mishra

A computerized method for providing a sentiment score by evaluating expressions of participants during a video meeting is provided herein. The computerized method comprises a Sentiment Analysis (SA) module that: (i) retrieves one or more recordings of a video meeting from the database of video meeting recordings of each participant in the video meeting and associates each recording with a participant; (ii) divides each retrieved recording into segments; (iii) processes the segments in a Facial Expression Recognition (FER) system to associate each segment with a timestamped sequence of expressions for each participant in the video meeting; and (iv) processes each segment in an Artificial Neural Network (ANN) having a dense layer, applying a prebuilt and pretrained deep learning model, to yield a sentiment score for each statement by each participant.

8. CA 3185590 | Speech Analysis for Monitoring or Diagnosis of a Health Condition
CA, 13.01.2022 | Int. Class G10L 25/30 (G Physics > 10 Musical Instruments; Acoustics > L Speech Analysis Techniques or Speech Synthesis; Speech Recognition; Speech or Voice Processing Techniques; Speech or Audio Coding or Decoding > 25 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 > 27 characterised by the analysis technique > 30 using neural networks)
Appl. No. 3185590 | Applicant: NOVOIC LTD. | Inventor: WESTON, JACK
The invention relates to a computer-implemented method of training a machine learning model for performing speech analysis for monitoring or diagnosis of a health condition. The method uses training data comprising audio speech data and comprises obtaining one or more linguistic representations that each encode a sub-word, word, or multiple word sequence, of the audio speech data; obtaining one or more audio representations that each encode audio content of a segment of the audio speech data; combining the linguistic representations and audio representations into an input sequence comprising: linguistic representations of a sequence of one or more words or sub-words of the audio speech data; and audio representations of segments of the audio speech data, where the segments together contain the sequence of the one or more words or sub-words. The method further includes training a machine learning model using unsupervised learning to map the input sequence to a target output to learn combined audio-linguistic representations of the audio speech data for use in speech analysis for monitoring or diagnosis of a health condition.
9. 20230255553 | Speech Analysis for Monitoring or Diagnosis of a Health Condition
US, 17.08.2023 | Int. Class A61B 5/00 (A Human Necessities > 61 Medical or Veterinary Science; Hygiene > B Diagnosis; Surgery; Identification > 5 Measuring for diagnostic purposes; Identification of persons)
Appl. No. 18004848 | Applicant: Novoic Ltd. | Inventor: Jack Weston

The invention relates to a computer-implemented method of training a machine learning model for performing speech analysis for monitoring or diagnosis of a health condition. The method uses training data comprising audio speech data and comprises obtaining one or more linguistic representations that each encode a sub-word, word, or multiple word sequence, of the audio speech data; obtaining one or more audio representations that each encode audio content of a segment of the audio speech data; combining the linguistic representations and audio representations into an input sequence comprising: linguistic representations of a sequence of one or more words or sub-words of the audio speech data; and audio representations of segments of the audio speech data, where the segments together contain the sequence of the one or more words or sub-words. The method further includes training a machine learning model using unsupervised learning to map the input sequence to a target output to learn combined audio-linguistic representations of the audio speech data for use in speech analysis for monitoring or diagnosis of a health condition.

10. 20120290514 | Methods for training saturation-compensating predictors of affective response to stimuli
US, 15.11.2012 | Int. Class G06F 15/18 (G Physics > 06 Computing; Calculating or Counting > F Electric Digital Data Processing > 15 Digital computers in general; Data processing equipment in general > 18 in which a program is changed according to experience gained by the computer itself during a complete run; Learning machines)
Appl. No. 13168965 | Applicant: Frank Ari M. | Inventor: Frank Ari M.

Described herein are methods for training a machine learning-based predictor of affective response to stimuli. The methods involve receiving samples comprising temporal windows of token instances to which a user was exposed, and target values representing affective response annotations of the user in response to the temporal windows of token instances. This data is used for the training of the predictor along with values indicative of the number of the token instances in the temporal windows of token instances, which are used to compensate for non-linear effects resulting from saturation of the user.
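The saturation-compensation idea can be illustrated with a toy predictor. The exponential diminishing-returns form and the constant `k` below are illustrative assumptions; the abstract only states that the token count is used to compensate for non-linear saturation effects.

```python
import math

def saturation_factor(n_tokens, k=0.3):
    """Diminishing-returns weight for a window with n_tokens token
    instances: each additional token contributes less as the window
    fills up. The exponential form and k are illustrative
    assumptions, not taken from the patent."""
    if n_tokens == 0:
        return 0.0
    return (1.0 - math.exp(-k * n_tokens)) / (k * n_tokens)

def predict_response(token_effects):
    """Predict the affective response to a temporal window of token
    instances: sum the per-token effects, then scale by a factor
    derived from the token count to model saturation of the user."""
    n = len(token_effects)
    return sum(token_effects) * saturation_factor(n)
```

The effect is that quadrupling the number of equally weighted tokens less than quadruples the predicted response, matching the saturation behavior the training targets are meant to capture.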