WO2020197899 - SELF-LEARNING DIGITAL ASSISTANT

Publication Number WO/2020/197899
Publication Date 01.10.2020
International Application No. PCT/US2020/023471
International Filing Date 19.03.2020
IPC
G06F 3/16 (2006.01)
  G - PHYSICS
  G06 - COMPUTING; CALCULATING OR COUNTING
  G06F - ELECTRIC DIGITAL DATA PROCESSING
  G06F 3 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
  G06F 3/16 - Sound input; Sound output
H04W 4/02 (2018.01)
  H - ELECTRICITY
  H04 - ELECTRIC COMMUNICATION TECHNIQUE
  H04W - WIRELESS COMMUNICATION NETWORKS
  H04W 4 - Services specially adapted for wireless communication networks; Facilities therefor
  H04W 4/02 - Services making use of location information
G10L 15/22 (2006.01)
  G - PHYSICS
  G10 - MUSICAL INSTRUMENTS; ACOUSTICS
  G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  G10L 15 - Speech recognition
  G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
CPC
G06F 3/017
  G - PHYSICS
  G06 - COMPUTING; CALCULATING; COUNTING
  G06F - ELECTRIC DIGITAL DATA PROCESSING
  G06F 3 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
  G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
G06F 3/167
  G - PHYSICS
  G06 - COMPUTING; CALCULATING; COUNTING
  G06F - ELECTRIC DIGITAL DATA PROCESSING
  G06F 3 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
  G06F 3/16 - Sound input; Sound output
  G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
G06F 9/4881
  G - PHYSICS
  G06 - COMPUTING; CALCULATING; COUNTING
  G06F - ELECTRIC DIGITAL DATA PROCESSING
  G06F 9 - Arrangements for program control, e.g. control units
  G06F 9/06 - using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
  G06F 9/46 - Multiprogramming arrangements
  G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
  G06F 9/4806 - Task transfer initiation or dispatching
  G06F 9/4843 - by program, e.g. task dispatcher, supervisor, operating system
  G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
G10L 15/22
  G - PHYSICS
  G10 - MUSICAL INSTRUMENTS; ACOUSTICS
  G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  G10L 15 - Speech recognition
  G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G10L 2015/223
  G - PHYSICS
  G10 - MUSICAL INSTRUMENTS; ACOUSTICS
  G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
  G10L 15 - Speech recognition
  G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
  G10L 2015/223 - Execution procedure of a spoken command
H04L 12/282
  H - ELECTRICITY
  H04 - ELECTRIC COMMUNICATION TECHNIQUE
  H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
  H04L 12 - Data switching networks
  H04L 12/28 - characterised by path configuration, e.g. local area networks [LAN], wide area networks [WAN]
  H04L 12/2803 - Home automation networks
  H04L 12/2816 - Controlling appliance services of a home automation network by calling their functionalities
  H04L 12/282 - based on user interaction within the home
Applicants
  • MICROSOFT TECHNOLOGY LICENSING, LLC [US]/[US]
Inventors
  • MILLER, Adi
  • KARASSIK, Roni
  • AVIGDOR, Danny
Agents
  • MINHAS, Sandip S.
  • ADJEMIAN, Monica
  • BARKER, Doug
  • CHATTERJEE, Aaron C.
  • CHEN, Wei-Chen Nicholas
  • CHOI, Daniel
  • CHURNA, Timothy
  • DINH, Phong
  • EVANS, Patrick
  • GABRYJELSKI, Henry
  • GUPTA, Anand
  • HINOJOSA-SMITH, Brianna L.
  • HWANG, William C.
  • JARDINE, John S.
  • LEE, Sunah
  • LEMMON, Marcus
  • MARQUIS, Thomas
  • MEYERS, Jessica
  • ROPER, Brandon
  • SPELLMAN, Steven
  • SULLIVAN, Kevin
  • SWAIN, Cassandra T.
  • WALKER, Matt
  • WIGHT, Stephen A.
  • WISDOM, Gregg
  • WONG, Ellen
  • WONG, Thomas S.
  • ZHANG, Hannah
  • TRAN, Kimberly
Priority Data
16/368,658  28.03.2019  US
Publication Language English (EN)
Filing Language English (EN)
Designated States
Title
(EN) SELF-LEARNING DIGITAL ASSISTANT
(FR) ASSISTANT NUMÉRIQUE À AUTO-APPRENTISSAGE
Abstract
(EN)
A response activity pattern may be ascertained from a set of computing devices obtained during a monitoring mode of operation. The monitoring mode of operation can be initiated when a user command is determined to be a new or unknown command. The response activity pattern may be used to generate a response profile indicating an activity to carry out using one or more user devices to perform a task associated with the user command. When an indication of a previously unknown user command to perform a task is received, the generated response profile can be used to perform the designated task by carrying out the activity using the one or more user devices. In this way, the activity that should be carried out to perform the task can be learned based on the monitored user activity related to the one or more user devices.
(FR)
Selon la présente invention, un modèle d'activité de réponse peut être déterminé à partir d'un ensemble de dispositifs informatiques obtenus pendant un mode de fonctionnement de surveillance. Le mode de fonctionnement de surveillance peut être déclenché lorsqu'une commande d'utilisateur est déterminée comme étant une commande nouvelle ou inconnue. Le modèle d'activité de réponse peut être utilisé pour générer un profil de réponse qui indique une activité à effectuer en utilisant un ou plusieurs dispositifs utilisateur pour effectuer une tâche associée à la commande utilisateur. Lorsqu'une indication d'une commande utilisateur précédemment inconnue pour effectuer une tâche est reçue, le profil de réponse généré peut être utilisé pour effectuer la tâche désignée en effectuant l'activité en utilisant le ou les dispositifs utilisateur. De cette manière, l'activité qui doit être effectuée pour effectuer la tâche peut être apprise sur la base de l'activité utilisateur surveillée connexe au ou aux dispositifs utilisateur.
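To make the learning flow described in the abstract concrete, the following is a minimal illustrative sketch in Python, not the patented implementation: an assistant that treats an unrecognized command as the trigger to enter a monitoring mode, records the device actions the user then performs manually, and saves them as a response profile that is replayed the next time the same command is received. The names (DeviceAction, ResponseProfile, Assistant) and the dictionary-based profile store are hypothetical choices made only for illustration.

```python
# Minimal sketch (assumed, not the patented implementation): learn a response
# profile for an unknown command by monitoring subsequent user device activity,
# then replay that profile when the command is received again.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass(frozen=True)
class DeviceAction:
    """One observed activity on a user device, e.g. ('tv', 'on')."""
    device: str
    action: str


@dataclass
class ResponseProfile:
    """Activities to carry out when the associated command is received."""
    command: str
    actions: List[DeviceAction] = field(default_factory=list)


class Assistant:
    def __init__(self) -> None:
        self.profiles: Dict[str, ResponseProfile] = {}
        self.monitoring_for: Optional[str] = None
        self.observed: List[DeviceAction] = []

    def handle_command(self, command: str) -> List[DeviceAction]:
        """Return the learned activities for a known command, or start
        the monitoring mode of operation for a new/unknown command."""
        profile = self.profiles.get(command)
        if profile is not None:
            return profile.actions
        self.monitoring_for = command   # unknown command: begin monitoring
        self.observed = []
        return []

    def observe(self, device: str, action: str) -> None:
        """Record user activity on a device while in monitoring mode."""
        if self.monitoring_for is not None:
            self.observed.append(DeviceAction(device, action))

    def finish_monitoring(self) -> None:
        """Turn the observed response activity pattern into a response profile."""
        if self.monitoring_for is not None:
            self.profiles[self.monitoring_for] = ResponseProfile(
                self.monitoring_for, list(self.observed))
            self.monitoring_for = None
            self.observed = []


if __name__ == "__main__":
    assistant = Assistant()
    assistant.handle_command("movie time")          # unknown: monitoring begins
    assistant.observe("living_room_lights", "dim")  # user acts manually
    assistant.observe("tv", "on")
    assistant.finish_monitoring()                   # response profile generated
    print(assistant.handle_command("movie time"))   # learned actions replayed
```

In this sketch, the "response profile" is simply an ordered list of observed device actions keyed by the command text; the published application describes the general mechanism, and any pattern extraction or scheduling beyond this is outside the scope of the example.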