
Search settings: Offices: all; Language: en; Stemming: true; Single Family Member: false; Include NPL: false


Full Query

AI functional applications > Computer Vision > Biometrics



1. 20180365400  BIOMETRIC AUTHENTICATION FOR CONNECTED VEHICLES INCLUDING AUTONOMOUS VEHICLES
US  20.12.2018
Int. Class G06F 21/32
  G  PHYSICS
  06  COMPUTING; CALCULATING OR COUNTING
  F  ELECTRIC DIGITAL DATA PROCESSING
  21  Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
  30  Authentication, i.e. establishing the identity or authorisation of security principals
  31  User authentication
  32  using biometric data, e.g. fingerprints, iris scans or voiceprints
Appl. No 15705668  Applicant: Brennan T. Lopez-Hinojosa  Inventor: Brennan T. Lopez-Hinojosa

A biometric authorization apparatus and method for a vehicle. A visor is connected to an interior cabin of a vehicle, and a biometric authentication interface is associated with the visor. The biometric authentication interface includes one or more biometric readers for scanning the biometric identifier(s) provided by a user. The biometric authentication interface facilitates analysis and processing of data associated with the biometric identifier(s) for use in authorizing the user with respect to the vehicle and optionally to also access an electronic system associated with the vehicle. The vehicle can be, for example, an autonomous vehicle and the user may be a passenger of the autonomous vehicle or in the case of a connected vehicle, a driver or a passenger. In some situations, the vehicle may be a rideshare vehicle and the user may be authorized (or not) for a rideshare trip in the vehicle.

2. 12274503  Myopia ocular predictive technology and integrated characterization system
US  15.04.2025
Int. Class A61B 3/14
  A  HUMAN NECESSITIES
  61  MEDICAL OR VETERINARY SCIENCE; HYGIENE
  B  DIAGNOSIS; SURGERY; IDENTIFICATION
  3  Apparatus for testing the eyes; Instruments for examining the eyes
  10  Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions
  14  Arrangements specially adapted for eye photography
Appl. No 18778027  Applicant: COGNITIVECARE INC.  Inventor: Venkata Narasimham Peri

According to an embodiment, disclosed is a system comprising a processor wherein the processor is configured to receive an input data comprising an image of an ocular region of a user, clinical data of the user, and external factors; extract, using an image processing module comprising adaptive filtering techniques, ocular characteristics, combine, using a multimodal fusion module, the input data to determine a holistic health embedding; detect, based on a machine learning model and the holistic health embedding, a first output comprising likelihood of myopia, and severity of myopia; predict, based on the machine learning model and the holistic health embedding, a second output comprising an onset of myopia and a progression of myopia in the user; and wherein the machine learning model is a pre-trained model; and wherein the system is configured for myopia prognosis powered by multimodal data.
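The abstract above describes a multimodal pipeline: per-modality features are fused into a "holistic health embedding", which a pre-trained model scores for myopia likelihood. The sketch below is purely illustrative and is not the patented model: the fusion is plain concatenation, and the scorer is a toy linear model with invented weights.

```python
import math

def fuse_multimodal(image_features, clinical_data, external_factors):
    """Concatenate per-modality feature vectors into one embedding
    (a stand-in for the patent's multimodal fusion module)."""
    return list(image_features) + list(clinical_data) + list(external_factors)

def score_myopia(embedding, weights, bias=0.0):
    """Toy linear scorer standing in for the pre-trained model:
    weighted sum passed through a sigmoid to give a likelihood."""
    z = sum(w * x for w, x in zip(weights, embedding)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: 2 ocular image features, 1 clinical value,
# 1 external factor, and invented model weights.
embedding = fuse_multimodal([0.8, 0.1], [0.3], [0.5])
likelihood = score_myopia(embedding, [1.2, -0.4, 0.9, 0.7])
```

A real system would replace both stand-ins with learned components (e.g. a fusion network and a trained classifier); the point here is only the data flow from modalities to embedding to score.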

3. 20220180975  METHODS AND SYSTEMS FOR DETERMINING GENE EXPRESSION PROFILES AND CELL IDENTITIES FROM MULTI-OMIC IMAGING DATA
US  09.06.2022
Int. Class G16B 40/30
  G  PHYSICS
  16  INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
  B  BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
  40  ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding
  30  Unsupervised data analysis
Appl. No 17553691  Applicant: The Broad Institute, Inc.  Inventor: Aviv Regev

The present disclosure relates to systems and methods of determining transcriptomic profiles from omic imaging data. The systems and methods train machine learning methods with intrinsic and extrinsic features of a cell and/or tissue to define transcriptomic profiles of the cell and/or tissue. Applicants utilize a convolutional autoencoder to define cell subtypes from images of the cells.
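The core idea in this abstract, encoding cell images into a low-dimensional latent space and grouping the latent codes into subtypes, can be sketched minimally. This is a hypothetical, dependency-free stand-in: the hand-written mean/contrast "encoder" replaces the convolutional autoencoder, and nearest-centroid assignment replaces learned clustering.

```python
def encode(image):
    """Stand-in encoder: map a 2D intensity patch to a 2-component
    latent code (mean brightness, simple contrast). A real system
    would use a trained convolutional autoencoder here."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    contrast = max(flat) - min(flat)
    return (mean, contrast)

def assign_subtype(code, centroids):
    """Nearest-centroid assignment over latent codes defines subtypes."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(range(len(centroids)), key=lambda i: dist2(code, centroids[i]))

# Two tiny invented "cell images": dim/low-contrast vs bright/high-contrast.
cells = [[[0.1, 0.2], [0.1, 0.2]],
         [[0.9, 0.1], [0.8, 0.2]]]
codes = [encode(c) for c in cells]
centroids = [(0.15, 0.1), (0.5, 0.8)]   # hypothetical cluster centers
subtypes = [assign_subtype(code, centroids) for code in codes]
```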

4. 11978438  Machine learning model updating
US  07.05.2024
Int. Class G10L 15/18
  G  PHYSICS
  10  MUSICAL INSTRUMENTS; ACOUSTICS
  L  SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
  15  Speech recognition
  08  Speech classification or search
  18  using natural language modelling
Appl. No 17215383  Applicant: Amazon Technologies, Inc.  Inventor: Anil K. Ramakrishna

Techniques for updating a machine learning (ML) model are described. A device or system may receive input data corresponding to a natural or non-natural language (e.g., gesture) input. Using a first ML model, the device or system may determine the input data corresponds to a data category of a plurality of data categories. Based on the data category, the device or system may select a ML training type from among a plurality of ML training types. Using the input data, the device or system may perform the selected ML training type with respect to a runtime ML model to generate an updated ML model.
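The routing described above (a first model buckets the input into a data category, and the category selects a training type for the runtime model) can be sketched as follows. Everything here is hypothetical: the category names, training types, and the keyword-style router standing in for the first ML model are invented for illustration.

```python
# Hypothetical mapping from data category to training type.
TRAINING_BY_CATEGORY = {
    "natural_language": "fine_tune",
    "gesture": "few_shot",
    "unknown": "skip",
}

def categorize(input_data):
    """Stand-in for the first ML model: route by input shape.
    Strings are treated as natural-language input, lists as
    gesture-frame sequences."""
    if isinstance(input_data, str):
        return "natural_language"
    if isinstance(input_data, list):
        return "gesture"
    return "unknown"

def select_training(input_data):
    """Pick the training type to apply to the runtime model."""
    category = categorize(input_data)
    return category, TRAINING_BY_CATEGORY[category]

cat, training = select_training("turn on the lights")
```

In the patent's terms, the selected training type would then be performed on the runtime ML model to produce the updated model; that step is elided here.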

5. 20140188462  System and method for analyzing ambiguities in language for natural language processing
US  03.07.2014
Int. Class G06F 17/00
  G  PHYSICS
  06  COMPUTING; CALCULATING OR COUNTING
  F  ELECTRIC DIGITAL DATA PROCESSING
  17  Digital computing or data processing equipment or methods, specially adapted for specific functions
Appl. No 14201974  Applicant: Zadeh Lotfi A.  Inventor: Zadeh Lotfi A.

Specification covers new algorithms, methods, and systems for artificial intelligence, soft computing, and deep learning/recognition, e.g., image recognition (e.g., for action, gesture, emotion, expression, biometrics, fingerprint, facial, OCR (text), background, relationship, position, pattern, and object), large number of images (“Big Data”) analytics, machine learning, training schemes, crowd-sourcing (using experts or humans), feature space, clustering, classification, similarity measures, optimization, search engine, ranking, question-answering system, soft (fuzzy or unsharp) boundaries/impreciseness/ambiguities/fuzziness in language, Natural Language Processing (NLP), Computing-with-Words (CWW), parsing, machine translation, sound and speech recognition, video search and analysis (e.g. tracking), image annotation, geometrical abstraction, image correction, semantic web, context analysis, data reliability (e.g., using Z-number (e.g., “About 45 minutes; Very sure”)), rules engine, control system, autonomous vehicle, self-diagnosis and self-repair robots, system diagnosis, medical diagnosis, biomedicine, data mining, event prediction, financial forecasting, economics, risk assessment, e-mail management, database management, indexing and join operation, memory management, and data compression.

6. 20210170590  Systems and methods for automatic anomaly detection in mixed human-robot manufacturing processes
US  10.06.2021
Int. Class B25J 9/16
  B  PERFORMING OPERATIONS; TRANSPORTING
  25  HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; HANDLES FOR HAND IMPLEMENTS; WORKSHOP EQUIPMENT; MANIPULATORS
  J  MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
  9  Programme-controlled manipulators
  16  Programme controls
Appl. No 16705602  Applicant: Mitsubishi Electric Research Laboratories, Inc.  Inventor: Emil Laftchiev

A system for detecting an anomaly in an execution of a task in mixed human-robot processes. Receiving human worker (HW) signals and robot signals. A processor to extract from the HW signals, task information, measurements relating to a state of the HW, and input into a Human Performance (HP) model, to obtain a state of the HW based on previously learned boundaries of the state of the HW, the state of the HW is then inputted into a Human-Robot Interaction (HRI) model, to determine a classification of an anomaly or no anomaly. Update HRI model with robot operation signals, HW signals and classified anomaly, determine a control action of a robot interacting with the HW or a type of an anomaly alarm using the updated HRI model and classified anomaly. Output the control action of the robot to change a robot action or output the type of the anomaly alarm.
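The two-stage structure in this abstract (a Human Performance model checks the worker's state against previously learned boundaries, then a Human-Robot Interaction step classifies the situation and drives a control action) can be sketched as below. The bounds, labels, and control actions are invented stand-ins, not the patented models.

```python
def hp_state(measurement, learned_bounds):
    """Human Performance model stand-in: a measurement inside the
    previously learned boundaries is 'nominal', otherwise 'degraded'."""
    lo, hi = learned_bounds
    return "nominal" if lo <= measurement <= hi else "degraded"

def hri_classify(hw_state, robot_error):
    """Human-Robot Interaction model stand-in: flag an anomaly if
    either the worker's state or the robot signal is off-nominal."""
    return "anomaly" if hw_state == "degraded" or robot_error else "no_anomaly"

def control_action(classification):
    """Map the classification to a robot control action."""
    return "slow_robot" if classification == "anomaly" else "continue"

# Hypothetical worker measurement outside the learned bounds.
state = hp_state(measurement=7.2, learned_bounds=(2.0, 6.5))
label = hri_classify(state, robot_error=False)
action = control_action(label)
```

In the patent, the HRI model is also updated online with the robot signals, HW signals, and the classified anomaly; that feedback loop is omitted from this sketch.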

7. WO/2021/112256  SYSTEMS AND METHODS FOR AUTOMATIC ANOMALY DETECTION IN MIXED HUMAN-ROBOT MANUFACTURING PROCESSES
WO  10.06.2021
Int. Class B25J 9/16
  B  PERFORMING OPERATIONS; TRANSPORTING
  25  HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; HANDLES FOR HAND IMPLEMENTS; WORKSHOP EQUIPMENT; MANIPULATORS
  J  MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
  9  Programme-controlled manipulators
  16  Programme controls
Appl. No PCT/JP2020/045361  Applicant: MITSUBISHI ELECTRIC CORPORATION  Inventor: LAFTCHIEV, Emil
A system for detecting an anomaly in an execution of a task in mixed human-robot processes. Receiving human worker (HW) signals and robot signals. A processor to extract from the HW signals, task information, measurements relating to a state of the HW, and input into a Human Performance (HP) model, to obtain a state of the HW based on previously learned boundaries of the state of the HW, the state of the HW is then inputted into a Human-Robot Interaction (HRI) model, to determine a classification of an anomaly or no anomaly. Update HRI model with robot operation signals, HW signals and classified anomaly, determine a control action of a robot interacting with the HW or a type of an anomaly alarm using the updated HRI model and classified anomaly. Output the control action of the robot to change a robot action or output the type of the anomaly alarm.

8. 20200137357  Wireless augmented video system and method to detect and prevent insurance billing fraud and physical assault for remote mobile application
US  30.04.2020
Int. Class H04N 7/18
  H  ELECTRICITY
  04  ELECTRIC COMMUNICATION TECHNIQUE
  N  PICTORIAL COMMUNICATION, e.g. TELEVISION
  7  Television systems
  18  Closed-circuit television systems, i.e. systems in which the video signal is not broadcast
Appl. No 16170078  Applicant: Michael Kapoustin  Inventor: Michael Kapoustin

The present invention discloses a wireless augmented video system and method to monitor medical insurance billing fraud, drug or other theft and elder patient or child abuse by the caregivers. The wireless augmented system comprises; a wireless augmented monitoring devices, a cloud system monitoring center and a caregiver employing agency. The wireless wearable augmented monitoring devices further includes a smart wearable nametag apparatus and smart wearable wristband. A wireless cellular transceiver configured within the nametag apparatus to stream the augmented video stream in response to an event detected. The nametag apparatus also incorporates a memory element to buffer the augmented data prior to transmission, a SIM card to connect to any data cellular network, a Bluetooth to connect peripheral devices, a Wi-Fi component, color LCD screen for displaying current caregiver's name and photograph on the name tag, and LED power status display, a microphone, speaker and a micro USB port.

9. WO/2023/059663  SYSTEMS AND METHODS FOR ASSESSMENT OF BODY FAT COMPOSITION AND TYPE VIA IMAGE PROCESSING
WO  13.04.2023
Int. Class A61B 5/00
  A  HUMAN NECESSITIES
  61  MEDICAL OR VETERINARY SCIENCE; HYGIENE
  B  DIAGNOSIS; SURGERY; IDENTIFICATION
  5  Measuring for diagnostic purposes; Identification of persons
Appl. No PCT/US2022/045706  Applicant: THE BROAD INSTITUTE, INC.  Inventor: KHERA, Amit
The subject matter disclosed herein relates to utilizing the silhouette of an individual to measure body fat volume and distribution. Particular examples relate to providing a system, a computer-implemented method, and a computer program product to utilize a binary outline, or silhouette, to predict the individual's fat depot volumes with machine learning models.
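The idea of predicting a quantity from a binary silhouette can be sketched minimally: derive simple shape features from a 0/1 mask and feed them to a regressor. Everything here is hypothetical, including the two features and the invented linear weights; the patent's approach uses trained machine learning models on real silhouettes.

```python
def silhouette_features(mask):
    """Compute two toy shape features from a binary mask (list of
    0/1 rows): area fraction and width/height aspect ratio."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for row in mask for j, v in enumerate(row) if v]
    area = sum(sum(row) for row in mask) / (len(mask) * len(mask[0]))
    height = rows[-1] - rows[0] + 1
    width = max(cols) - min(cols) + 1
    return area, width / height

def predict_depot_volume(mask, w_area=10.0, w_aspect=2.0, bias=1.0):
    """Stand-in linear regressor with invented weights."""
    area, aspect = silhouette_features(mask)
    return w_area * area + w_aspect * aspect + bias

# A tiny invented silhouette mask.
mask = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
]
volume = predict_depot_volume(mask)
```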

10. 20210192221  System and method for detecting deception in an audio-video response of a user
US  24.06.2021
Int. Class G06K 9/00
  G  PHYSICS
  06  COMPUTING; CALCULATING OR COUNTING
  K  GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
  9  Methods or arrangements for recognising patterns
Appl. No 16722083  Applicant: RTScaleAI Inc  Inventor: Vivek Iyer

A method for (of) detecting deception in an Audio-Video response of a user, using a server, in a distributed computing architecture, characterized in that the method including: enabling an Audio-Video connection with a user device upon receiving a request from a user; obtaining, from the user device, an Audio-Video response of the user corresponding to a first set of questions that are provided to the user by the server; extracting audio signals and video signals from the Audio-Video response; detecting an activity of the user by determining a plurality of Natural Language Processing (NLP) features from the extracted audio signals by (i) performing a speech to text translation and (ii) extracting the plurality of NLP features from the translated text, and determining a plurality of speech features from the extracted audio signals by (i) splitting the extracted audio signals into a plurality of short interval audio signals and (ii) extracting the plurality of speech features from the plurality of short interval audio signals; aggregating (i) the plurality of NLP features to obtain a plurality of temporal NLP features and (ii) the plurality of speech features to obtain a plurality of temporal speech features; aggregating the plurality of temporal NLP features and the plurality of temporal speech features to obtain first temporal aggregated features; detecting a plurality of micro-expressions of the user by splitting extracted video signals into a plurality of short fixed-duration video signals, detecting a plurality of Region Of Interest (ROI) in the plurality of short fixed-duration video signals, and comparing the plurality of detected ROI with video signals annotated with micro-expression labels that are stored in a database to detect the plurality of micro-expressions of the user in the plurality of short fixed-duration video signals; tracking and determining a gesture of the user from the extracted video signals; aggregating the plurality of micro-expressions 
and the gesture of the user to obtain second temporal aggregated features; aggregating the first temporal aggregated features and the second temporal aggregated features to obtain final temporal aggregated features; and detecting, using a machine learning model, a deception in the Audio-Video response based on the final temporal aggregated features.
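The claim's aggregation pipeline (per-interval NLP and speech features pooled into a first temporal aggregate, micro-expression and gesture features into a second, and the two combined before a final classifier) can be sketched as below. The mean pooling and the threshold "classifier" are stand-ins invented for illustration, not the patented machine learning model.

```python
def mean_pool(feature_windows):
    """Average per-interval feature vectors into one temporal vector
    (a stand-in for the claim's temporal aggregation)."""
    n = len(feature_windows)
    return [sum(col) / n for col in zip(*feature_windows)]

def detect_deception(nlp_windows, speech_windows, micro_expr, gesture,
                     threshold=0.5):
    """Combine the first and second temporal aggregates and apply a
    toy threshold scorer in place of the ML model."""
    first = mean_pool(nlp_windows) + mean_pool(speech_windows)
    second = list(micro_expr) + list(gesture)
    final = first + second
    score = sum(final) / len(final)
    return score > threshold

# Hypothetical per-interval feature values.
flag = detect_deception(
    nlp_windows=[[0.2, 0.9], [0.4, 0.7]],
    speech_windows=[[0.8], [0.6]],
    micro_expr=[0.9, 0.8],
    gesture=[0.7],
)
```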