1. WO2020072882 - LEVERAGING MULTIPLE DEVICES TO ENHANCE SECURITY OF BIOMETRIC AUTHENTICATION

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters


LEVERAGING MULTIPLE DEVICES TO ENHANCE SECURITY OF BIOMETRIC AUTHENTICATION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 62/741,431, entitled “Leveraging Multiple Devices To Enhance Security Of Biometric Authentication” and filed on October 4, 2018, which is hereby incorporated by reference in its entirety for all purposes.

BACKGROUND

[0002] User authentication is a critical component of today's Internet applications. Users login to email applications to send/receive emails and increasingly to enable a second authentication factor for other applications; to bank websites to check balances and execute transactions; and to payment apps to transfer money between family and friends. User authentication is also an integral part of any enterprise access control mechanism that administers access to sensitive data, services and resources.

[0003] For the last three decades, password-based authentication has been the dominant approach for authenticating users, relying on "what users know". But this approach can have security and usability issues. First, in password-based authentication, the servers store a function of all passwords and hence can be susceptible to breaches and offline dictionary attacks. Indeed, large-scale password breaches in the wild are extremely common. Moreover, password-based authentication is more susceptible to phishing, as the attacker only needs to capture the password, which serves as a persistent secret credential, in order to impersonate users.

[0004] Passwords can also pose challenging usability problems. High entropy passwords are hard to remember by humans, while low entropy passwords provide little security, and research has proven that introducing complex restrictions on password choices can backfire. Passwords may also be inconvenient and slow to enter, especially on mobile and Internet of Things devices that dominate user activities on the Internet.

[0005] There are major ongoing efforts in the industry to address some of these issues. For example, "unique" biometric features such as fingerprints (e.g. Google Pixel's fingerprint sensor [1]), facial scans (e.g. Face ID used in the Apple iPhone [2]), and iris scans (e.g. Samsung Galaxy phones) are increasingly popular first- or second-factor authentication mechanisms for logging into devices, making payments, and identifying to the multitude of applications on consumer devices. Studies show that biometrics are much more user-friendly, particularly on mobile devices, as users may not have to remember or enter any secret information.

[0006] Moreover, the industry is shifting away from transmitting or storing persistent user credentials/secrets on the server side. This can significantly reduce the likelihood of scalable attacks such as server breaches and phishing. For example, biometric templates and measurements can be stored and processed on the client side, where the biometric matching also takes place. A successful match then unlocks a private signing key for a digital signature scheme (i.e. a public key credential) that is used to generate a token over various information, such as a fresh challenge, the application's origin, and some user information. Only a one-time usable digital signature is transmitted to the server, which stores and uses a public verification key to verify the token and identify the user.

[0007] This is the approach taken by the FIDO Alliance [3], the world's largest industry-wide effort to enable an interoperable ecosystem of hardware-, mobile- and biometrics-based authenticators that can be used by enterprises and service providers. The framework is also widely adopted by major Internet players and built into all major browser implementations, such as recent versions of Chrome, Firefox, and Edge, in the form of the W3C WebAuthn standard API.

[0008] With biometrics and private keys stored on client devices, a primary challenge is to securely protect them. This is particularly crucial with biometrics since, unlike passwords, they are not replaceable. The most secure approach for doing so relies on hardware solutions such as secure enclaves and trusted execution environments that provide both physical and logical separation between various applications and the secrets. But hardware solutions are not always available on devices (e.g. not all mobile phones or IoT devices are equipped with secure elements), or can be costly to support at scale. Moreover, they provide very little programmability to developers and innovators. For example, programming a new biometric authentication solution into a Secure Element requires support from all parties involved, such as OEMs and OS developers.

[0009] Software-based solutions such as white-box cryptography are often based on ad-hoc techniques that are regularly broken. The provably secure alternative, i.e. cryptographic obfuscation, is extremely inefficient, and the community's confidence in its mathematical foundation is lacking. An alternative approach is to apply the "salt-and-hash" techniques often used to protect passwords to biometric templates before storing them on the client device. A naive salt-and-hash solution can fail since biometric matching is often a fuzzy match that checks whether the distance between two vectors is above a threshold or not.

[0010] A better way of implementing the hash-and-salt approach is via a powerful primitive known as a fuzzy extractor [4]. Unfortunately, this is still susceptible to offline dictionary attacks on the biometric (trying different biometric measurements until one is found that generates the correct public key), and it does not solve the problem of protecting the signing key, which is reconstructed in memory during each authentication session. Moreover, existing fuzzy extractor solutions do not support the wide range of distance metrics (e.g. cosine similarity, Euclidean distance, etc.) and the necessary accuracy level needed in today's practical biometric matching.
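To illustrate why naive salting and hashing breaks down for fuzzy data, the following Python sketch (with made-up feature values) hashes two biometric readings that differ in a single coordinate; an exact-match check on the digests fails even though the underlying vectors are close:

```python
import hashlib

def salted_hash(vec, salt):
    # Hash a feature vector with a salt, exactly as one would a password.
    data = salt + b"".join(v.to_bytes(2, "big") for v in vec)
    return hashlib.sha256(data).hexdigest()

salt = b"\x01" * 16
enrolled = [112, 87, 203, 45]   # template captured at registration (toy values)
fresh    = [112, 88, 203, 45]   # later measurement, off by one unit

# The readings are close under any distance metric, but their digests
# are unrelated, so an exact-match comparison of hashes always fails.
print(salted_hash(enrolled, salt) == salted_hash(fresh, salt))  # False
```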

[0011] Embodiments of the present disclosure address these and other issues individually and collectively.

BRIEF SUMMARY

[0012] Some embodiments of the present disclosure are directed to methods of using biometric information to authenticate a first electronic device of a user to a second electronic device. The first electronic device can store a first key share of a private key, wherein the second electronic device stores a public key associated with the private key, and wherein one or more other electronic devices of the user store other key shares of the private key. The first device can store a first template share of a biometric template of the user, wherein the one or more other electronic devices of the user store other template shares of the biometric template. When the first device receives a challenge message from the second electronic device, it can measure, by a biometric sensor, a set of biometric features of the user to obtain a measurement vector comprised of measured values of the set of biometric features, wherein the biometric template includes a template vector comprised of measured values of the set of biometric features previously measured from the user. The first device can send the measurement vector and the challenge message to the one or more other electronic devices. The first device can generate a first partial computation using the first template share, the first key share, and the challenge message and receive at least T other partial computations from the one or more other electronic devices, wherein each of the at least T other partial computations are generated using a respective template share, a respective key share, and the challenge message. The first device can generate a signature of the challenge message using the first partial computation and the at least T other partial computations and send the signature to the second electronic device.

[0013] Some embodiments of the invention are directed to generating partial computations with particular distance measures, while another embodiment is directed to generating partial computations with any suitable distance measure.

[0014] Other embodiments of the invention are directed to systems and computer readable media associated with the above-described methods.

[0015] These and other embodiments of the invention are described in further detail below with reference to the Figures and the Detailed Description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIGS. 1A and 1B show an overview of the FIDO Universal Authentication Framework flow.

[0017] FIG. 2 shows a high-level device registration flow according to some embodiments.

[0018] FIG. 3 shows a high-level device authentication flow according to some embodiments.

[0019] FIG. 4 shows a general flow diagram for fuzzy threshold signature generation according to some embodiments.

[0020] FIG. 5 shows functionality TOiFuz according to some embodiments.

[0021] FIG. 6 shows a circuit according to embodiments of the present invention.

[0022] FIG. 7 shows another circuit according to embodiments of the present invention.

[0023] FIG. 8 shows a block diagram of an example computer system usable with systems and methods according to embodiments of the present invention.

DETAILED DESCRIPTION

[0024] Embodiments of the disclosure can be motivated by the fact that most users own and carry multiple devices, such as their laptop, smartphone, and smartwatch, and have other Internet of Things devices around when authenticating such as their smart TV or smart-home appliances. Embodiments provide a new framework for client-side biometric-based authentication with the goal of distributing both the biometric templates as well as the secret signing key among multiple devices who collectively perform the biometric matching and signature generation without ever reconstructing the template or the signing key on one device. This framework may be referred to as Fuzzy Threshold Token Generation (FTTG). FTTG can also be used to protect biometric information on the server-side by distributing it among multiple servers who perform the matching and token generation (e.g. for a single sign-on authentication token) in a fully distributed manner.

[0025] We initiate a formal study of FTTG and introduce a number of concrete protocols secure within the framework. In protocols according to embodiments, during a one-time registration phase, both the template and the signing key are distributed among n devices, any t of which can generate tokens. The exact values of n and t are parameters of the scheme and vary across different protocols. Then, during each authentication session, the initiating device obtains the biometric measurement and exchanges a constant number of messages with t − 1 other devices in order to verify that the biometric measurement is close enough, with respect to some distance measure, to the secret-shared template and, if so, obtain a token, which may also be referred to as a digital signature, on a message chosen by the initiating party.

[0026] We formally define Fuzzy Threshold Token Generation schemes. We provide a unified definition that captures both privacy of biometrics and unforgeability of tokens in the distributed setting against a malicious adversary. Our definitions follow the real-ideal paradigm but may use a standalone setting for efficiency and simplicity.

[0027] We propose a four-round protocol for any distance function based on any two-round UC-secure multi-party computation protocol that may use a broadcast channel (for example [5, 6, 7, 8, 9]). This protocol works for any n and t (< n) and tolerates up to t − 1 malicious corruptions. Note that a generic application of constant-round MPC protocols may not meet the important consideration that every contacted party only needs to exchange messages with the single initiating party. This protocol is a feasibility result, as the resulting protocol is not black-box. To obtain protocols with concrete efficiency, embodiments can address the most popular distance functions used for biometric authentication, including Hamming distance (used for iris scans), cosine similarity, and Euclidean distance (both of which are used for face recognition).

[0028] For cosine similarity, embodiments can include a very efficient four-round protocol which supports any n with threshold t = 3 and is secure against the corruption of one party. The protocol can combine techniques such as an additively homomorphic encryption (AHE) scheme and associated NIZKs with garbled circuit techniques to obtain a hybrid protocol wherein arithmetic operations (e.g. inner products) are performed using the AHE and non-arithmetic operations (e.g. comparison) take place in a small garbled circuit. The same construction easily extends to one for Euclidean distance.
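For concreteness, the cosine similarity test that the protocol evaluates in a distributed fashion can be sketched in the clear as follows; the feature vectors and the acceptance threshold are illustrative values, not parameters from the disclosure:

```python
import math

def cosine_similarity(u, w):
    # cos(u, w) = <u, w> / (|u| * |w|); closer to 1 means more similar.
    dot = sum(a * b for a, b in zip(u, w))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_w = math.sqrt(sum(b * b for b in w))
    return dot / (norm_u * norm_w)

template    = [0.12, 0.80, 0.35, 0.44]   # enrolled feature vector (toy values)
measurement = [0.10, 0.79, 0.37, 0.45]   # fresh measurement
THRESHOLD = 0.95                          # hypothetical acceptance threshold

# Accept when similarity exceeds the threshold, i.e. the vectors
# point in nearly the same direction.
accept = cosine_similarity(template, measurement) >= THRESHOLD
print(accept)  # True
```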

I. INTRODUCTION

A. FIDO Universal Authentication Framework

[0029] Let us now take a closer look at the FIDO Alliance architecture [3], which proposes a way to authenticate a device to an authentication server. FIG. 1A shows a high-level flowchart for typical device registration.

[0030] In step 102, a user device 110 can generate a public key and a secret key for an asymmetric encryption scheme. Alternatively, the public key and the secret key may be provisioned to the user device 110.

[0031] In step 104, the user’s device 110 can send the public key to an authentication server 140. The authentication server 140 can store the user’s public key and register the device. The user’s device 110 may securely store the secret key, such as on a hardware security module. The user device 110 may secure the secret key with additional protections, such as with a previously entered biometric (e.g., a fingerprint of the user, a facial scan of the user).

[0032] FIG. 1B shows a high level flowchart for subsequent authentication of a registered device. The protocol may be initiated when the user’s device 110 attempts to access a secure or protected resource. For example, the authentication server 140 may control access to a secure database. As another example, the authentication server 140 may authenticate the user before initiating a transaction.

[0033] In step 106, the authentication server 140 can send a challenge to the user device 110. The challenge may be a message, and may be sent in response to the user device 110 initiating an access attempt.

[0034] In step 108, the user’s device 110 can sign the challenge using the secret key and send the signed challenge back to the authentication server 140. For example, the user device 110 may sign the challenge by encrypting the challenge message with the secret key. As another example, the user device 110 may generate a cryptogram comprising information in the challenge and the secret key. The authentication server 140 can use the previously provided public key to verify the challenge. For example, the authentication server 140 can use the public key to decrypt the signed message and determine if the decrypted message matches the challenge. If the signature is valid, then the user may be allowed access to the resource.
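The challenge-response exchange of steps 106-108 can be sketched with a toy Schnorr-style signature. The tiny group parameters (p = 23, q = 11, g = 2) are for illustration only; a real deployment would use a standardized group:

```python
import hashlib
import secrets

# Toy Schnorr-style sign/verify over a tiny group: g = 2 has order q = 11
# modulo p = 23. Illustrative only -- not a secure parameter choice.
p, q, g = 23, 11, 2

def keygen():
    sk = secrets.randbelow(q - 1) + 1   # secret signing key, kept on the device
    pk = pow(g, sk, p)                  # public verification key, sent to server
    return sk, pk

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(sk, challenge):
    k = secrets.randbelow(q - 1) + 1    # fresh per-signature nonce
    r = pow(g, k, p)
    e = H(r, challenge)
    s = (k + sk * e) % q
    return (r, s)

def verify(pk, challenge, sig):
    r, s = sig
    e = H(r, challenge)
    # g^s == r * pk^e (mod p) holds because s = k + sk*e.
    return pow(g, s, p) == (r * pow(pk, e, p)) % p

sk, pk = keygen()                  # registration: pk goes to the server
challenge = "nonce-1234"           # server-chosen fresh challenge
sig = sign(sk, challenge)          # device signs the challenge
print(verify(pk, challenge, sig))  # True
```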

[0035] The user can make this process more secure with the use of a biometric. During the registration phase, the user can enter biometric data into the user device 110. For example, the user may use a camera of the user device 110 to take a picture of their face. A biometric template (e.g., a facial scan) can be extracted from the biometric data and stored "securely" inside the user device 110. At the same time, the secret key/public key pair can be generated. The public key can be communicated to the authentication server 140, whereas the secret key is securely stored in the user device 110. The secret key is virtually "locked" with the template. Later, during sign-on, a candidate template is used to unlock the secret key, which can then be used in a standard identification protocol.

[0036] The FIDO specification emphasizes that during sign-on, approximate matching takes place inside the device "securely". One popular way to instantiate that is to use secure elements (for example, Apple iPhones [2]): the template is stored inside the secure element (SE) (for example, a hardware security module) at all times along with other sensitive data. On input of a candidate template, an approximate matching is performed inside the SE. However, using SEs has a number of drawbacks: SEs are usually expensive, platform-specific, offer limited programmability, and are not easily updatable (as updating may require changes in hardware). Furthermore, they are subject to side-channel attacks, which can be executed on (for example) a stolen device. Therefore, providing an efficient, provably secure, flexible, software-only solution is very important.

B. Security considerations

[0037] There are certain security constraints that are relevant to the invention. The biometric template should never be fully reconstructed, so that the template is not revealed if any device is compromised. The secret key also should never be fully reconstructed, for similar reasons. The distributed matching should work as long as more than a certain threshold of devices in the network are not compromised.

[0038] The secondary devices also should not learn whether the biometric matching was successful. This increases security because if the other devices do not learn the result of the biometric matching, they cannot leak it; if one of the other devices is compromised, a malicious party cannot use any information about the biometric matching. The secondary devices also do not learn whether signature generation was successful. For example, if a device has been compromised but can still participate in the authentication, the attacker cannot learn whether the information they hold about the biometric template or the secret share is accurate or valid.

[0039] It is important to note that the other participating devices are not required to talk to each other or even know each other (all messages are exchanged between the initiating device and the other participants). In a typical usage scenario, one or two primary user devices (e.g. a laptop or a smartphone) play the role of the initiating device, and all other devices are only paired/connected to the primary device and may not even be aware of the presence of other devices in the system. This makes the design of an efficient and constant-round FTTG protocol significantly more challenging. We assume that devices that are connected have established a point-to-point authenticated channel (but not a broadcast channel).

C. Overview

1. Device registration

[0040] FIG. 2 shows a general overview of a device registration process with a distributed biometric according to embodiments.

[0041] In step 202, a user 205 can enter a biometric measurement into a primary user device 215. A biometric sensor of the primary user device 215 (e.g., a camera, a microphone, a fingerprint sensor) can be used to measure a set of biometric features of the user 205. For example, the user 205 can use a camera of the primary user device 215 to take a picture of their face for a facial scan. Other examples of biometric measurements may include voice recordings, iris scans, and fingerprint scans. The role of the primary device may also be performed by any trusted authority, which may not be a device of the user. As an example, the primary device may be a computer of an authentication system.

[0042] In step 204, the primary user device 215 can generate a public key and a private key for an asymmetric encryption scheme. The primary user device 215 can also use the biometric measurement to create a biometric template. The biometric template may include a template vector, and the template vector may comprise measured values of the set of biometric features of the user. For example, the primary user device 215 may compute distances between facial features identified in the picture of the user’s face. The computed distances may comprise the template vector of the biometric template.

[0043] In step 206, the primary user device 215 can generate shares of the private key and the template, as well as any other parameters that might be needed for later authentication. The primary user device 215 can then store a first key share of the private key. The user device may also store a first template share of the biometric template.

[0044] In step 208, the primary user device 215 can send other key shares of the private key, other template shares of the template, and other parameters to each of a plurality of the user's other user devices 225, 235. Other devices of the user may include, for example, laptops, smartphones, wearable devices, smart TVs, IoT connected devices, etc. Two other devices are shown in FIG. 2; however, embodiments may comprise more or fewer devices associated with the primary user device 215. Each of the other user devices 225, 235 may store its key share of the private key, its template share of the template, and the other parameters. The other user devices 225, 235 may or may not store the received information in secure storage.
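One standard way to realize the key- and template-sharing of steps 206-208 is Shamir secret sharing, sketched below under assumed parameters (a Mersenne-prime field, n = 3 devices, threshold t = 2). In the actual protocols the secret is never reassembled on any one device; the reconstruction here only demonstrates the share mechanics:

```python
import secrets

PRIME = 2**61 - 1   # field modulus for the shares (illustrative choice)

def share(secret, n, t):
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Divide via the modular inverse (Fermat's little theorem).
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

private_key = 123456789
shares = share(private_key, n=3, t=2)           # one share per device
print(reconstruct(shares[:2]) == private_key)   # True: any 2 of 3 suffice
```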

[0045] In step 210, the primary user device 215 can send the public key to an authentication server 245. The authentication server 245 can securely store the public key. The public key may be stored with an identifier of the user 205 and/or the primary user device 215. The primary user device 215 may now be registered.

2. Device authentication

[0046] FIG. 3 shows a general overview of device authentication with a distributed biometric when the primary user device 215 attempts to access a secure or protected resource. For example, the primary user device 215 can attempt to access a secure database controlled by the authentication server 245.

[0047] In step 302, the authentication server 245 can send a challenge message to the primary user device 215. The challenge message may be a vector.

[0048] In step 304, the user 205 can enter a biometric measurement into the primary user device 215. A biometric sensor of the primary user device 215 (e.g., a camera, a microphone, a fingerprint sensor) can be used to measure a set of biometric features of the user 205. For example, the user 205 may use a camera of the primary user device 215 to take a picture of their face to be used in a facial scan. As another example, the user 205 may use a sensor of the primary user device 215 to scan their fingerprint. The primary user device 215 may then generate a measurement vector that comprises measured values of the set of biometric features. For example, the measured values may be computed distances between facial features of the user.

[0049] In step 306, the primary user device 215 and other user devices 225, 235 can match the previously distributed biometric template with the new biometric measurement. The primary user device 215 can send the measurement vector and the challenge message to the other devices. Each user device 215, 225, 235 may generate a partial computation with its template share and the measurement vector. For example, an inner product may be computed between the template share and the measurement vector. In some embodiments, steps 306 and 308 may occur concurrently.

[0050] In step 308, the primary user device 215 and other user devices 225, 235 can sign the challenge message with the key shares of the secret key. Each user device 215, 225, 235 may generate a partial computation with the challenge message and a respective key share. The partial computation may be, for example, a partial encryption of the challenge message with the respective key share. The partial computation may also be generated with the result of the partial computation from the respective template share and the measurement vector. After generating each partial computation, each user device 225, 235 can send the partial computation to the primary user device 215. After receiving the partial computations, the primary user device 215 can generate a signature of the challenge message using the partial computations. The primary user device 215 may need to receive partial computations from a threshold number, T, of other devices. A first partial computation generated by the primary user device 215 may be one of the at least T partial computations. For example, the primary user device 215 may need to receive at least three partial computations, and may receive the partial computations from two other devices. The threshold may be lower than the total number of other devices that received key shares and template shares, so that a signature can still be generated if one or more of the other devices are compromised.
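The way partial computations can combine without ever assembling the key can be illustrated with a deliberately simplified linear "signature" s = sk · c mod Q over additive key shares. This is a sketch of the linearity that threshold signature schemes exploit, not an actual secure signature scheme, and all parameters are illustrative:

```python
import secrets

Q = 2**61 - 1   # illustrative prime modulus

def additive_shares(secret, n):
    # Split `secret` into n random values that sum to it modulo Q.
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

# Simplified placeholder "signature": s = sk * challenge mod Q.
# Each device applies only its own key share to the challenge, so the
# full key sk is never present on any single device.
sk = secrets.randbelow(Q)
key_shares = additive_shares(sk, 3)        # devices 215, 225, 235
challenge = 987654321

partials = [(k * challenge) % Q for k in key_shares]   # one per device
combined = sum(partials) % Q                           # initiator combines

print(combined == (sk * challenge) % Q)  # True: linearity does the work
```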

[0051] In step 310, the primary user device 215 can send the signature to the authentication server 245. The primary user device 215 may also send the challenge message to the authentication server. If a valid signature is generated, then the authentication server 245 will be able to use the previously provided public key to verify the signature and allow the user access to the resource. If a valid signature is not generated then the device will not be authenticated and the user will not gain access to the resource.

II. TECHNICAL OVERVIEW

[0052] We first briefly describe the notion of a Fuzzy Threshold Token Generation scheme.

[0053] Consider a set of n parties and a distribution W over vectors in Z_q^ℓ. Let Dist denote the distance measure under consideration. Initially, in a registration phase, a biometric template w is sampled and secret shared amongst all the parties. Also, the setup of an unforgeable threshold signature scheme is implemented, and the signing key is secret shared amongst the n parties. This is followed by a setup phase where suitable public and secret parameters are sampled by a trusted party and distributed amongst the n parties. Then, in the online authentication session (or sign-on phase), any party P* with input vector u that wishes to generate a token on a message m can interact with any set S consisting of t parties (including itself) to generate a token (signature) on the message m using the threshold signature scheme. Note that P* gets a token only if Dist(u, w) > d, where d is parameterized by the scheme. We recall that in this phase, the parties in set S are not allowed to talk to each other. In particular, the communication model only involves party P* interacting individually with each party in S.

[0054] The security definition for a Fuzzy Threshold Token Generation (FTTG) scheme captures two properties: privacy and unforgeability. Informally, privacy says that the long-term secrets, namely the biometric template w and the signing key of the threshold signature scheme, should be completely hidden from every party in the system. In preferred embodiments, the online input vector u used in each authentication session should be hidden from all parties except the initiator. Unforgeability requires that no party should be able to generate a token on any message m without participating in an authentication session as the initiator, interacting with at least t other parties, using message m and an input vector u such that Dist(u, w) > d. We formalize both these properties via a Real-Ideal security game against a malicious adversary. We refer the reader to Section V for the detailed definition, including a discussion of the various subtleties involved in formally defining the primitive to achieve meaningful security while still being able to build efficient protocols.

[0055] FIG. 4 shows a swim-lane flow diagram for a general embodiment of fuzzy threshold signature generation. FIG. 4 can be used for implementing authentication using biometric measurements, e.g., of one or more fingerprints, an eye, and the like.

[0056] In step 402, a first device 405 stores a first share of a private key, a second device 415 stores a public key associated with the private key, and other devices 425 store other shares of the private key. The private key and the public key can be keys generated by a trusted authority for an asymmetric cryptographic scheme. The trusted authority may be the first device. The second device may be an authentication server.

[0057] In step 404, the first device 405 stores the first share of a biometric template and the other devices 425 store other shares of the biometric template. The biometric template includes a template vector comprised of measured values of the set of biometric features previously measured from the user of the first device. The biometric template may have been generated by the trusted authority. In some embodiments, the biometric template may not be divided into shares. The first template share and the other template shares may all be the same value, and may be an encryption of the biometric template.

[0058] In step 406, the second device 415 sends a challenge message to the first device 405. This message may have been sent in response to the first device 405 requesting authentication in order to access some secure or protected resource. The challenge message may, for example, be a challenge message associated with the FIDO Universal Authentication Framework.

[0059] In step 408, the first device 405 obtains a biometric measurement from the user of the device. The biometric measurement can be obtained by a biometric sensor of the first device 405. The first device 405 may use the biometric sensor to measure a set of biometric features of the user to obtain a measurement vector comprised of measured values of the set of biometric features. For example, the biometric sensor may be a camera that takes a picture of a user's face. The first device may then measure distances between identified facial features in the picture to determine the biometric features. In some embodiments, the biometric measurement may be encrypted.

[0060] In step 410, the first device 405 sends the challenge message and the biometric measurement to the other devices 425. The first device 405 may also send other information or computations needed for the other devices 425 to generate partial computations. In some embodiments, the biometric measurement may be encrypted by the first device 405 before it is sent to the other devices 425. In other embodiments, the biometric measurement may not be sent to the other devices 425 with the challenge message.

[0061] In step 412, the first device 405 can generate a first partial computation using the challenge message, biometric measurement, and/or the first shares of the private key and template. The other devices 425 can generate other partial computations with the challenge message, biometric measurement, and/or their respective key share and respective template share. The first device 405 and other devices 425 may also make use of information provided by the trusted authority, such as keys for a pseudorandom function. The partial computations can be computations to determine if the template and measurement match, according to a pre-established distance measure. Thus the partial computations may be partial distances. The partial distances may be encrypted with an additively homomorphic encryption scheme, which may be a threshold fully homomorphic encryption (TFHE) scheme. The TFHE can allow the first device 405 and the other devices 425 to compute the partial computations with both additions and multiplications of encrypted values, without decrypting the values. The devices can also generate partial computations of a token that can be used to generate a signature of the challenge message. In some embodiments, the devices can partially decrypt a partial computation (or a partial distance).
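The partial-distance idea of step 412 can be sketched with additive sharing of the template: each device computes an inner product of its template share with the (public, in this sketch) measurement vector, and by linearity the partial results sum to the true inner product. Vector values and modulus are illustrative, and encryption of the partials is omitted:

```python
import secrets

Q = 2**31 - 1   # illustrative modulus for the arithmetic shares

def additive_shares(value, n):
    # Split `value` into n random values that sum to it modulo Q.
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

template    = [5, 9, 2, 7]   # enrolled template vector (toy values)
measurement = [5, 8, 2, 7]   # fresh measurement vector

# Share each template coordinate among 3 devices; device i holds
# one share of every coordinate.
coord_shares = [additive_shares(w, 3) for w in template]
device_templates = list(zip(*coord_shares))

# Each device computes a partial inner product with the measurement.
partials = [sum(s * u for s, u in zip(dev, measurement)) % Q
            for dev in device_templates]

full = sum(w * u for w, u in zip(template, measurement))
print(sum(partials) % Q == full % Q)  # True: shares of <w,u> sum to <w,u>
```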

[0062] In step 414, the other devices 425 send the partial computations to the first device 405. The first device 405 receives at least T partial computations, where T is a threshold for the distributed signature generation. The first partial computation may be one of the at least T partial computations. The first device 405 may receive zero knowledge proofs along with each partial computation. The zero knowledge proofs may be used to verify the at least T partial computations.

[0063] In step 416, the first device 405 uses the partial computations to generate a signature (e.g., a token) of the challenge message. For example, the first device 405 may evaluate the partial computations to receive partial signatures (e.g., shares of the token), which it can then combine to generate a complete signature. In some embodiments, the partial signatures may be encrypted, such as with an additively homomorphic encryption scheme. The first device 405 may add shares of additively homomorphic encryptions of partial distances between the template shares and the measurement vector to obtain a total distance between the measurement vector and the template vector. The total distance can be compared to a threshold, and if the total distance is less than the threshold, the first device 405 can generate the signature and sign the challenge message. In this way, the generation of a complete token can be tied to the success of the biometric computations. In some embodiments, a zero knowledge proof can be used by the other devices to verify the comparison of the total distance to the threshold. A zero knowledge proof can also be used by the other devices or the first device 405 to verify the at least T partial computations.
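The add-shares-then-compare step described above can be illustrated with a toy sketch. This is plain additive secret sharing over the integers, not the patent's encrypted protocol; the modulus Q, the helper names, and the example values are hypothetical.

```python
# Toy sketch: each device holds an additive share of the (squared) distance
# between the measurement and template; the first device sums the shares
# and compares the total against a threshold to decide whether to sign.
import random

Q = 2**61 - 1  # hypothetical modulus for the shares

def share(value, n):
    """Split `value` into n additive shares modulo Q."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def combine_and_check(partial_distances, threshold):
    """Recombine partial distances and test the match condition."""
    total = sum(partial_distances) % Q
    return total < threshold  # match iff total distance is below threshold

dist = 42                          # pretend squared distance
parts = share(dist, 3)             # one partial distance per device
assert combine_and_check(parts, threshold=100)    # 42 < 100: match
assert not combine_and_check(share(500, 3), 100)  # 500 >= 100: no match
```

In the scheme itself the partial distances would additionally be protected with additively homomorphic encryption, so that no single device sees the cleartext shares of the others.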

[0064] In some embodiments, in order to generate the signature, the first device 405 may generate a cryptographic program. The cryptographic program may conditionally use the set of keys to generate the signature when the measurement vector is within a threshold of the template vector. For example, the cryptographic program may generate the signature if a cosine similarity measure between the measurement vector and the template vector is less than a threshold. In some embodiments, the cryptographic program may include a garbled circuit. The garbled circuit may perform the steps of adding partial distances and comparing the total distance to the threshold. In some embodiments, the first device 405 may input the measurement vector into the garbled circuit, and thus the measurement vector can be compared to the template vector without the first device 405 sending the measurement vector to the other devices 425. The cryptographic program may also use properties of additively homomorphic encryption to determine if the measurement vector is within the threshold of the template vector. In some embodiments, the garbled circuit may output a string that can be used to decrypt the partial signatures. In some embodiments, the cryptographic program may reconstruct the biometric template using the template shares in order to compute the distance measure.

[0065] In step 418, the first device 405 sends the signed challenge message to the second device 415. The second device 415 can use the stored public key to verify the signature on the challenge message. If the signature is valid, the first device 405 can be authenticated to the second device 415.

A. General Purpose

[0066] We now describe the techniques used in a four round Fuzzy Threshold Token Generation scheme that works for any distance measure and is malicious secure against an adversary that can corrupt up to (t − 1) parties.

[0067] Our starting point is the observation that if all the parties could freely communicate in our model, then any r round malicious secure multiparty computation (MPC) protocol with a broadcast channel would also directly imply an r round Fuzzy Threshold Token Generation scheme if we consider the following functionality: the initiator P* has input (m, S, u); every party P_i ∈ S has input (m, S) and their respective shares of the template w and the signing key. The functionality outputs a signature on m to party P* if Dist(u, w) ≥ d and |S| = t. Recently, several elegant works [5, 6, 7, 8, 9, 11] have shown how to construct two round UC-secure MPC protocols in the CRS model with a broadcast channel from standard assumptions. However, since the communication model of our FTTG primitive does not allow all parties to interact amongst each other, our goal now is to emulate a two round MPC protocol π in our setting.

[0068] For simplicity, let’s first consider n = t = 3. That is, there are three parties: P1, P2, P3. Consider the case when P1 is the initiator. Now, in the first round of our FTTG scheme, P1 sends m to both parties and informs them that the set S it is going to be communicating with involves both of them. Then, in round 2, we have P2 and P3 send their round one messages of the MPC protocol π. In round 3 of our FTTG scheme, P1 sends its own round one message of the MPC protocol to both parties. Along with this, P1 also sends P2’s round one message to P3 and vice versa. So now, at the end of round 3 of our FTTG scheme, all parties have exchanged their first round messages of protocol π.

[0069] Our next observation is that since we care only about P1 getting output, in the underlying protocol π, only party P1 needs to receive everyone else’s messages in round 2! Therefore, in round 4 of our FTTG scheme, P2 and P3 can compute their round two messages based on the transcript so far and just send them to P1. This will enable P1 to compute the output of protocol π.

[0070] While the above FTTG scheme is correct, unfortunately, it is insecure. Note that in order to rely on the security of protocol π, we crucially need that for any honest party P_i, every other honest party receives the same first round message on its behalf. Further, embodiments can also require that all honest parties receive the same messages on behalf of the adversary. In our case, since the communication is being controlled and directed by P1 instead of a broadcast channel, this need not be true if P1 was corrupt and P2, P3 were honest. More specifically, one of the following two things could occur: (i) P1 can forward an incorrect version of P3’s round one message of protocol π to P2 and vice versa; (ii) P1 could send different copies of its own round 1 message of protocol π to P2 and P3.

[0071] The first problem can be solved quite easily as follows: we simply enforce that P3 sends a signed copy of its round 1 message of protocol π, which is also forwarded by P1 to P2. Then, P2 accepts the message as valid if the signature verifies. In the setup phase, we can distribute a signing key to P3 and a verification key to P2. Similarly, we can ensure that P2’s actual round 1 message of protocol π was forwarded by P1 to P3.

[0072] Tackling the second problem is a bit trickier. The idea is that instead of enforcing that P1 send the same round 1 message of protocol π to both parties, we will instead ensure that P1 learns their round 2 messages of protocol π only if it did indeed send the same round 1 message of protocol π to both parties. We now describe how to implement this mechanism. Let us denote msg2 to be P1’s round 1 message of protocol π sent to P2 and msg3 (possibly different from msg2) to be P1’s round 1 message of protocol π sent to P3. In the setup phase, we distribute two keys k2, k3 of a pseudorandom function (PRF) to both P2 and P3. Now, in round 4 of our FTTG scheme, P3 does the following: instead of sending its round 2 message of protocol π as is, it encrypts this message using a secret key encryption scheme where the key is PRF(k3, msg3). Then, in round 4, along with its actual message, P2 also sends PRF(k3, msg2), which would be the correct key used by P3 to encrypt its round 2 message of protocol π only if msg2 = msg3. Similarly, we use the key k2 to ensure that P2’s round 2 message of protocol π is revealed to P1 only if msg2 = msg3.
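The mechanism of paragraph [0072] can be sketched in a few lines. This is a toy illustration, not the patent's construction: HMAC-SHA256 stands in for the PRF, a one-time pad stands in for the secret key encryption scheme, and all key and message values are hypothetical.

```python
# P3 encrypts its round-2 message under PRF(k3, msg3); P2 independently
# releases PRF(k3, msg2). P1 recovers P3's message only if msg2 == msg3,
# i.e. only if P1 sent the same round-1 message to both parties.
import hmac, hashlib

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg))  # one-time pad, msg <= 32 bytes

k3 = b"shared-prf-key-for-p2-and-p3----"      # distributed during setup

msg_to_p2 = b"round-1 message"                 # what P1 sent to P2
msg_to_p3 = b"round-1 message"                 # what P1 sent to P3

round2_p3 = b"p3 round-2 data"                 # P3's round-2 message
ct = xor_encrypt(prf(k3, msg_to_p3), round2_p3)  # P3 encrypts
key_from_p2 = prf(k3, msg_to_p2)               # P2 releases the key

assert xor_encrypt(key_from_p2, ct) == round2_p3   # same message: decrypts

# Had P1 equivocated, P2 would have released a different key and the
# decryption of ct would fail:
assert prf(k3, b"different round-1 message") != key_from_p2
```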

[0073] The above approach naturally extends to arbitrary n, t by sharing two PRF keys between every pair of parties. Then, each party encrypts its round 2 message of protocol π with a secret key that is an XOR of all the PRF evaluations. We refer the reader to Section VII for more details.
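The XOR-of-PRF-evaluations key derivation mentioned above can be sketched as follows; again, HMAC-SHA256 stands in for the PRF and the pairwise keys are hypothetical placeholders.

```python
# Each party derives its encryption key as the XOR of PRF evaluations on
# the round-1 message it received, one evaluation per pairwise-shared key.
# The keys of two parties agree only if they saw the same round-1 message.
import hmac, hashlib

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def xor_all(blocks):
    out = bytes(32)
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

pair_keys = [b"k-pair-1", b"k-pair-2", b"k-pair-3"]   # pairwise PRF keys
msg = b"initiator round-1 message"

key_sender = xor_all(prf(k, msg) for k in pair_keys)
key_receiver = xor_all(prf(k, msg) for k in pair_keys)
assert key_sender == key_receiver                      # same msg, same key
assert xor_all(prf(k, b"other msg") for k in pair_keys) != key_sender
```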

B. Cosine Similarity

[0074] We now describe the techniques used in our four round efficient FTTG protocol for t = 3 for the Cosine Similarity distance measure. Our protocol is secure against a malicious adversary that can corrupt at most 1 party. Our construction is very similar for the closely related Euclidean Distance function, but we focus on Cosine Similarity in this section. Recall that for two vectors u, w, CS.Dist(u, w) = ⟨u, w⟩/(||u|| · ||w||), where ||x|| denotes the L2-norm of the vector x. First, we are going to assume that the distribution W samples vectors w such that ||w|| = 1. Then, instead of checking that CS.Dist(u, w) ≥ d, we are going to check that ⟨u, w⟩² ≥ d² · ⟨u, u⟩. This is just a syntactic change that allows us to construct more efficient protocols.
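The equivalence of the two checks (for a unit-norm template and a non-negative inner product) can be verified numerically; this is an illustrative sketch with made-up vectors, not part of the protocol.

```python
# With ||w|| = 1, CS.Dist(u, w) = <u, w> / ||u||, so CS.Dist(u, w) >= d
# is equivalent to <u, w>^2 >= d^2 * <u, u>, which avoids square roots.
import math

def inner(u, w):
    return sum(a * b for a, b in zip(u, w))

def cs_dist(u, w):
    return inner(u, w) / (math.sqrt(inner(u, u)) * math.sqrt(inner(w, w)))

u = [3.0, 4.0]
w = [0.6, 0.8]          # unit-norm template, ||w|| = 1
d = 0.9                 # similarity threshold

direct = cs_dist(u, w) >= d
squared = inner(u, w) ** 2 >= d ** 2 * inner(u, u)
assert direct == squared == True
```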

[0075] Our starting point is the following. Suppose we had t = 2. Then, embodiments can use Yao’s [12] two party semi-honest secure computation protocol to build a two round FTTG scheme. In the registration phase, we secret share w into two parts w1, w2 and give one part to each party. The initiator requests labels (via oblivious transfer) corresponding to his share of w and to his input u, and the garbled circuit, which has the other share of w hardwired into it, reconstructs w, checks that ⟨u, w⟩² ≥ d² · ⟨u, u⟩ and, if so, outputs a signature. In this protocol, we would have security against a malicious initiator, who only has to evaluate the garbled circuit, if we use an oblivious transfer protocol that is malicious secure in the CRS model. However, to achieve malicious security against the garbler, we would need expensive zero knowledge arguments. Now, in order to build an efficient protocol that achieves security against a malicious garbler and to actually make the protocol work for threshold t = 3, the idea is to distribute the garbling process between two parties. That is, consider an initiator P1 interacting with parties P2, P3. Now both P2 and P3 can generate one garbled circuit each using shared randomness generated during the setup phase, and the evaluator can just check if the two circuits are identical. In the registration phase, both P2 and P3 get the share w2 and a share of the signing key. Note that since the adversary can corrupt at most one party, this check guarantees that the evaluator can learn whether the garbled circuit was honestly generated. In order to ensure that the evaluator does not evaluate the two garbled circuits on different inputs, the garbled circuits can check that P1’s OT receiver queries made to both parties were the same. While this directly gives a two round FTTG scheme that works for threshold t = 3 and is secure against a malicious adversary that can corrupt at most one party, the resulting protocol is inefficient. Notice that in order to work for the Cosine Similarity distance measure, the garbled circuit will have to perform a lot of expensive operations: for vectors of length ℓ, we would have to perform O(ℓ) multiplications inside the garbled circuit. Our goal is to build an efficient protocol that performs only a constant number of operations inside the garbled circuit.

[0076] Our strategy to build an efficient protocol is to use additional rounds of communication to offload the heavy computation outside the garbled circuit. In particular, if we can first do the inner product computation outside the garbled circuit in the first phase of the protocol, then the resulting garbled circuit in the second phase would have to perform only a constant number of operations. In order to do so, we leverage the tool of efficient additively homomorphic encryption schemes [14, 15]. In our new protocol, in round 1, the initiator P1 will send an encryption of u. P1 can compute ⟨u, w1⟩ by itself. Both P2 and P3 respond with encryptions of ⟨u, w2⟩ computed homomorphically using the same shared randomness. Then, P1 can decrypt this ciphertext and compute ⟨u, w⟩. The parties can then run the garbled circuit based protocol as above in rounds 3 and 4 of our FTTG scheme: that is, P1 requests labels corresponding to ⟨u, w⟩ and ⟨u, u⟩, and the garbled circuit does the rest of the check as before. While this protocol is correct and efficient, there are still several issues.
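The identity being exploited here is simply the linearity of the inner product over an additive sharing of the template. A toy plaintext sketch (no encryption, made-up vectors):

```python
# If the template w is additively shared as w = w1 + w2, the inner product
# splits as <u, w> = <u, w1> + <u, w2>, so P1 computes <u, w1> locally while
# P2/P3 contribute <u, w2> (homomorphically over Enc(u) in the real protocol).
def inner(u, w):
    return sum(a * b for a, b in zip(u, w))

u  = [2, 1, 3]
w1 = [1, 0, 2]          # P1's template share
w2 = [4, 5, 1]          # share held by both P2 and P3
w  = [a + b for a, b in zip(w1, w2)]

assert inner(u, w) == inner(u, w1) + inner(u, w2)
```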

[0077] The first problem is that the inner product ⟨u, w⟩ is currently leaked to the initiator P1, thereby violating the privacy of the template w. To prevent this, we need to design a mechanism where no party learns the inner product entirely in the clear and yet the check happens inside the garbled circuit. A natural approach is for P2 and P3 to homomorphically compute an encryption of the result ⟨u, w2⟩ using a very efficient secret key encryption scheme. In our case, a one-time pad may suffice. Now, P1 only learns an encryption of this value and hence the inner product is hidden, while the garbled circuit, with the secret key hardwired into it, can easily decrypt the one-time pad.

[0078] The second major challenge is to ensure that the input on which P1 wishes to evaluate the garbled circuit is indeed the output of the decryption. If not, P1 could request to evaluate the garbled circuit on suitably high inputs of his choice, thereby violating unforgeability. In order to prevent this attack, P2 and P3 homomorphically compute not just x = ⟨u, w2⟩ but also a message authentication code (MAC) y on the value x using shared randomness generated in the setup phase. We use a simple one time MAC that can be computed using linear operations and hence can be computed under the additively homomorphic encryption scheme. Now, the garbled circuit also checks that the MAC verifies correctly, and from the security of the MAC, P1 cannot change the input between the two stages. Also, P1 can send encryptions of ⟨u, u⟩ in round 1 so that P2, P3 can compute a MAC on this as well, thereby preventing P1 from cheating on this part of the computation too.
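A standard linear one-time MAC of the kind alluded to above can be sketched as follows; the modulus and key values are toy placeholders, and in the protocol the tag would be computed under the additively homomorphic encryption rather than in the clear.

```python
# One-time MAC y = a*x + b mod q, with (a, b) drawn as shared randomness in
# the setup phase. Because the tag is linear in x, it can be computed
# homomorphically; the garbled circuit later re-checks y == a*x + b.
import random

q = 2**31 - 1                      # hypothetical prime modulus
a = random.randrange(1, q)         # setup-phase randomness
b = random.randrange(q)

def mac(x):
    return (a * x + b) % q

def verify(x, y):
    return y == (a * x + b) % q

x = 12345                          # e.g. the inner product <u, w2>
y = mac(x)
assert verify(x, y)
assert not verify(x + 1, y)        # changing the input invalidates the tag
```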

[0079] Another important issue to tackle is that we need to ensure that P1 does indeed send valid, well-formed encryptions. In order to do so, we rely on efficient zero knowledge arguments from the literature. We observe that in our final protocol, the garbled circuit does only a constant number of operations and the protocol is extremely efficient.

[0080] In order to further optimize the concrete efficiency of our protocol, as done by Mohassel et al. [13], only one of the two parties P2, P3 needs to send the entire garbled circuit. The other party can just send a hash of the garbled circuit, and P1 can just check that the hash values are equal.

III. PRELIMINARIES

A. Notation

[0081] The notation [j : x] can denote that the value x is private to party j. For a protocol π, we write [j : z′] ← π([i : (x, y)], [j : z], c) to denote that party i has two private inputs x and y; party j has one private input z; all the other parties have no private input; c is a common public input; and, after the execution, only j receives an output z′. ⟦x⟧_S denotes that each party i ∈ S has a private value x_i.

B. Basic Primitives

[0082] We refer the reader to [18] for the definition of threshold linear secret sharing schemes, secret key encryption, oblivious transfer, garbled circuits, non-interactive zero knowledge arguments, digital signatures, collision resistant hash functions and pseudorandom functions. We refer to [14] for a definition of additively homomorphic encryption schemes and to [19] for a definition of circuit privacy. We refer to [18] for the definition of secure multi-party computation. We refer to Appendix A for a definition of threshold oblivious pseudorandom functions and robust secret sharing.

C. Distance Measures

[0083] First, let’s recall that the L2 norm of a vector x = (x_1, …, x_n) is defined to be ||x|| = (x_1² + ⋯ + x_n²)^(1/2). We now define the various distance measures that we use in embodiments of the invention. This list is not limiting, as any suitable distance measure may be used.

[0084] Definition 1 (Hamming distance) For any two vectors u = (u_1, …, u_ℓ), w = (w_1, …, w_ℓ) ∈ Z_q^ℓ, the Hamming Distance between them is defined to be the number of positions j at which u_j ≠ w_j.

[0085] Hamming distance counts the number of points in which the measurement and template vectors differ. Two vectors of length ℓ each can be said to be close if their Hamming Distance is at most (ℓ − d), that is, if they are equal on at least d positions.
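The Hamming match rule above can be stated directly in code; this is a minimal illustrative sketch with hypothetical vectors.

```python
# Vectors of length l match under Hamming distance if they agree on at
# least d positions, i.e. their distance is at most l - d.
def hamming(u, w):
    return sum(1 for a, b in zip(u, w) if a != b)

def hamming_match(u, w, d):
    l = len(u)
    return hamming(u, w) <= l - d   # equal on at least d positions

u = [1, 0, 1, 1, 0]
w = [1, 0, 0, 1, 0]
assert hamming(u, w) == 1
assert hamming_match(u, w, d=4)      # equal on 4 positions
assert not hamming_match(u, w, d=5)
```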

[0086] Definition 2 (Cosine similarity) For any two vectors u, w ∈ Z_q^ℓ, the Cosine Similarity between them is defined as follows:

CS.Dist(u, w) = ⟨u, w⟩ / (||u|| · ||w||)
[0087] Cosine similarity uses the inner product of two vectors to determine the cosine of the angle between the vectors. Smaller angles correspond to more similar vectors. Thus, if the angle is small, the cosine of the angle is large, and if the cosine similarity is greater than an established threshold, then the vectors are said to match.

[0088] Definition 3 (Euclidean Distance) For any two vectors u, w ∈ Z_q^ℓ, the Euclidean Distance between them is defined as follows:

EC.Dist(u, w) = ⟨u, u⟩ + ⟨w, w⟩ − 2 · CS.Dist(u, w)

[0089] Euclidean distance measures the distance between the endpoints of two vectors as points in space. Two vectors that have a small Euclidean distance are closer together. Thus if the Euclidean distance is below an established threshold, then the vectors are said to match.
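Definition 3 rests on the standard expansion of the squared Euclidean distance; a toy numeric check (for unit-norm vectors, the cosine similarity in the formula coincides with the plain inner product used here):

```python
# Identity behind Definition 3: ||u - w||^2 = <u, u> + <w, w> - 2*<u, w>.
def inner(u, w):
    return sum(a * b for a, b in zip(u, w))

def euclidean_sq(u, w):
    return sum((a - b) ** 2 for a, b in zip(u, w))

u = [1, 2, 3]
w = [4, 0, 3]
assert euclidean_sq(u, w) == inner(u, u) + inner(w, w) - 2 * inner(u, w)
```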

D. Threshold Signature

[0090] We now formally define a Threshold Signature Generation Scheme [16] and the notion of unforgeability.

[0091] Definition 4 (Threshold Signature) Let n, t ∈ N. A threshold signature scheme TS is a tuple of four algorithms (Gen, Sign, Comb, Ver) that satisfy the correctness condition below.

• Gen(1^κ, n, t) → (pp, vk, sk_[n]). This is a randomized key-generation algorithm that takes n, t and the security parameter κ as input, and generates a signature verification-key vk, a shared signing-key sk_[n] = (sk_1, …, sk_n) and public parameters pp. (pp is an implicit input to all algorithms below.)

• Sign(sk_i, m) =: σ_i. This is a deterministic signing algorithm that takes a signing key-share sk_i as input along with a message m and outputs a partial signature σ_i.

• Comb({σ_i}_{i∈S}) =: σ/⊥. This is a deterministic algorithm that takes a set of partial signatures {σ_i}_{i∈S} and outputs a signature σ or ⊥ denoting failure.

• Ver(vk, (m, σ)) =: 1/0. This is a deterministic signature verification algorithm that takes a verification key vk and a candidate message-signature pair (m, σ) as input, and returns a decision bit (1 for valid signature and 0 otherwise).

[0092] For all κ ∈ N, any t, n ∈ N such that t ≤ n, all (pp, vk, sk_[n]) generated by Gen(1^κ, n, t), any message m, and any set S ⊆ [n] of size at least t, if σ_i = Sign(sk_i, m) for i ∈ S, then Ver(vk, (m, Comb({σ_i}_{i∈S}))) = 1.

[0093] Definition 5 (Unforgeability) A threshold signature scheme TS = (Gen, Sign, Comb, Ver) is unforgeable if for all n, t ∈ N with t ≤ n and any PPT adversary A, the following game outputs 1 with negligible probability (in the security parameter).

• Initialize. Run (pp, vk, sk_[n]) ← Gen(1^κ, n, t). Give pp, vk to A. Receive the set of corrupt parties C ⊂ [n] of size at most t − 1 from A. Then give sk_C to A. Define γ := t − |C|. Initialize a list L := ∅.

• Signing queries. On query (m, i) for i ∈ [n]\C, return σ_i = Sign(sk_i, m). Run this step as many times as A desires.

• Building the list. If the number of signing queries of the form (m, ·) is at least γ, then insert m into the list L. (This captures that A has enough information to compute a signature on m.)

• Output. Eventually receive output (m*, σ*) from A. Return 1 if and only if Ver(vk, (m*, σ*)) = 1 and m* ∉ L, and 0 otherwise.

E. Specific Threshold Signature Schemes

[0094] We now describe the threshold signature schemes of Boldyreva [16], based on the Gap-DDH assumption, and of Shoup [17], based on the RSA assumption. We will use these schemes in Section IX.

1. Scheme of Boldyreva

[0095] Let G = ⟨g⟩ be a multiplicative cyclic group of prime order p that supports pairing and in which CDH is hard. In particular, there is an efficient algorithm VerDDH(g^a, g^b, g^c, g) that returns 1 if and only if c = ab mod p for any a, b, c ∈ Z_p, and 0 otherwise. Let H: {0,1}* → G be a hash function modeled as a random oracle. Let Share be Shamir’s secret sharing scheme.

[0096] The threshold signature scheme is as follows:

• Setup(1^κ, n, t) → (⟦sk⟧, vk, pp). Sample s ← Z_p and get (s_1, …, s_n) ← Share(n, t, p, (0, s)). Set pp := (p, g, G), sk_i := s_i and vk := g^s. Give (sk_i, pp) to party i.

• PartEval(sk_i, x) → y_i. Compute w := H(x), h_i := w^{sk_i}, and output h_i.

• Combine({i, y_i}_{i∈S}) =: Token/⊥. If |S| < t, output ⊥. Otherwise, parse y_i as h_i for i ∈ S and output ∏_{i∈S} h_i^{λ_{i,S} mod p}, where λ_{i,S} is the Lagrange coefficient of party i with respect to the set S.

• Verify(vk, x, Token) =: 1/0. Return 1 if and only if VerDDH(H(x), vk, Token, g) = 1.
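The Combine step above (Lagrange interpolation in the exponent) can be demonstrated with a toy sketch. The group parameters below are tiny illustrative placeholders (an order-11 subgroup of Z_23^*), and no pairing-based verification is shown.

```python
# Toy sketch of Boldyreva-style combination: partial evaluations
# h_i = w^{s_i} are merged via Lagrange coefficients in the exponent,
# reconstructing w^s for the Shamir-shared key s.
p, P = 11, 23            # exponent field Z_p; order-p subgroup inside Z_P^*
w = 8                    # stand-in for H(x), an element of the subgroup
s = 7                    # signing key
c = 5                    # random coefficient, f(X) = s + c*X mod p
shares = {i: (s + c * i) % p for i in (1, 2, 3)}     # Shamir shares, t = 2

def lagrange(i, S):
    """Lagrange coefficient for party i at X = 0, over Z_p."""
    num, den = 1, 1
    for j in S:
        if j != i:
            num = (num * (-j)) % p
            den = (den * (i - j)) % p
    return (num * pow(den, -1, p)) % p

S = (1, 3)                                           # any t = 2 parties
partials = {i: pow(w, shares[i], P) for i in S}      # PartEval outputs
token = 1
for i in S:
    token = (token * pow(partials[i], lagrange(i, S), P)) % P

assert token == pow(w, s, P)                         # equals w^s
```

Any other size-t subset of parties reconstructs the same token, which is what makes the scheme a threshold scheme.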

2. Scheme of Shoup

[0097] Let Share be Shamir’s secret sharing scheme and H: {0,1}* → Z_N^* be a hash function modeled as a random oracle.

[0098] The threshold signature scheme is as follows:

• Setup(1^κ, n, t) → (⟦sk⟧, vk, pp). Let p′, q′ be two randomly chosen large primes of equal length and set p = 2p′ + 1 and q = 2q′ + 1. Set N = pq. Choose another large prime e at random and compute d ≡ e^{−1} mod Φ(N), where Φ(·) is Euler’s totient function. Then (d_1, …, d_n) ← Share(n, t, Φ(N), (0, d)). Let sk_i = d_i and vk = (N, e). Set pp = Δ where Δ = n!. Give (pp, vk, sk_i) to party i.

• PartEval(sk_i, x) → y_i. Output y_i = H(x)^{2Δd_i}.

• Combine({i, y_i}_{i∈S}) =: Token/⊥. If |S| < t, output ⊥. Otherwise, compute z = ∏_{i∈S} y_i^{2λ_{i,S}} mod N, where λ_{i,S} = Δ · ∏_{j∈S\{i}} j/(j − i) ∈ Z. Find integers (a, b) by the extended Euclidean algorithm such that 4Δ²a + eb = 1. Then compute Token = z^a · H(x)^b mod N. Output Token.

• Verify(vk, x, Token) =: 1/0. Return 1 if and only if Token^e = H(x) mod N.

F. Zero Knowledge Argument of Knowledge for Additively Homomorphic Encryption

[0099] In this section, we list a couple of NP languages, with respect to any additively homomorphic encryption scheme, for which we use efficient non-interactive zero knowledge argument of knowledge systems.

[0100] First, let (AHE.Setup, AHE.Enc, AHE.Add, AHE.ConstMul, AHE.Dec) be the algorithms of an additively homomorphic encryption scheme. Let pk denote a public key sampled by running the setup algorithm AHE.Setup(1^κ). Let M denote the message space for the encryption scheme.
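Paragraph [0104] below instantiates this interface with Paillier encryption; a minimal Paillier sketch with that interface is shown here. The primes are toy values for illustration only (real keys are on the order of 2048 bits), and the function names simply mirror the abstract interface.

```python
# Minimal Paillier sketch: Enc(m) = g^m * r^N mod N^2 with g = N + 1.
# Multiplying ciphertexts adds plaintexts; exponentiation by a constant
# multiplies the plaintext by that constant.
import math, random

def ahe_setup():
    p, q = 293, 433                      # toy primes, illustration only
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def ahe_enc(pk, m):
    n, g = pk
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def ahe_add(pk, c1, c2):                 # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    n, _ = pk
    return (c1 * c2) % (n * n)

def ahe_const_mul(pk, c, k):             # Enc(m)^k = Enc(k * m)
    n, _ = pk
    return pow(c, k, n * n)

def ahe_dec(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = ahe_setup()
c = ahe_add(pk, ahe_enc(pk, 20), ahe_const_mul(pk, ahe_enc(pk, 7), 3))
assert ahe_dec(pk, sk, c) == 41          # 20 + 3*7
```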

1. NP Languages

[0101] The first NP language is used to prove knowledge of a plaintext in a given ciphertext. The second one proves that, given three ciphertexts where the first two are well-formed and the prover knows the corresponding plaintexts and the randomness used for encrypting them, the third ciphertext was generated by running the algorithm AHE.ConstMul(·). That is, it proves that the third ciphertext contains an encryption of the product of the messages encrypted in the first two ciphertexts.

[0102] NP language L1 characterized by the following relation R1.

Statement: st = (ct, pk)

Witness: wit = (x, r)

R1(st, wit) = 1 if and only if:

- ct = AHE.Enc(pk, x; r).

[0103] NP language L2 characterized by the following relation R2.

Let ct1 = AHE.Enc(pk, x1; r1), ct2 = AHE.Enc(pk, x2; r2).

Statement: st = (ct1, ct2, ct3, pk)

Witness: wit = (x1, r1, x2, r2, r3)

R2(st, wit) = 1 if and only if:

- ct3 = AHE.ConstMul(pk, ct1, x2; r3).

2. Paillier Encryption Scheme.

[0104] With respect to the additively homomorphic encryption scheme of Paillier [14], we have efficient NIZK arguments for the above two languages. Formally, we have the following imported theorem:

[0105] Imported Theorem 1 [20] Assuming the hardness of the Nth Residuosity assumption, there exists a non-interactive zero knowledge argument of knowledge for the above two languages in the Random Oracle model.

[0106] The above zero knowledge arguments are very efficient and only use a constant number of group operations on behalf of both the prover and verifier.

IV. FUZZY THRESHOLD SIGNATURE GENERATION

[0107] We introduce the notion of fuzzy threshold token generation (FTTG). An FTTG scheme is defined with respect to a function Dist which computes the distance between two vectors, say from the space of ℓ-dimensional vectors over Z_q. In the registration phase, key shares of a threshold signature scheme are generated and a template w is chosen according to a distribution W over Z_q^ℓ. Then the shares of the key and template are distributed among the n parties. A separate set-up algorithm generates some common parameters and some secret information for each party. After the one-time set-up has been completed, any one of the n parties can initiate a sign-on session with a new measurement u. If at least t parties participate in the session and u is close to the distributed template w with respect to the measure Dist, then the initiating party obtains a valid signature.

[0108] Definition 6 (Fuzzy Threshold Token Generation). Let n, t ∈ N and let W be a probability distribution over vectors in Z_q^ℓ for some q, ℓ ∈ N. Let TS = (Gen, Sign, Comb, Ver) be a threshold signature scheme. An FTTG scheme for distance measure Dist: Z_q^ℓ × Z_q^ℓ → N with threshold d ∈ Z is given by a tuple (Registration, Setup, SignOn, Verify) that satisfies the correctness property stated below.

• Registration(1^κ, n, t, TS, q, ℓ, W, d) → (sk_[n], w_[n], pp, vk): On input the parameters, this algorithm first runs the key-generation of the threshold signature scheme, (sk_[n], pp, vk) ← Gen(1^κ, n, t). Then it chooses a random sample w ← W. At the end, every party i receives (sk_i, w_i, pp, vk), where w_i is a share of w. (We will implicitly assume that all protocols/algorithms below take pp as input.)

• Setup() → (pp_setup, s_1, …, s_n): Setup is an algorithm that outputs some common parameters pp_setup and some secret information s_i for each party. (pp_setup will also be an implicit input in the algorithms below.)

• SignOn((sk, w)_S, [j: (m, u, S)]) → ([j: t/⊥], [S: (m, j, S)]): SignOn is a distributed protocol through which a party j with an input u obtains a (private) token t (or ⊥, denoting failure) on a message m with the help of parties in a set S. Each party i ∈ S uses its private inputs (sk_i, w_i) in the protocol and outputs (m, j, S). Party j additionally outputs t/⊥. Further, in this protocol, party j can communicate with every party in the set S, but the other parties in S cannot interact directly with each other.

• Verify(vk, m, t) → {0, 1}: Verify is an algorithm which takes as input the verification key vk, a message m and a token t, runs the verification algorithm of the threshold signature scheme, b := Ver(vk, (m, t)), and outputs b.

[0109] Correctness. For all κ ∈ N, any n, t ∈ N such that t ≤ n, any threshold signature scheme TS, any q, ℓ ∈ N, any probability distribution W over Z_q^ℓ, any distance d ∈ Z_q, any measurement u ∈ Z_q^ℓ, any m, any S ⊆ [n] such that |S| = t, and any j ∈ [n], if (sk_[n], w_[n], pp, vk) ← Registration(1^κ, n, t, TS, q, ℓ, W, d), (pp_setup, s_1, …, s_n) ← Setup(), and ([j: out], [S: (m, j, S)]) ← SignOn((sk, w)_[n], [j: (m, u, S)]), then Verify(vk, m, out) = 1 if Dist(w, u) ≥ d.

[0110] For an FTTG scheme, one could consider two natural security considerations. The first one is the privacy of biometric information. A template is sampled and distributed in the registration phase. Clearly, no subset of t − 1 parties should get any information about the template from their shares. Then, whenever a party performs a new measurement and takes the help of other parties to generate a signature, none of the participants should get any information about the measurement, not even how close it was to the template. We allow the participants to learn the message that was signed, the identity of the initiating party, and the set of all participants. The second natural security consideration is unforgeability. Even if the underlying threshold signature scheme is unforgeable, it may still be possible to generate a signature without having a close enough measurement. An unforgeable FTTG scheme should not allow this.

[0111] We propose a unified real-ideal style definition to capture both considerations. In the real world, sessions of the sign-on protocol are run between the adversary and honest parties, whereas in the ideal world, they talk to the functionality F_DiFuz. Both worlds are initialized with the help of n, t, an unforgeable threshold signature scheme, parameters q, ℓ for the biometric space, a threshold d for a successful match, a distribution W, and a sequence U := (u_1, u_2, …, u_h) of measurements for honest parties. The indistinguishability condition that we will define below must hold for all values of these inputs. In particular, it should hold irrespective of the threshold signature scheme used for initialization, as long as it is unforgeable. The distribution W over the biometric space could also be arbitrary.

[0112] In the initialization phase, Registration is run in both the real and ideal worlds to generate shares of a signing key and a template (chosen as per W). In the real world, Setup is also run. The public output of both Registration and Setup is given to the adversary A. It outputs a set of parties C to corrupt along with a sequence ((m_1, j_1, S_1), …, (m_h, j_h, S_h)), which will later be used to initiate sign-on sessions from honest parties (together with (u_1, …, u_h)). The secret shares of corrupt parties are given to A and the rest of the shares are given to the appropriate honest parties. On the other hand, in the ideal world, the simulator S is allowed to pick the output of Setup. We will exploit this later to produce a simulated common reference string. Note, however, that the output of Setup (whether honest or simulated) will be part of the final distribution.

[0113] The evaluation phase in the real world can be one of two types. Either a corrupt party can initiate a sign-on session or A can ask an honest party to initiate a session using the inputs chosen before. In the ideal world, S talks to F_DiFuz to run sign-on sessions. Again, there are two options. If S sends (SignOn-Corrupt, m, u, j, S) to the functionality (where j is corrupt), then it can receive signature shares of honest parties in S on the message m, but only if u is close enough to w. When S sends (SignOn-Honest, sid, t), F_DiFuz waits to see if S wants to finish the session or not. If it does, then F_DiFuz computes a signature and sends it to the initiating (honest) party.

[0114] We say that an FTTG scheme is secure if the joint distribution of the view of the real world adversary and the outputs of honest parties is computationally indistinguishable from the joint distribution of the view of the ideal world adversary and messages honest parties get from the functionality.

[0115] There are several important things to note about the definition. It allows an adversary to choose which parties to corrupt based on the public parameters. It also allows the adversary to run sign-on sessions with arbitrary measurements. This can help it to generate signatures if some measurements turn out to be close enough. Even if none of them do, it can still gradually learn the template. Our definition does not allow inputs for sessions initiated by honest parties to be chosen adaptively during the evaluation phase. Thus, the definition is a standalone definition and not a (universally) composable one. This type of restriction helps us to design more efficient protocols, and in some cases without any trusted setup.

[0116] Definition 7 A fuzzy threshold token generation scheme FG = (Registration, Setup, SignOn, Verify) is secure if for any n, t such that t ≤ n, any unforgeable threshold signature scheme TS, any q, ℓ ∈ N, any distance d ∈ Z_q, and any PPT adversary A, there exists a PPT simulator S such that for any probability distribution W over Z_q^ℓ and any sequence U := (u_1, u_2, …, u_h) of measurements (where h = poly(κ) and u_i ∈ Z_q^ℓ),

(View_A, {Out-Real_i}_{i∈[n]\C}) ≈_c (View_S, {Out-Ideal_i}_{i∈[n]\C}),

where View_A and View_S are the views of A and S in the real and ideal worlds respectively, Out-Real_i is the concatenated output of (honest) party i in the real world from all the SignOn sessions it participates in (plus the parameters given to it during initialization), and Out-Ideal_i is the concatenation of all the messages that party i gets from the functionality F_DiFuz in the ideal world, as depicted in FIG. 5.

V. ANY DISTANCE MEASURE

[0117] In this section, we show how to construct a four round secure fuzzy threshold token generation protocol from any two round malicious-secure MPC protocol over a broadcast channel as the main technical tool. Our token generation protocol satisfies Definition 7 for any n, t and works for any distance measure. Formally, we show the following theorem:

[0118] Theorem 1 Assuming the existence of threshold signatures, threshold secret sharing, two round UC-secure MPC protocols in the CRS model over a broadcast channel that are secure against malicious adversaries corrupting up to (t − 1) parties, secret key encryption, pseudorandom functions, and strongly unforgeable signatures, there exists a four round secure fuzzy threshold token generation protocol satisfying Definition 7 for any n, t and any distance measure.

[0119] Note that such a two round MPC protocol can be built assuming DDH/LWE/QR/Nth Residuosity [5, 6, 7, 8, 9, 11]. All the other primitives can be based on the existence of injective one way functions.

[0120] Instantiating the primitives used in the above theorem, we get the following corollary:

[0121] Corollary 2 Assuming the existence of injective one way functions and A ∈ {DDH, LWE, QR, Nth Residuosity}, there exists a four round secure fuzzy threshold token generation protocol satisfying Definition 7 for any n, t, any distance measure Dist and any threshold d.

A. Construction

[0122] We first list some notation and the primitives used before describing our construction.

[0123] Let Dist denote the distance function and d denote the threshold distance value that signifies a match. Let the n parties be denoted by P_1, ..., P_n respectively. Let λ denote the security parameter. Let W denote the distribution from which the random template vector is sampled. Assume the vectors are of length ℓ, where ℓ is a polynomial in λ. Each element of such a vector is an element of a field F over some large prime modulus q.
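Dist is deliberately left abstract in this section. As a purely illustrative sketch, the following shows two common candidate instantiations (Hamming and squared Euclidean distance) together with the match predicate Dist(w, u) < d; the function names and the default threshold are assumptions for the example, not values fixed by the construction.

```python
# Illustrative only: two candidate distance functions over length-l vectors.
# The protocol in this section is agnostic to the choice of Dist.

def hamming_dist(w, u):
    """Number of coordinates where the two vectors differ."""
    assert len(w) == len(u)
    return sum(1 for a, b in zip(w, u) if a != b)

def sq_euclidean_dist(w, u):
    """Squared Euclidean distance, computed over the integers."""
    assert len(w) == len(u)
    return sum((a - b) ** 2 for a, b in zip(w, u))

def is_match(w, u, dist=hamming_dist, d=2):
    """Dist(w, u) < d signals that measurement u matches template w."""
    return dist(w, u) < d
```

Any other distance measure with an efficiently computable circuit could be dropped in for `dist` without changing the surrounding protocol.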

[0124] Let TS = (TS.Gen, TS.Sign, TS.Combine, TS.Verify) be a threshold signature scheme. Let SKE = (SKE.Enc, SKE.Dec) denote a secret key encryption scheme. Let (Share, Recon) be a (t, n) threshold secret sharing scheme. Let (Gen, Sign, Verify) be a strongly unforgeable digital signature scheme. Let PRF denote a pseudorandom function.

[0125] Let π be a two round UC-secure MPC protocol in the CRS model over a broadcast channel that is secure against a malicious adversary corrupting up to (t − 1) parties. Let π.Setup denote the algorithm used to generate the CRS. Let (π.Round_1, π.Round_2) denote the algorithms used by any party to compute the messages in each of the two rounds, and π.Out denote the algorithm to compute the final output. Further, let π.Sim use algorithms (π.Sim_1, π.Sim_2) to compute the first and second round messages respectively. Note that since we consider a rushing adversary, the algorithm π.Sim_1(·) does not require the adversary's input or output. Let π.Ext denote the extractor that, on input the adversary's round one messages, extracts its inputs. Let π.Sim.Setup denote the algorithm used by π.Sim to compute the simulated CRS.

[0126] We now describe the construction of our four round secure fuzzy threshold token generation protocol π_Any for any n and t.

1. Registration

[0127] In the registration phase, the following algorithm is executed by a trusted authority. The trusted authority may be a device that will participate in subsequent biometric matching and signature generation. If so, at the end of the registration phase it should delete the information that it should not retain; that is, it should delete the shares corresponding to the other devices.

[0128] The trusted authority can sample a random vector w from the distribution W of biometric data and save it as the biometric template. The trusted authority can then compute the appropriate shares of the template (w_1, ..., w_n), depending on the key sharing algorithm being used, the number of devices, the threshold, and the security parameter, using a share generation algorithm (w_1, ..., w_n) ← Share(1^λ, w, n, t). The public key vk_TS, secret key shares sk_1^TS, ..., sk_n^TS, and other relevant parameters pp_TS for the threshold signature scheme can be generated with the threshold generation algorithm (pp_TS, vk_TS, sk_1^TS, ..., sk_n^TS) ← TS.Gen(1^λ, n, t). Then the trusted authority can send each device the relevant information (e.g., the template share, secret key share, public key, and other threshold signature parameters). For example, the i-th device (party P_i) can receive (w_i, pp_TS, vk_TS, sk_i^TS).
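As a concrete illustration of the share generation step (w_1, ..., w_n) ← Share(1^λ, w, n, t), the following toy sketch Shamir-shares each coordinate of the template over a prime field. The field modulus is an arbitrary example value, TS.Gen is omitted entirely, and none of the names below come from the construction itself.

```python
import secrets

# Toy registration sketch: the trusted authority Shamir-shares each
# coordinate of the template w among n devices with threshold t.

P = 2**61 - 1  # prime field modulus (illustrative choice)

def share_scalar(s, n, t):
    """(t, n) Shamir sharing of one field element s."""
    coeffs = [s] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recon_scalar(shares):
    """Lagrange interpolation at x = 0 from at least t shares."""
    total = 0
    for idx, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for jdx, (xj, _) in enumerate(shares):
            if idx == jdx:
                continue
            num = num * (-xj) % P
            den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def register_template(w, n, t):
    """Share(1^lam, w, n, t): device i receives one share per coordinate."""
    per_coord = [share_scalar(c, n, t) for c in w]
    return [[coord[i] for coord in per_coord] for i in range(n)]
```

Any t of the n per-coordinate shares suffice to reconstruct that coordinate, while fewer than t reveal nothing about it.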

2. Setup

[0129] Set up can also be done by a trusted authority. The trusted authority may be the same trusted device that completed the registration. If the trusted authority will potentially participate in biometric matching and signature generation, it can delete the information that it should not have at the end of the registration phase. That is, it should delete the shares corresponding to the other devices.

[0130] - Generate crs ← π.Setup(1^λ).

- For each i ∈ [n], compute (sk_i, vk_i) ← Gen(1^λ).

- For every i, j ∈ [n], sample (k_{i,j}^PRF, k_{j,i}^PRF) as uniformly random strings.

- For each i ∈ [n], give (crs, sk_i, {vk_j}_{j∈[n]}, {k_{i,j}^PRF, k_{j,i}^PRF}_{j∈[n]}) to party P_i.

[0131] First, the trusted authority can generate a common reference string crs for the multiparty computation scheme being used, via the setup algorithm crs ← π.Setup(1^λ). It can then generate shares of a secret key and public key for authenticating the computations, and compute shares of the relevant parameters and secrets for a distributed pseudorandom function. Each device is then sent the relevant information (e.g., the crs and key shares).

3. SignOn

[0132] In the SignOn phase, consider a party P* that uses an input vector u and a message m on which it wants a token. P* interacts with the other parties in the four round protocol below. The arrowhead in Round 1 denotes that in this round messages are outgoing from party P*.

- Round 1: (P* →) Party P* does the following:

i. Pick a set S consisting of t parties amongst P_1, ..., P_n. For simplicity, without loss of generality, we assume that P* is also part of set S.

ii. To each party P_i ∈ S, send (m, S).

- Round 2: (→ P*) Each party P_i ∈ S (except P*) does the following:

i. Participate in an execution of protocol π with the parties in set S using input y_i = (w_i, sk_i^TS) and randomness r_i to compute the circuit C defined in FIG. 6. That is, compute the first round message msg_{1,i} ← π.Round_1(y_i; r_i).

ii. Compute σ_{1,i} = Sign(sk_i, msg_{1,i}) using some randomness.

iii. Send (msg_{1,i}, σ_{1,i}) to party P*.

- Round 3: (P* →) Party P* does the following:

i. Let Trans_DiFuz denote the set of messages received in round 2.

ii. Participate in an execution of protocol π with the parties in set S using input y* = (w*, sk_*^TS, u, m) and randomness r* to compute the circuit C defined in FIG. 6. That is, compute the first round message msg_{1,*} ← π.Round_1(y*; r*).

iii. To each party P_i ∈ S, send (Trans_DiFuz, msg_{1,*}).

- Round 4: (→ P*) Each party P_i ∈ S (except P*) does the following:

i. Let Trans_DiFuz consist of a set of messages of the form (msg_{1,j}, σ_{1,j}), ∀j ∈ S \ P*. Output ⊥ if Verify(vk_j, msg_{1,j}, σ_{1,j}) ≠ 1 for any j.

ii. Let τ_1 denote the transcript of protocol π after round 1. That is, τ_1 = {msg_{1,j}}_{j∈S}.

iii. Compute the second round message msg_{2,i} ← π.Round_2(y_i, τ_1; r_i).

iv. Let (Trans_DiFuz, msg_{1,*}) denote the message received from P* in round 3. Compute ek_i = ⊕_{j∈S} PRF(k_{j,i}^PRF, msg_{1,*}).

v. Compute ct_i = SKE.Enc(ek_i, msg_{2,i}).

vi. For each party P_j ∈ S, compute ek_{j,i} = PRF(k_{i,j}^PRF, msg_{1,*}).

vii. Send (ct_i, {ek_{j,i}}_{j∈S}) to P*.

- Output Computation: Every party P_i ∈ S outputs (m, P*, S). Additionally, party P* does the following to generate a token:

i. For each party P_j ∈ S, do the following:

• Compute ek_j = ⊕_{i∈S} ek_{j,i}.

• Compute msg_{2,j} = SKE.Dec(ek_j, ct_j).

ii. Let τ_2 denote the transcript of protocol π after round 2.

iii. Compute the output of π as {Token_i}_{i∈S} ← π.Out(y*, τ_2; r*).

iv. Compute Token ← TS.Combine({Token_i}_{i∈S}).

v. Output Token if TS.Verify(vk_TS, m, Token) = 1. Else, output ⊥.
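The key mechanism in rounds 3 and 4 is that every device masks its second-round message under a key that is an XOR of PRF evaluations on msg_{1,*}, so P* can reassemble the key only if all devices saw the same msg_{1,*}. The following is a sketch under illustrative assumptions: HMAC-SHA256 plays the PRF and a one-time pad plays SKE.Enc/SKE.Dec; neither choice is mandated by the construction.

```python
import hashlib
import hmac

def prf(key, msg):
    """Illustrative PRF: HMAC-SHA256 with a 32-byte output."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ek_for(j, parties, k, msg1_star):
    """ek_j = XOR over i in S of PRF(k[i][j], msg1_star)."""
    out = bytes(32)
    for i in parties:
        out = xor(out, prf(k[i][j], msg1_star))
    return out

def ske_enc(ek, msg2):
    """One-time-pad stand-in for SKE.Enc (messages up to 32 bytes)."""
    return xor(ek, msg2.ljust(32, b"\0"))

def ske_dec(ek, ct):
    return xor(ek, ct).rstrip(b"\0")
```

If even one device received a different msg_{1,*}, its PRF share differs, the reassembled ek_j is wrong, and the ciphertext ct_j decrypts to garbage, which is exactly the consistency check the security proof leans on.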

4. Token Verification

[0133] Given a verification key vk_TS, a message m and a token Token, the token verification algorithm outputs 1 if TS.Verify(vk_TS, m, Token) outputs 1.

[0134] The correctness of the protocol directly follows from the correctness of the underlying primitives.

B. Security Proof

[0135] In this section, we formally prove Theorem 1.

[0136] Consider an adversary A who corrupts t* parties, where t* < t. The strategy of the simulator Sim for our protocol π_Any against a malicious adversary A is described below. Note that the registration phase takes place first, at the end of which the simulator gets the values to be sent to every corrupt party, which it then forwards to A.

1. Description of Simulator

[0137] Setup: Sim does the following:

(a) Generate crs_sim ← π.Sim.Setup(1^λ).

(b) For each i ∈ [n], compute (sk_i, vk_i) ← Gen(1^λ).

(c) For each i, j ∈ [n], sample (k_{i,j}^PRF, k_{j,i}^PRF) as uniformly random strings.

(d) For each i ∈ [n], if P_i is corrupt, give (crs_sim, sk_i, {vk_j}_{j∈[n]}, {k_{i,j}^PRF, k_{j,i}^PRF}_{j∈[n]}) to the adversary A.

[0138] SignOn Phase: Case 1 - Honest Party as P*

[0139] Suppose an honest party P* uses an input vector u and a message m for which it wants a token by interacting with a set of parties S. The arrowhead in Round 1 denotes that in this round messages are outgoing from the simulator. Sim gets the tuple (m, S) from the ideal functionality F_DiFuz and interacts with the adversary A as below:

• Round 1: (Sim →) Sim sends (m, S) to the adversary A for each corrupt party P_i ∈ S.

• Round 2: (→ Sim) On behalf of each corrupt party P_i ∈ S, receive (msg_{1,i}, σ_{1,i}) from the adversary.

• Round 3: (Sim →) Sim does the following:

(a) On behalf of each honest party P_j in S \ P*, compute msg_{1,j} ← π.Sim_1(1^λ, P_j) and σ_{1,j} = Sign(sk_j, msg_{1,j}).

(b) Let Trans_DiFuz denote the set of tuples of the form (msg_{1,j}, σ_{1,j}) received in round 2 and computed in the above step.

(c) Compute the simulated first round message of protocol π on behalf of honest party P* as follows: msg_{1,*} ← π.Sim_1(1^λ, P*).

(d) Send (Trans_DiFuz, msg_{1,*}) to the adversary for each corrupt party P_i ∈ S.

• Round 4: (→ Sim) On behalf of each corrupt party P_i ∈ S, receive (ct_i, {ek_{j,i}}_{j∈S}) from the adversary.

• Message to Ideal Functionality F_DiFuz: Sim does the following:

(a) Run π.Sim(·) on the transcript of the underlying protocol π.

(b) If π.Sim(·) decides to instruct the ideal functionality of π to deliver output to the honest party P* in protocol π, then so does Sim to the functionality F_DiFuz in our distributed fuzzy secure authentication protocol. Note that in order to do so, π.Sim(·) might internally use the algorithm π.Ext(·). Essentially, this step guarantees Sim that the adversary behaved honestly in the protocol.

(c) Else, Sim outputs ⊥.

[0140] SignOn Phase: Case 2 - Malicious Party as P*

[0141] Suppose a malicious party is the initiator P*. Sim interacts with the adversary A as below:

• Round 1: (→ Sim) Sim receives (m, S) from the adversary A on behalf of each honest party P_i.

• Round 2: (Sim →) Sim does the following:

(a) On behalf of each honest party P_j in S, compute and send the pair msg_{1,j} ← π.Sim_1(1^λ, P_j) and σ_{1,j} = Sign(sk_j, msg_{1,j}) to the adversary.

• Round 3: (→ Sim) Sim receives a tuple (Trans_DiFuz, msg_{1,*}) from the adversary A on behalf of each honest party P_i.

• Round 4: (Sim →) Sim does the following:

(a) On behalf of each honest party P_j, do the following:

i. Let Trans_DiFuz consist of a set of messages of the form (msg_{1,i}, σ_{1,i}), ∀i ∈ S \ P*. Output ⊥ if Verify(vk_i, msg_{1,i}, σ_{1,i}) ≠ 1 for any i.

ii. Let τ_1 denote the transcript of protocol π after round 1. That is, τ_1 = {msg_{1,j}}_{j∈S}.

(b) Let τ_1' denote the subset of τ_1 corresponding to all the messages generated by honest parties.

(c) If τ_1' is not equal for all the honest parties, output "SpecialAbort".

(d) If msg_{1,*} is not equal for all the honest parties, set a variable flag = 0.

(e) Query to Ideal Functionality F_DiFuz:

i. Compute inp_A = π.Ext(τ_1, crs_sim).

ii. Query the ideal functionality F_DiFuz with inp_A to receive output out_A.

(f) Compute the set of second round messages {msg_{2,j}} of protocol π on behalf of each honest party P_j by running the algorithm π.Sim_2(·).

(g) On behalf of each honest party P_j, do the following:

i. Let (Trans_DiFuz, msg_{1,*}) denote the message received from the adversary in round 3. Compute ek_j = ⊕_{i∈S} PRF(k_{i,j}^PRF, msg_{1,*}).

ii. If flag = 0, compute ct_j = SKE.Enc(rand, 0^{|msg_{2,j}|}) where rand is a string chosen uniformly at random.

iii. Else, compute ct_j = SKE.Enc(ek_j, msg_{2,j}).

iv. For each party P_i ∈ S, compute ek_{i,j} = PRF(k_{j,i}^PRF, msg_{1,*}).

v. Send (ct_j, {ek_{i,j}}_{i∈S}) to the adversary.

2. Hybrids

[0142] We now show that the above simulation strategy is successful against all malicious PPT adversaries. That is, the view of the adversary along with the output of the honest parties is computationally indistinguishable in the real and ideal worlds. We will show this via a series of computationally indistinguishable hybrids where the first hybrid Hyb0 corresponds to the real world and the last hybrid Hyb4 corresponds to the ideal world.

[0143] 1. Hyb_0 - Real World: In this hybrid, consider a simulator SimHyb that plays the role of the honest parties as in the real world.

[0144] 2. Hyb_1 - Special Abort: In this hybrid, SimHyb outputs "SpecialAbort" as done by Sim in round 4 of Case 2 of the simulation strategy. That is, SimHyb outputs "SpecialAbort" if all the signatures verify but the adversary does not send the same transcript of the first round of protocol π to all the honest parties.

[0145] 3. Hyb_2 - Simulate MPC messages: In this hybrid, SimHyb does the following:

- In the setup phase, compute the CRS as crs_sim ← π.Sim.Setup(1^λ).

- Case 1: Suppose an honest party plays the role of P*, do:

* In round 3, compute the first round messages msg_{1,j} of protocol π on behalf of every honest party P_j ∈ S and the first round message msg_{1,*} on behalf of the party P* by running the algorithm π.Sim_1(·) as done in the ideal world.

* Then, instead of P* computing the output by itself using the protocol messages, instruct the ideal functionality to deliver output to P*. That is, execute the "message to ideal functionality" step exactly as in the ideal world.

- Case 2: Suppose a corrupt party plays the role of P*, do:

* In round 2, compute the first round messages msg_{1,j} of protocol π on behalf of every honest party P_j ∈ S by running the algorithm π.Sim_1(·) as done in the ideal world.

* Interact with the ideal functionality exactly as done by Sim in the ideal world. That is, query the ideal functionality on the output of the extractor π.Ext(·) on input (τ_1, crs_sim) and receive output out_A.

* Compute the set of second round messages {msg_{2,j}} of protocol π on behalf of each honest party P_j as done by Sim in the ideal world.

[0146] 4. Hyb_3 - Switch DPRF Output in Case 2: In this hybrid, suppose a corrupt party plays the role of P*, SimHyb computes the value of the variable flag as done by the simulator Sim in round 4 of the simulation strategy. That is, SimHyb sets flag = 0 if the adversary did not send the same round 1 messages of protocol π to all the honest parties P_i ∈ S. Then, on behalf of every honest party P_j, SimHyb does the following:

- If flag = 1, compute ct_j as in Hyb_2.

- If flag = 0, compute ct_j = SKE.Enc(rand, msg_{2,j}) where rand is chosen uniformly at random and not as the output of the DPRF anymore.

[0147] 5. Hyb_4 - Switch Ciphertext in Case 2: In this hybrid, suppose a corrupt party plays the role of P*, SimHyb does the following: if flag = 0, compute ct_j = SKE.Enc(rand, 0^{|msg_{2,j}|}) as in the ideal world. This hybrid corresponds to the ideal world.

[0148] We will now show that every pair of successive hybrids is computationally indistinguishable.

[0149] Lemma 1 Assuming the strong unforgeability of the signature scheme, Hyb_0 is computationally indistinguishable from Hyb_1.

[0150] Proof. The only difference between the two hybrids is that in Hyb_1, SimHyb might output "SpecialAbort". We now show that SimHyb outputs "SpecialAbort" in Hyb_1 only with negligible probability.

[0151] Suppose not. That is, suppose there exists an adversary A that can cause SimHyb to output "SpecialAbort" in Hyb_1 with non-negligible probability; then we will use A to construct an adversary A_Sign that breaks the strong unforgeability of the signature scheme, which is a contradiction.

[0152] A_Sign begins an execution of the DiFuz protocol interacting with the adversary A as in Hyb_1. For each honest party P_j, A_Sign interacts with a challenger C_Sign and gets a verification key vk_j which is forwarded to A as part of the setup phase of DiFuz. Then, during the course of the protocol, A_Sign forwards signature queries from A to C_Sign and the responses from C_Sign to A.

[0153] Finally, suppose A causes A_Sign to output "SpecialAbort" with non-negligible probability. Then, it must be the case that for some tuple of the form (msg_{1,j}, σ_{1,j}) corresponding to an honest party P_j, the signature σ_{1,j} was not forwarded to A from C_Sign but still verified successfully. Thus, A_Sign can output the same tuple (msg_{1,j}, σ_{1,j}) as a forgery to break the strong unforgeability of the signature scheme with non-negligible probability, which is a contradiction.

[0154] Lemma 2 Assuming the security of the MPC protocol π, Hyb_1 is computationally indistinguishable from Hyb_2.

[0155] Proof. Suppose there exists an adversary A that can distinguish between the two hybrids with non-negligible probability. We will use A to construct an adversary A_π that breaks the security of the protocol π, which is a contradiction.

[0156] A_π begins an execution of the DiFuz protocol interacting with the adversary A and an execution of protocol π for evaluating circuit C (FIG. 6) interacting with a challenger C_π. Now, suppose A corrupts a set of parties P; A_π corrupts the same set of parties in the protocol π. First, the registration phase of protocol DiFuz takes place. Then, A_π receives a string crs from the challenger C_π which is either honestly generated or simulated. A_π sets this string to be the crs in the setup phase of the DiFuz protocol with A. The rest of the setup protocol is run exactly as in Hyb_0.

[0157] Case 1: Honest party as P*

[0158] Now, since we consider a rushing adversary for protocol π, on behalf of every honest party P_i, A_π first receives a message msg from the challenger C_π. A_π sets msg to be the message msg_{1,i} in round 3 of its interaction with A and then computes the rest of its messages to be sent to A exactly as in Hyb_1. A_π receives a set of messages corresponding to protocol π from A on behalf of the corrupt parties in P, which it forwards to C_π as its own messages for protocol π.

[0159] Case 2: Corrupt party as P*

[0160] As in the previous case, on behalf of every honest party P_i, A_π first receives a message msg from the challenger C_π. A_π sets msg to be the message msg_{1,i} in round 2 of its interaction with A. Then, in round 4, if the signatures verify, A_π forwards the set of messages corresponding to protocol π received from A on behalf of the corrupt parties in P to C_π as its own messages for protocol π. Then, on behalf of every honest party P_i, A_π receives a message msg from the challenger C_π as the second round message of protocol π. A_π sets msg to be the message msg_{2,i} in round 4 of its interaction with A and computes the rest of its messages to be sent to A exactly as in Hyb_1.

[0161] Notice that when the challenger C_π sends honestly generated messages, the experiment between A_π and A corresponds exactly to Hyb_1, and when the challenger C_π sends simulated messages, the experiment corresponds exactly to Hyb_2. Thus, if A can distinguish between the two hybrids with non-negligible probability, A_π can use the same guess to break the security of the scheme π with non-negligible probability, which is a contradiction.

[0162] Lemma 3 Assuming the security of the pseudorandom function, Hyb_2 is computationally indistinguishable from Hyb_3.

[0163] Proof. Suppose there exists an adversary A that can distinguish between the two hybrids with non-negligible probability. We will use A to construct an adversary A_PRF that breaks the security of the pseudorandom function, which is a contradiction.

[0164] The adversary A_PRF interacts with the adversary A in an execution of the protocol DiFuz. For each honest party P_j, A_PRF also interacts with a challenger C_PRF in the PRF security game. For each j, C_PRF sends the PRF keys corresponding to the set of corrupt parties as requested by A_PRF, which are then forwarded to A during the setup phase. Then, A_PRF continues interacting with A up to round 3 as in Hyb_2. Now, in round 4, suppose it computes the value of the variable flag to be 0 (as computed in Hyb_3); then A_PRF does the following: for each honest party P_j, forward to C_PRF the message msg_{1,*} received in round 3. Then, set the XOR of the set of responses from C_PRF to be the value ek_j used for generating the ciphertext ct_j.

[0165] Now notice that when the challenger C_PRF responds with a set of honest PRF evaluations for each honest party P_j, the interaction between A_PRF and A exactly corresponds to Hyb_2, and when the challenger responds with a set of uniformly random strings, the interaction between A_PRF and A exactly corresponds to Hyb_3. Thus, if A can distinguish between the two hybrids with non-negligible probability, A_PRF can use the same guess to break the pseudorandomness property of the PRF scheme with non-negligible probability, which is a contradiction.

[0166] Lemma 4 Assuming the semantic security of the secret key encryption scheme, Hyb_3 is computationally indistinguishable from Hyb_4.

[0167] Proof. Suppose there exists an adversary A that can distinguish between the two hybrids with non-negligible probability. We will use A to construct an adversary A_SKE that breaks the semantic security of the encryption scheme, which is a contradiction.

[0168] The adversary A_SKE interacts with the adversary A as in Hyb_3. Then, on behalf of every honest party P_j, before sending its round 4 message, A_SKE first sends the tuple (msg_{2,j}, 0^{|msg_{2,j}|}) to the challenger C_SKE of the secret key encryption scheme. Corresponding to every honest party, it receives a ciphertext which is either an encryption of msg_{2,j} or of 0^{|msg_{2,j}|} using a secret key chosen uniformly at random. Then, A_SKE sets this ciphertext to be the value ct_j and continues interacting with the adversary A exactly as in Hyb_3. Notice that when the challenger C_SKE sends ciphertexts of msg_{2,j}, the experiment between A_SKE and A corresponds exactly to Hyb_3, and when the challenger C_SKE sends ciphertexts of 0^{|msg_{2,j}|}, the experiment corresponds exactly to Hyb_4. Thus, if A can distinguish between the two hybrids with non-negligible probability, A_SKE can use the same guess to break the semantic security of the encryption scheme with non-negligible probability, which is a contradiction.

VI. ANY DISTANCE MEASURE USING THRESHOLD FHE

[0169] In this section, we show how to construct a fuzzy threshold token generation protocol for any distance measure using any FHE scheme with threshold decryption. Our token generation protocol satisfies the definition in Section III for any n, t and works for any distance measure. Formally, we show the following theorem:

[0170] Theorem 3 Assuming the existence of the following: FHE with threshold decryption, threshold signatures, secret key encryption, and strongly unforgeable digital signatures, there exists a four round secure fuzzy threshold token generation protocol for any n, t and any distance measure.

A. Construction

[0171] We first list some notation and the primitives used before describing our construction.

[0172] Let Dist denote the distance function that takes as input a template w and a measurement u, and let d denote the threshold distance value that signifies a match between w and u. Let C be a circuit that takes as input a template w, a measurement u and some string K, and outputs K if Dist(w, u) < d; otherwise it outputs 0. Let the n parties be denoted by P_1, ..., P_n respectively. Let λ denote the security parameter. Let W denote the distribution from which a random template vector is sampled. Assume the vectors are of length ℓ, where ℓ is a polynomial in λ.
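On plaintexts, the circuit C just described can be sketched as below; in the protocol, TFHE.Eval applies this same logic homomorphically over the ciphertexts ct_0, ct* and ct_1. Hamming distance is used purely as an illustrative choice of Dist, not one fixed by the construction.

```python
# Plaintext sketch of the circuit C: release the key K only when the
# measurement u matches the template w under the chosen distance.

def hamming(w, u):
    """Illustrative Dist: count of differing coordinates."""
    return sum(1 for a, b in zip(w, u) if a != b)

def circuit_C(w, u, K, d, dist=hamming):
    """Output K if Dist(w, u) < d, and 0 otherwise."""
    return K if dist(w, u) < d else 0
```

Because the TFHE evaluation of C yields an encryption of either K or 0, a device that is too far from the template learns nothing it can use to decrypt the token shares.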

[0173] Let TFHE = (TFHE.Gen, TFHE.Enc, TFHE.PartialDec, TFHE.Eval, TFHE.Combine) be a threshold FHE scheme. Let TS = (TS.Gen, TS.Sign, TS.Combine, TS.Verify) be a threshold signature scheme. Let SKE = (SKE.Gen, SKE.Enc, SKE.Dec) denote a secret key encryption scheme.

[0174] Let (Gen, Sign, Verify) be a strongly-unforgeable digital signature scheme. Let H be a collision-resistant hash function (e.g., modeled as a random oracle).

[0175] We now describe the construction of our four round secure fuzzy threshold token generation protocol π_Any-TFHE for any n and t.

1. Registration

[0176] In the registration phase, the following algorithm can be executed by a trusted authority:

• Sample a random w from the distribution W.

• Compute (pk, sk_1, ..., sk_n) ← TFHE.Gen(1^λ, n, t).

• Compute (pp_TS, vk_TS, sk_1^TS, ..., sk_n^TS) ← TS.Gen(1^λ, n, t).

• Compute K ← SKE.Gen(1^λ).

• Compute the ciphertexts

ct_0 ← TFHE.Enc(pk, w), ct_1 ← TFHE.Enc(pk, K).

• For each i ∈ [n], do the following:

(a) Compute (sk'_i, vk'_i) ← Gen(1^λ).

(b) Compute K_i = H(K, i).

(c) Give the following to party P_i:

(pk, sk_i, ct_0, ct_1, {vk'_j}_{j∈[n]}, sk'_i, pp_TS, vk_TS, sk_i^TS, K_i).

[0177] A trusted authority (e.g., a primary user device) can sample a biometric template w from a distribution W of biometric measurements. The trusted authority can also compute a public key pk and a plurality of private key shares sk_i for a threshold fully homomorphic encryption scheme TFHE, in addition to public parameters pp_TS, a verification key vk_TS, and a plurality of private key shares sk_i^TS for a threshold signature scheme TS, and a string K for a secret key encryption scheme SKE. Using the public key pk, the trusted authority can encrypt the biometric template w to form ciphertext ct_0 and encrypt the string K to form ciphertext ct_1.

[0178] Then, the trusted authority can compute a plurality of values for each electronic device i of n electronic devices (e.g., a first electronic device and other electronic devices). The trusted authority can compute a secret key share sk'_i and a verification key share vk'_i for a digital signature scheme. The trusted authority can also compute a hash K_i using the string K and a hash function H. The trusted authority can then send to each electronic device P_i the public key pk, the ciphertexts ct_0 and ct_1, verification key shares (vk'_1, ..., vk'_n), secret key share sk'_i, public parameters pp_TS, verification key vk_TS, private key share sk_i^TS, and hash K_i.
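The per-device derivation K_i = H(K, i) can be sketched as follows, with SHA-256 standing in for the hash H; that choice is an assumption for illustration, since the construction only requires a collision-resistant hash (modeled as a random oracle).

```python
import hashlib

# Sketch of K_i = H(K, i): each device gets a key derived from the
# master string K and its own index, so recovering K later lets P*
# re-derive every device's key.

def derive_device_key(K: bytes, i: int) -> bytes:
    h = hashlib.sha256()
    h.update(K)
    h.update(i.to_bytes(4, "big"))  # domain-separate by device index
    return h.digest()
```

Binding the index into the hash ensures the devices' keys are pairwise distinct even though they all derive from the same K.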

2. SignOn Phase

[0179] In the SignOn phase, consider a party P* that uses an input vector u and a message m on which it wants a token. P* interacts with the other parties in the below four round protocol. The arrowhead in Round 1 denotes that in this round messages are outgoing from party P*.

• Round 1: (P* →) Party P* does the following:

a) Compute the ciphertext ct* = TFHE.Enc(pk, u).

b) Pick a set S consisting of t parties amongst P_1, ..., P_n. For simplicity, without loss of generality, we assume that P* is also part of set S.

c) To each party P_i ∈ S, send (ct*, m).

• Round 2: (→ P*) Each party P_i ∈ S (except P*) does the following:

a) Compute the signature σ'_i = Sign(sk'_i, ct*).

b) Send σ'_i to the party P*.

• Round 3: (P* →) Party P* sends {σ'_i}_{i∈S} to each party P_i.

• Round 4: (→ P*) Each party P_i ∈ S (except P*) does the following:

a) If there exists i ∈ S such that Verify(vk'_i, ct*, σ'_i) ≠ 1, then output ⊥.

b) Otherwise, evaluate the ciphertext

ct = TFHE.Eval(pk, C, ct_0, ct*, ct_1),

and compute a partial decryption of ct as:

μ_i = TFHE.PartialDec(sk_i, ct).

c) Compute Token_i = TS.Sign(sk_i^TS, m) and ct_i ← SKE.Enc(K_i, Token_i).

d) Send (μ_i, ct_i) to the party P*.

• Output Computation: Party P* does the following to generate a token:

a) Recover K = TFHE.Combine({μ_i}_{i∈S}).

b) For each i ∈ S, do the following:

i. Compute K_i = H(K, i).

ii. Recover Token_i = SKE.Dec(K_i, ct_i).

c) Compute Token ← TS.Combine({Token_i}_{i∈S}).

d) Output Token if TS.Verify(vk_TS, m, Token) = 1. Else, output ⊥.

[0180] A first electronic device P* can encrypt the input vector u (e.g., the biometric measurement vector) with the public key pk of the threshold fully homomorphic encryption scheme to generate an encrypted biometric measurement ciphertext ct*. The first electronic device can send the encrypted biometric measurement ct* and the message m to each of the other electronic devices. Each of the other electronic devices can compute a partial signature computation σ'_i over the ciphertext ct* with the secret key share sk'_i. Each electronic device can send the partial signature computation σ'_i to the first electronic device. The first electronic device can send all of the partial signature computations (σ'_1, ..., σ'_t) to all of the other electronic devices.

[0181] Each of the other electronic devices can verify each of the partial signature computations (σ'_1, ..., σ'_t) with the ciphertext ct* and the received verification keys (vk'_1, ..., vk'_n). If any of the partial signature computations are not verified (e.g., Verify(vk'_i, ct*, σ'_i) ≠ 1), the electronic device can output ⊥, indicating an error. An unverified signature can indicate that one (or more) of the electronic devices did not compute the partial signature computation correctly, and thus may be compromised or fraudulent. After verifying the partial signature computations, each of the other electronic devices can evaluate the ciphertexts ct*, ct_0 and ct_1 to generate a new ciphertext ct. Evaluating the ciphertexts may include evaluating a circuit C that computes a distance measure between the template w (in ciphertext ct_0) and the measurement u (in ciphertext ct*). If the distance measure is less than a threshold d, the circuit C can output the string K (in ciphertext ct_1). Each of the other electronic devices can then compute a partial decryption μ_i of the ciphertext ct. A partial threshold signature token Token_i can be generated by each of the other electronic devices using the secret key share sk_i^TS and the message m. A ciphertext ct_i can be computed as a secret key encryption of the partial threshold signature token Token_i under the hash K_i. The partial decryption μ_i and the ciphertext ct_i can then be sent to the first electronic device.

[0182] The first electronic device can combine the partial decryptions (μ_1, ..., μ_t) to recover the string K. Then the first electronic device can compute the hash K_i for each electronic device i using the string K and the hash function H, and use the hash K_i to decrypt the secret key encryption ciphertext ct_i to recover the partial threshold signature token Token_i. The first electronic device can then combine the received partial threshold signature tokens Token_i to compute a signature token Token. If the first electronic device can verify the token Token against the message m, the first electronic device can output the token Token; otherwise, the first electronic device can output ⊥.

3. Token Verification

[0183] Given a verification key vk_TS, a message m and a token Token, the token verification algorithm outputs 1 if TS.Verify(vk_TS, m, Token) outputs 1.

[0184] Correctness: The correctness of the protocol directly follows from the correctness of the underlying primitives.

B. Security Proof

[0185] In this section, we formally prove Theorem 3.

[0186] Consider an adversary A who corrupts t* parties, where t* < t. The strategy of the simulator Sim for our protocol πAnyTFHE against the adversary A is sketched below.

1. Description of Simulator

[0187] Registration Phase: On receiving the first query of the form (“Register”, sid) from a party Pi, the simulator Sim receives the message (“Register”, sid, Pi) from the ideal functionality FDiFuz, which it forwards to the adversary A.

[0188] SignOn Phase: Case 1 - Honest Party as P*. Suppose that in some session with id sid, an honest party P* uses an input vector u and a message m for which it wants a token by interacting with a set S consisting of t parties, some of which could be corrupt. Sim gets the tuple (m, S) from the ideal functionality FDiFuz and interacts with the adversary A.

[0189] In the first round of the sign-on phase, Sim sends encryptions of 0 under the threshold FHE scheme to each malicious party in the set S. Note that this is indistinguishable from the real world by the CPA security of the threshold FHE scheme. In the subsequent rounds, it receives messages from the adversary A on behalf of the corrupt parties in S and also sends messages to the corrupt parties in the set S.

[0190] Sim also issues a query of the form (“SignOn”, sid, msg, Pt) for t ∈ [n] to the ideal functionality FDiFuz, and proceeds as follows:

- If the ideal functionality FDiFuz responds with (“Sign”, sid, msg, Pt), then Sim chooses a signature σ, responds to FDiFuz with (“Signature”, msg, sid, Pt, σ), and sends (σ, msg) to the adversary A.

- On the other hand, if the ideal functionality FDiFuz responds with (“SignOn failed”, msg), then the simulator Sim aborts.

[0191] SignOn Phase: Case 2 - Malicious Party as P*: Suppose that in some session with id sid, a malicious party P* uses an input vector u and a message m for which it wants a token by interacting with a set S consisting of t parties, some of which could be corrupt. Sim again gets the tuple (m, S) from the ideal functionality FDiFuz and interacts with the adversary A.

[0192] The simulator Sim receives the measurement u from the adversary A on behalf of the corrupt party P*, and issues a query of the form (“Test Password”, sid, u) to the ideal functionality FDiFuz. It forwards the corresponding response from FDiFuz to the adversary A.

[0193] In the first round of the sign-on phase, Sim receives ciphertexts under the threshold FHE scheme from the adversary A on behalf of each honest party in the set S. In the subsequent rounds, it sends messages to the adversary A on behalf of the honest parties in S and also receives messages from the adversary A on behalf of the honest parties in S.

[0194] Sim also issues a query of the form (“SignOn”, sid, msg, Pt) for t ∈ [n] to the ideal functionality FDiFuz and proceeds as follows:

- If the ideal functionality FDiFuz responds with (“Sign”, sid, msg, Pt), then Sim chooses a signature σ, responds to FDiFuz with (“Signature”, msg, sid, Pt, σ), and sends (σ, msg) to the adversary A.

- On the other hand, if the ideal functionality FDiFuz responds with (“SignOn failed”, msg), then the simulator Sim aborts.

VII. COSINE SIMILARITY AND EUCLIDEAN DISTANCE

[0195] In this section, we show how to construct an efficient four round secure fuzzy threshold token generation protocol in the Random Oracle model for the Euclidean Distance and Cosine Similarity distance measures. Our token generation protocol satisfies Definition 7 for any n with threshold t = 3 and is secure against a malicious adversary that can corrupt any one party. We first focus on the Cosine Similarity distance measure. At the end of the section, we explain how to extend our result to Euclidean Distance as well.

[0196] Formally we show the following theorem:

[0197] Theorem 3 Assuming the existence of threshold signatures, a threshold secret sharing scheme, two-message oblivious transfer in the CRS model that is secure against malicious adversaries, garbled circuits, circuit-private additively homomorphic encryption, secret key encryption, and non-interactive zero knowledge arguments for the NP languages L1, L2 defined in Section IV.E, there exists a four round secure fuzzy threshold token generation protocol satisfying Definition 7 with respect to the Cosine Similarity distance function. The protocol works for any n, for threshold t = 3, and is secure against a malicious adversary that can corrupt any one party.

[0198] We know how to build two-message OT in the CRS model assuming the DDH/LWE/Quadratic Residuosity/Nth Residuosity assumption [21, 22, 23, 24]. The Paillier encryption scheme [14] is an example of a circuit-private additively homomorphic encryption scheme from the Nth Residuosity assumption. As shown in Section IV.E, we can also build NIZK arguments for the languages L1, L2 from the Nth Residuosity assumption in the Random Oracle model. The other primitives can either be built without any assumption or just make use of the existence of one-way functions. Thus, instantiating the primitives used in the above theorem, we get the following corollary:

[0199] Corollary 4 Assuming the hardness of the Nth Residuosity assumption, there exists a four round secure fuzzy threshold token generation protocol in the Random Oracle model satisfying Definition 7 with respect to the Cosine Similarity distance function. The protocol works for any n, for threshold t = 3, and is secure against a malicious adversary that can corrupt any one party.

A. Construction

[0200] We first list some notation and the primitives used before describing our construction.

[0201] Let d denote the threshold value for the Cosine Similarity function. Let (Share, Recon) be a (2, n) threshold secret sharing scheme. Let the n parties be denoted by P1, ..., Pn respectively. Let λ denote the security parameter. Let W denote the distribution from which the random vector is sampled. Let's assume the vectors are of length ℓ, where ℓ is a polynomial in λ. Each element of this vector is an element of a field F over some large prime modulus q.
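For concreteness, a (2, n) threshold sharing of the kind assumed here can be sketched with a degree-1 Shamir scheme over a prime field. This is an illustrative sketch: the modulus Q and the (Share, Recon) interface below are placeholders, not the parameters fixed by the construction.

```python
import random

Q = 2**61 - 1  # a large prime modulus, standing in for the field F

def share(secret: int, n: int) -> list[tuple[int, int]]:
    # (2, n) Shamir sharing: evaluate a random line f(x) = secret + a*x at
    # n points; any two points recover the secret, one point reveals nothing.
    a = random.randrange(Q)
    return [(i, (secret + a * i) % Q) for i in range(1, n + 1)]

def recon(p1: tuple[int, int], p2: tuple[int, int]) -> int:
    # Lagrange interpolation at x = 0 from two shares.
    (x1, y1), (x2, y2) = p1, p2
    l1 = (-x2) * pow(x1 - x2, -1, Q)
    l2 = (-x1) * pow(x2 - x1, -1, Q)
    return (y1 * l1 + y2 * l2) % Q

shares = share(1234567, n=5)
assert recon(shares[0], shares[3]) == 1234567
```

In the protocol, each coordinate of the template vector would be shared this way, so any two of the participating devices jointly determine the template while a single device learns nothing about it.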

[0202] Let TS = (TS.Gen, TS.Sign, TS.Combine, TS.Verify) be a threshold signature scheme. Let (SKE.Enc, SKE.Dec) denote a secret key encryption scheme. Let (Garble, Eval) denote a garbling scheme for circuits. Let Sim.Garble denote the simulator for this scheme. Let OT = (OT.Setup, OT.Round1, OT.Round2, OT.Output) be a two-message oblivious transfer protocol in the CRS model. Let OT.Sim denote the simulator. Let OT.Sim.Setup denote the algorithm used by the simulator to generate a simulated CRS, and OT.Sim.Round2 the algorithm used to generate the second round message against a malicious receiver.

[0203] Let AHE = (AHE.Setup, AHE.Enc, AHE.Add, AHE.ConstMul, AHE.Dec) be the algorithms of a circuit-private additively homomorphic encryption scheme. Let (NIZK.Prove, NIZK.Verify) denote a non-interactive zero knowledge argument of knowledge system in the RO model. Let RO denote the random oracle. Let NIZK.Sim denote the simulator of this argument system and let NIZK.Ext denote the extractor. Let PRF denote a pseudorandom function that takes inputs of length ℓ.

[0204] We now describe the construction of our four round secure fuzzy threshold token generation protocol πCS for Cosine Similarity.

1. Registration

[0205] In the registration phase, the following algorithm is executed by a trusted authority. The trusted authority may be a device that will participate in subsequent biometric matching and signature generation. If the trusted authority will participate in biometric matching and signature generation, it can delete the information that it should not have at the end of the registration phase. That is, it should delete the shares corresponding to the other devices.

[0206] - Sample a random vector w from the distribution W. For simplicity, let's assume that the L2-norm of w is 1.

- Compute (ppTS, vkTS, sk1TS, ..., sknTS) ← TS.Gen(1^λ, n, t).

- For each i ∈ [n], give (ppTS, vkTS, skiTS) to party Pi.

- For each i ∈ [n], do the following:

* Compute (wi, vi) ← Share(1^λ, w, n, 2).

* Compute (pki, ski) ← AHE.Setup(1^λ).

* Let wi = (wi,1, ..., wi,ℓ). For all j ∈ [ℓ], compute cti,j = AHE.Enc(pki, wi,j; rwi,j).

* Give (wi, ski, pki, {cti,j, rwi,j}j∈[ℓ]) to party Pi. Give (vi, pki, {cti,j}j∈[ℓ]) to all the other parties.

[0207] The trusted device samples a biometric template w from a distribution W. The distribution W can model the raw biometric data that is collected by a biometric sensor. For simplicity, assume that w is normalized. The trusted device then generates a public key, shares of a secret key, and any other needed public parameters for a threshold signature scheme, following any suitable key sharing algorithm. Examples of key sharing algorithms include Shamir's secret sharing.

[0208] The ith device, of the n total devices, receives the public parameters ppTS, the public key vkTS, and the ith share of the secret key, skiTS, for the threshold signature scheme. Then, the trusted device can compute (wi, vi), where wi is the ith share of the biometric template. The trusted device can also generate a public key and secret key for an additively homomorphic encryption scheme, (pki, ski). Then the trusted device can encrypt each of the ℓ components of wi as cti,j. The ith device receives its share of the template, the secret key, the public key, and the encrypted values, as well as the random factors used in the encryption. All of the other devices receive the complement of the template share, the public key, and the encrypted template, without the random factors. Thus by the end of the registration phase, each device has the full public key and the full encrypted template.
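The normalization assumed for the template w is a small but easy-to-miss preprocessing step; it can be sketched as follows (the function name is illustrative):

```python
import math

def normalize(w: list[float]) -> list[float]:
    # Scale the raw biometric template to unit L2 norm, as assumed in the
    # registration phase; cosine similarity then reduces to an inner product.
    norm = math.sqrt(sum(x * x for x in w))
    return [x / norm for x in w]

w = normalize([3.0, 4.0])
assert w == [0.6, 0.8]
assert abs(sum(x * x for x in w) - 1.0) < 1e-9
```

With both the template and later measurements normalized, the protocol only ever needs to compare an inner product against the threshold d.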

2. Setup

[0209] Setup can also be done by a trusted authority. The trusted authority may be the same trusted device that completed the registration. If the trusted authority will participate in biometric matching and signature generation, it can delete the information that it should not have at the end of the setup phase. That is, it should delete the keys corresponding to the other devices.

[0210] For each i ∈ [n], the setup algorithm does the following:

- Generate crsi ← OT.Setup(1^λ).

- Generate random keys (ki,a, ki,b, ki,c, ki,d, ki,p, ki,q, ki,z, ki,gc, ki,enc, ki,ot) for the PRF.

- Give (crsi) to party Pi.

- Give (crsi, ki,a, ki,b, ki,c, ki,d, ki,p, ki,q, ki,z, ki,gc, ki,enc, ki,ot) to all other parties.

[0211] In the setup phase, the trusted device can generate and distribute a number of keys that can be used later for partial computations. The trusted device can generate a common reference string for the oblivious transfer setup, and a plurality of random keys for a pseudorandom function. The ith device receives the ith crs, and all other parties receive the ith crs and the ith set of random keys.
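The role of these setup keys can be sketched as follows: any keyed PRF lets two devices derive identical per-session values from the same incoming message. HMAC-SHA256 is used here purely as an illustrative PRF, and the key and message bytes are hypothetical; the construction only assumes some PRF.

```python
import hashlib
import hmac

P = 2**61 - 1  # illustrative modulus for the derived field elements

def prf(key: bytes, msg: bytes) -> int:
    # HMAC-SHA256 as an illustrative PRF mapping (key, message) into F.
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % P

# Both P_j and P_k hold the same setup key k_{i,a}, so from the same
# round-1 message msg1 they derive the same one-time value a.
k_a = b"k_i,a (hypothetical setup key)"
msg1 = b"round-1 message from P_i"
a_at_Pj = prf(k_a, msg1)
a_at_Pk = prf(k_a, msg1)
assert a_at_Pj == a_at_Pk
```

This is why the protocol can later demand that the two helper devices send identical tuples: given the same keys and the same msg1, honest devices compute identical pseudorandom values.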

3. SignOn

[0212] In the SignOn phase, consider a party Pi that uses an input vector u and a message m on which it wants a token. Pi picks two other parties Pj and Pk and interacts with them in the four round protocol below. In the SignOn phase, a biometric measurement is matched to a template and a signature is generated for an authentication challenge. A primary device, Pi, selects two other devices of the n total devices, Pj and Pk, to participate. The SignOn phase happens with four rounds of communication. The primary device has a measured biometric u and a message m on which it wants a token. The arrowhead on Round 1 can denote that in this round messages are outgoing from party Pi.

[0213] Round 1: (Pi →) Party Pi does the following:

i. Let S = {Pj, Pk}.

ii. Let u = (u1, ..., uℓ).

iii. For each j ∈ [ℓ], compute the following:

• ct1,j = AHE.Enc(pki, uj; r1,j).

• π1,j ← NIZK.Prove(st1,j, wit1,j) for the statement st1,j = (ct1,j, pki) ∈ L1 using witness wit1,j = (uj, r1,j).

• ct2,j = AHE.ConstMul(pki, ct1,j, uj; r2,j).

• π2,j ← NIZK.Prove(st2,j, wit2,j) for the statement st2,j = (ct1,j, ct1,j, ct2,j, pki) ∈ L2 using witness wit2,j = (uj, r1,j, uj, r1,j, r2,j).

• ct3,j = AHE.ConstMul(pki, ct1,j, wi,j; r3,j).

• π3,j ← NIZK.Prove(st3,j, wit3,j) for the statement st3,j = (ct1,j, cti,j, ct3,j, pki) ∈ L2 using witness wit3,j = (wi,j, rwi,j, r3,j).

iv. To both parties in S, send msg1 = (S, m, {ct1,j, ct2,j, ct3,j, π1,j, π2,j, π3,j}j∈[ℓ]).

[0214] Round 1: For each component of the measurement vector, Pi makes a series of computations and non-interactive zero knowledge proofs on those computations. First it encrypts the component and generates a proof that the encryption is valid. Then it homomorphically computes the product of the component with itself, as part of the proof that u is normalized. Finally, it computes the product of the component of u with the corresponding component of wi, and generates a proof that the multiplication is valid. Then Pi sends the message, the computed values, and the proofs to the other two devices.

[0215] Round 2: (→ Pi) Both parties Pj and Pk do the following:

i. Abort if any of the proofs {π1,j, π2,j, π3,j}j∈[ℓ] don't verify.

ii. Generate the following randomness:

• a = PRF(ki,a, msg1), b = PRF(ki,b, msg1),

• c = PRF(ki,c, msg1), d = PRF(ki,d, msg1),

• p = PRF(ki,p, msg1), q = PRF(ki,q, msg1),

• rz = PRF(ki,z, msg1).

iii. Using the algorithms of AHE and using randomness PRF(ki,enc, msg1), compute ctx,1, ctx,2, cty,1, cty,2, ctz,1, ctz,2 as encryptions of the following:

• x1 = ⟨u, wi⟩, x2 = (a · x1 + b)

• y1 = ⟨u, u⟩, y2 = (c · y1 + d)

• z1 = (⟨u, vi⟩ + rz), z2 = (p · z1 + q)

iv. Send msg2 = (ctx,2, cty,2, ctz,1, ctz,2) to Pi.

[0216] Round 2: Both parties Pj and Pk verify the proofs. If any proofs aren't valid, they can abort the protocol. This ensures that Pi is trustworthy and did not send an invalid value for u to attempt to force a match. Then each device uses the random keys provided in the Setup phase to generate pseudorandom values a, b, c, d, p, q, and rz. Because they received the same random keys associated with a message from Pi, they will generate the same random values.

[0217] They can then use the additively homomorphic encryption algorithms to compute some values, and one-time message authentication codes (MACs) for those values. The values are: the inner product of the measurement with Pi's share of the template (x1), the inner product of the measurement with itself (y1), and the inner product of the measurement with the complement share, plus the random value rz (z1). The associated MACs are x2, y2, and z2. The inner product with Pi's share of the template can be computed because that share was sent component-wise as ciphertexts, and the inner product on the full vector can be reconstructed through homomorphic addition. The one-time MACs provide another check on Pi. Even if Pi attempts to change the computed values to force a match, the MACs will no longer correspond to the computed values. Then each party sends everything except for x1 and y1 to Pi.
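The one-time MAC idea can be sketched as follows; the constants are toy values standing in for the PRF-derived keys a and b, and the modulus is illustrative.

```python
Q = 2**61 - 1  # illustrative prime modulus

def mac(x: int, a: int, b: int) -> int:
    # One-time information-theoretic MAC: tag = a*x + b mod Q.
    return (a * x + b) % Q

a, b = 1234567891011, 98765432101  # derived from the shared setup-phase PRF keys
x1 = 424242                        # e.g. the inner product <u, w_i>
x2 = mac(x1, a, b)                 # the tag sent alongside the value

# P_i cannot alter x1 and keep the tag consistent without knowing a and b:
assert mac(x1, a, b) == x2
assert mac(x1 + 1, a, b) != x2
```

Because a and b are used only once per session and never revealed to Pi, any modified value passes the later check with probability at most 1/Q.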

[0218] Round 3: (Pi →) To each party in S, party Pi does the following:

i. Abort if the tuples sent by Pj and Pk in round 2 were not the same.

ii. Compute x1 = ⟨u, wi⟩, x2 = AHE.Dec(ski, ctx,2).

iii. Compute y1 = ⟨u, u⟩, y2 = AHE.Dec(ski, cty,2).

iv. Compute z1 = AHE.Dec(ski, ctz,1), z2 = AHE.Dec(ski, ctz,2).

v. Generate and send msg3 = {otrec_s,t}s∈{x,y,z},t∈{1,2}, where otrec_s,t = OT.Round1(crsi, st; rot_s,t) and each randomness rot_s,t is picked uniformly at random.

[0219] Round 3: Pi can compare the tuples sent by the other two parties. If they do not match, Pi can abort the protocol. This check is to ensure that Pj and Pk are trustworthy. Because they began with the same random keys, all of their calculations should be the same. We assume that the two devices did not collude to send invalid values, because the devices are presumed to only communicate with the primary device. Then Pi computes x1 and y1 for itself, and decrypts the messages containing z1, x2, y2, and z2. Then Pi sends an oblivious transfer message to each of the other two devices to pass its garbled inputs for the garbled circuit used in Round 4.

[0220] Round 4: (Pj → Pi) Party Pj does the following:

i. Compute rgc = PRF(ki,gc, msg1).

ii. Compute C̃ = Garble(C; rgc) for the circuit C described in FIG. 7.

iii. Derive the randomness {rot_s,t}s∈{x,y,z},t∈{1,2} from PRF(ki,ot, msg3).

iv. For each s ∈ {x, y, z} and each t ∈ {1, 2}, let lab0_s,t, lab1_s,t denote the labels of the garbled circuit C̃ corresponding to input wire st. Generate otsen_s,t = OT.Round2(crsi, lab0_s,t, lab1_s,t, otrec_s,t; rot_s,t).

v. Let otsen = {otsen_s,t}s∈{x,y,z},t∈{1,2}.

vi. Pick a random string Pad = PRF(ki,gc, msg3).

vii. Set OneCTj = SKE.Enc(Pad, TS.Sign(skjTS, m)).

viii. Send (C̃, otsen, OneCTj) to Pi.

[0221] Round 4: (Pk → Pi) Party Pk does the following:

i. Compute C̃, otsen, Pad exactly as done by Pj.

ii. Set OneCTk = SKE.Enc(Pad, TS.Sign(skkTS, m)).

iii. Send msg4 = (RO(C̃, otsen), OneCTk) to Pi.

[0222] Round 4: Both Pj and Pk can generate a garbled circuit, as shown in FIG. 7, and prepare to send it. Each device also generates the string Pad. The two devices should create the same garbled circuit and the same string Pad, because they have the same input parameters due to the shared randomness established in the Setup phase. The circuit first checks that each value/MAC pair agrees, and aborts if any pair does not match. It then computes the inner product. If the inner product is greater than the predetermined threshold, the circuit outputs the string Pad. If the inner product is not greater than the threshold, the circuit outputs a failure marker. The random constants used to check the MACs are hardwired into the circuit, so Pi cannot learn those values and forge a result.
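The plaintext logic that the garbled circuit of FIG. 7 evaluates can be sketched as follows. The share-recombination line is an assumption made for illustration (the exact recombination depends on the secret sharing scheme); the MAC checks and the threshold comparison follow the description above, and all constants are toy values.

```python
Q = 2**61 - 1  # illustrative prime modulus for the MAC arithmetic

def circuit_C(x1, x2, y1, y2, z1, z2, keys, r_z, threshold, pad):
    # Sketch of the circuit's logic: the MAC keys, r_z, the threshold and
    # Pad are "hardwired" (passed in here), so the evaluator P_i never
    # learns them from the garbled circuit.
    (a, b), (c, d), (p, q) = keys
    if x2 != (a * x1 + b) % Q: return None    # value/MAC pair for x1
    if y2 != (c * y1 + d) % Q: return None    # value/MAC pair for y1
    if z2 != (p * z1 + q) % Q: return None    # value/MAC pair for z1
    ip = x1 + z1 - r_z                        # assumed recombination of the
                                              # inner-product shares
    return pad if ip >= threshold else None   # match releases Pad, else fail

keys = [(11, 13), (17, 19), (23, 29)]
r_z, threshold, pad = 1000, 50, b"Pad"
x1, y1, z1 = 40, 1, 1030                      # ip = 40 + 1030 - 1000 = 70
x2, y2, z2 = (11 * x1 + 13) % Q, (17 * y1 + 19) % Q, (23 * z1 + 29) % Q
assert circuit_C(x1, x2, y1, y2, z1, z2, keys, r_z, threshold, pad) == b"Pad"
# A tampered value no longer matches its MAC, so the circuit aborts:
assert circuit_C(x1 + 1, x2, y1, y2, z1, z2, keys, r_z, threshold, pad) is None
```

In the protocol this function is evaluated only inside the garbling, so Pi learns nothing beyond Pad (on a match) or the failure marker.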

[0223] Then Pj computes a partial signature under the threshold signature scheme, and encrypts it under the secret key encryption scheme using the string Pad and its secret key share from the registration phase. Pk does the same with its own key share. Then Pj sends to Pi the garbled circuit and its partial computation. Pk sends its partial computation and a random oracle hash of the circuit.

[0224] Output Computation: Parties Pj, Pk output (m, Pi, S). Additionally, party Pi does the following to generate a token:

i. Let (C̃, otsen, OneCTj) be the message received from Pj and (msg4, OneCTk) be the message received from Pk.

ii. Abort if RO(C̃, otsen) ≠ msg4.

iii. For each s ∈ {x, y, z} and each t ∈ {1, 2}, compute lab_s,t = OT.Output(otsen_s,t) using the receiver randomness from round 3.

iv. Let lab = {lab_s,t}s∈{x,y,z},t∈{1,2}.

v. Compute Pad = Eval(C̃, lab).

vi. Compute Tokenj = SKE.Dec(Pad, OneCTj), Tokenk = SKE.Dec(Pad, OneCTk), Tokeni = TS.Sign(skiTS, m).

vii. Compute Token ← TS.Combine({Tokens}s∈{i,j,k}).

viii. Output Token if TS.Verify(vkTS, m, Token). Else, output ⊥.

[0225] Output Computation: Pi can now generate a token. It can check whether the hash of the circuit sent by Pj matches the hash sent by Pk, and if not, abort the computation. This ensures that the two other parties are behaving correctly. Pi can then compute the appropriate labels for the garbled circuit and evaluate the circuit to determine the string Pad (if there is a match). Using the string Pad, Pi can decrypt the partial computations sent by the other two parties and compute its own partial signature. Finally, Pi can combine the partial computations to create a complete token.
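The hash-comparison abort in step ii can be sketched as follows, with SHA-256 standing in for the random oracle RO; the serialized inputs are hypothetical placeholders.

```python
import hashlib

def ro(*parts: bytes) -> bytes:
    # SHA-256 standing in for the random oracle RO.
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

garbled_circuit = b"<serialized garbled circuit>"  # hypothetical placeholder
ot_sender_msgs = b"<serialized otsen>"             # hypothetical placeholder

# P_k sends msg4 = RO(garbled circuit, otsen); P_i aborts unless it matches
# the hash of what P_j actually sent.
msg4 = ro(garbled_circuit, ot_sender_msgs)
assert ro(garbled_circuit, ot_sender_msgs) == msg4
assert ro(b"<tampered circuit>", ot_sender_msgs) != msg4
```

This gives Pi a cheap consistency check: Pj and Pk must have produced the same garbled circuit and OT messages, without Pk re-sending them in full.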

4. Token Verification

[0226] Given a verification key vkTS, message m and token Token, the token verification algorithm outputs 1 if TS.Verify(vkTS, m, Token) outputs 1.

[0227] The correctness of the protocol directly follows from the correctness of the underlying primitives.

B. Security Proof

[0228] In this section, we formally prove Theorem 3.

[0229] Consider an adversary A who corrupts a party P*. The strategy of the simulator Sim for our protocol πCS against a malicious adversary A is described below. Note that the registration phase first takes place, at the end of which the simulator gets the values to be sent to P*, which it then forwards to A.

1. Description of Simulator

[0230] Setup: For each i ∈ [n], Sim does the following:

- Generate crsi ← OT.Sim.Setup(1^λ).

- Generate random keys (ki,a, ki,b, ki,c, ki,d, ki,p, ki,q, ki,z, ki,gc, ki,enc, ki,ot) for the PRF.

- If Pi = P*, give (crsi) to A.

- Else, give (crsi, ki,a, ki,b, ki,c, ki,d, ki,p, ki,q, ki,z, ki,gc, ki,enc, ki,ot) to A.

[0231] SignOn Phase: Case 1 - Honest Party as Pi

[0232] Suppose an honest party Pi uses an input vector u and a message m for which it wants a token by interacting with a set S of two parties, one of which is P*. The arrowhead in Round 1 can denote that in this round messages are outgoing from the simulator. Sim gets the tuple (m, S) from the ideal functionality FDiFuz and interacts with the adversary A as below:

• Round 1: (Sim →) Sim does the following:

(a) For each j ∈ [ℓ], compute the following:

* For each t ∈ {1, 2, 3}, ctt,j = AHE.Enc(pki, mt,j; rt,j) where (mt,j, rt,j) are picked uniformly at random.

* π1,j ← NIZK.Sim(st1,j) for the statement st1,j = (ct1,j, pki) ∈ L1.

* π2,j ← NIZK.Sim(st2,j) for the statement st2,j = (ct1,j, ct1,j, ct2,j, pki) ∈ L2.

* π3,j ← NIZK.Sim(st3,j) for the statement st3,j = (ct1,j, cti,j, ct3,j, pki) ∈ L2.

(b) Send msg1 = (S, m, {ct1,j, ct2,j, ct3,j, π1,j, π2,j, π3,j}j∈[ℓ]) to A.

• Round 2: (→ Sim) On behalf of the corrupt party P*, receive (ctx,2, cty,2, ctz,1, ctz,2) from the adversary.

• Round 3: (Sim →) Sim does the following:

(a) Abort if the ciphertexts (ctx,2, cty,2, ctz,1, ctz,2) were not correctly computed using the algorithms of AHE, the vectors distributed in the registration phase, and the randomness derived from the keys (ki,a, ki,b, ki,c, ki,d, ki,p, ki,q, ki,z, ki,enc).

(b) Generate and send msg3 = {otrec_s,t}s∈{x,y,z},t∈{1,2}, where otrec_s,t = OT.Round1(crsi, mot_s,t; rot_s,t) and (mot_s,t, rot_s,t) are picked uniformly at random.

• Round 4: (→ Sim) On behalf of the corrupt party P*, receive (C̃, otsen, OneCT) from the adversary.

• Message to Ideal Functionality FDiFuz: Sim does the following:

(a) Abort if (C̃, otsen) were not correctly computed using the respective algorithms and the randomness derived from (ki,ot, ki,gc).

(b) Else, instruct the ideal functionality FDiFuz to deliver output to the honest party Pi.

[0233] SignOn Phase: Case 2 - Malicious Party as Pi

[0234] Suppose a malicious party is the initiator Pi. Sim interacts with the adversary A as below:

• Round 1: (→ Sim) Sim receives msg1 = (S, m, {ct1,j, ct2,j, ct3,j, π1,j, π2,j, π3,j}j∈[ℓ]) from the adversary A on behalf of the two honest parties Pj, Pk.

• Round 2: (Sim →) Sim does the following:

(a) Message to Ideal Functionality FDiFuz:

i. Run the extractor NIZK.Ext on the proofs {π1,j, π2,j, π3,j}j∈[ℓ] to compute u.

ii. Query the ideal functionality FDiFuz with inpi = (m, u, S) to receive output outi.

(b) Generate (ctx,2, cty,2, ctz,1, ctz,2) as encryptions of random messages using public key pki and uniform randomness. Send them to A.

• Round 3: (→ Sim) Sim receives msg3 = {otrec_s,t}s∈{x,y,z},t∈{1,2} from the adversary A on behalf of both honest parties Pj and Pk.

• Round 4: (Sim →) Sim does the following:

(a) Pick a value Pad uniformly at random.

(b) If outi ≠ ⊥:

* Let outi = (Tokenj, Tokenk).

* Compute (C̃sim, labsim) ← Sim.Garble(Pad).

* Let labsim = {lab_s,t}s∈{x,y,z},t∈{1,2}.

* For each s ∈ {x, y, z} and each t ∈ {1, 2}, compute otsen_s,t = OT.Sim.Round2(crsi, otrec_s,t, lab_s,t).

* Compute otsen = {otsen_s,t}s∈{x,y,z},t∈{1,2}.

* Set OneCTj = SKE.Enc(Pad, Tokenj) and OneCTk = SKE.Enc(Pad, Tokenk).

(c) If outi = ⊥:

* Compute (C̃sim, labsim) ← Sim.Garble(⊥).

* Let labsim = {lab_s,t}s∈{x,y,z},t∈{1,2}.

* For each s ∈ {x, y, z} and each t ∈ {1, 2}, compute otsen_s,t = OT.Sim.Round2(crsi, otrec_s,t, lab_s,t).

* Compute otsen = {otsen_s,t}s∈{x,y,z},t∈{1,2}.

* Set OneCTj = SKE.Enc(Pad, rj) and OneCTk = SKE.Enc(Pad, rk), where rj and rk are picked uniformly at random.

(d) Send (C̃sim, otsen, OneCTj) and (RO(C̃sim, otsen), OneCTk) to A.

2. Hybrids

[0235] We now show that the above simulation strategy is successful against all malicious PPT adversaries. That is, the view of the adversary along with the output of the honest parties is computationally indistinguishable in the real and ideal worlds. We will show this via a series of computationally indistinguishable hybrids where the first hybrid Hyb0 corresponds to the real world and the last hybrid Hyb8 corresponds to the ideal world.

[0236] 1. Hyb0 - Real World: In this hybrid, consider a simulator SimHyb that plays the role of the honest parties as in the real world.

[0237] When the Honest Party is Pi:

[0238] 2. Hyb1 - Case 1: Aborts and Message to Ideal Functionality. In this hybrid, SimHyb aborts if the adversary's messages were not generated in a manner consistent with the randomness output in the setup phase, and also runs the query to the ideal functionality.

[0239] That is, SimHyb runs the “Message To Ideal Functionality” step as done by Sim after round 4 of Case 1 of the simulation strategy. SimHyb also performs the abort check in step (a) of round 3 of Case 1 of the simulation.

[0240] 3. Hyb2 - Case 1: Simulate NIZKs. In this hybrid, SimHyb computes simulated NIZK arguments in round 1 of Case 1, as done by Sim in the ideal world.

[0241] 4. Hyb3 - Case 1: Switch Ciphertexts. In this hybrid, SimHyb computes the ciphertexts in round 1 of Case 1 using random messages as done in the ideal world.

[0242] 5. Hyb4 - Case 1: Switch OT Receiver Messages. In this hybrid, SimHyb computes the OT receiver messages in round 3 of Case 1 using random inputs as done in the ideal world.

[0243] When the Corrupt Party is Pi:

[0244] 6. Hyb5 - Case 2: Message to Ideal Functionality. In this hybrid, SimHyb runs the “Message To Ideal Functionality” step as done by Sim in round 2 of Case 2 of the simulation strategy. That is, SimHyb queries the ideal functionality using the output of the extractor NIZK.Ext on the proofs given by A in round 1.

[0245] 7. Hyb6 - Case 2: Simulate OT Sender Messages. In this hybrid, SimHyb computes the CRS during the setup phase and the OT sender messages in round 4 of Case 2 using the simulator OT. Sim as done in the ideal world.

[0246] 8. Hyb7 - Case 2: Simulate Garbled Circuit. In this hybrid, SimHyb computes the garbled circuit and associated labels in round 4 of Case 2 using the simulator Sim.Garble as done in the ideal world.

[0247] 9. Hyb8 - Case 2: Switch Ciphertexts. In this hybrid, SimHyb computes the ciphertexts in round 2 of Case 2 using random messages as done in the ideal world. This hybrid corresponds to the ideal world.

[0248] We will now show that every pair of successive hybrids is computationally indistinguishable.

[0249] Lemma 5 Hyb0 is statistically indistinguishable from Hyb1.

[0250] Proof. When an honest party initiates the protocol as the querying party Pi, let's say it interacts with parties Pj and Pk such that Pj is corrupt. In Hyb0, on behalf of Pi, SimHyb checks that the messages sent by both parties Pj and Pk are the same and, if so, computes the output on behalf of the honest party. Since Pk is honest, this means that if the messages sent by both parties are indeed the same, the adversary A, on behalf of Pj, did generate those messages honestly using the shared randomness generated in the setup phase and the shared values generated in the registration phase.

[0251] In Hyb1, on behalf of Pi, SimHyb checks that the messages sent by the adversary on behalf of Pj were correctly generated using the shared randomness and shared values generated in the setup and registration phases and, if so, asks the ideal functionality to deliver output to the honest party. Thus, the switch from Hyb0 to Hyb1 is essentially only a syntactic change.

[0252] Lemma 6 Assuming the zero knowledge property of the NIZK argument system, Hyb1 is computationally indistinguishable from Hyb2.

[0253] Proof. The only difference between the two hybrids is that in Hyb1, SimHyb computes the messages of the NIZK argument system by running the honest prover algorithm NIZK.Prove(·), while in Hyb2, they are computed by running the simulator NIZK.Sim(·). Thus, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can design a reduction A_NIZK that can distinguish between real and simulated arguments with non-negligible probability, thus breaking the zero knowledge property of the NIZK argument system, which is a contradiction.

[0254] Lemma 7 Assuming the semantic security of the additively homomorphic encryption scheme AHE, Hyb2 is computationally indistinguishable from Hyb3.

[0255] Proof. The only difference between the two hybrids is that in Hyb2, SimHyb computes the ciphertexts in round 1 by encrypting the honest party's actual inputs (u, wi), while in Hyb3, the ciphertexts encrypt random messages. Thus, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can design a reduction A_AHE that can distinguish between encryptions of the honest party's actual inputs and encryptions of random messages with non-negligible probability, thus breaking the semantic security of the encryption scheme AHE, which is a contradiction.

[0256] Lemma 8 Assuming the security of the oblivious transfer protocol OT against a malicious sender, Hyb3 is computationally indistinguishable from Hyb4.

[0257] Proof. The only difference between the two hybrids is that in Hyb3, SimHyb computes the OT receiver's messages as done by the honest party in the real world, while in Hyb4, the OT receiver's messages are computed by using random messages as input. Thus, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can design a reduction A_OT that can distinguish between OT receiver's messages computed on the honest party's actual inputs and on random inputs with non-negligible probability, thus breaking the security of the oblivious transfer protocol OT against a malicious sender, which is a contradiction.

[0258] Lemma 9 Assuming the argument of knowledge property of the NIZK argument system, Hyb4 is computationally indistinguishable from Hyb5.

[0259] Proof. The only difference between the two hybrids is that in Hyb5, SimHyb also runs the extractor NIZK.Ext on the proofs given by the adversary to compute its input u. Thus, the only difference between the two hybrids arises if the adversary can produce a set of proofs {π1,j, π2,j, π3,j}j∈[ℓ] such that, with non-negligible probability, all of the proofs verify successfully, but SimHyb fails to extract u and hence SimHyb aborts.

[0260] However, we can show that if there exists an adversary A that can cause this to happen with non-negligible probability, we can design a reduction A_NIZK that breaks the argument of knowledge property of the system NIZK with non-negligible probability, which is a contradiction.

[0261] Lemma 10 Assuming the security of the oblivious transfer protocol OT against a malicious receiver, Hyb5 is computationally indistinguishable from Hyb6.

[0262] Proof. The only difference between the two hybrids is that in Hyb5, SimHyb computes the OT sender's messages by using the actual labels of the garbled circuit, as done by the honest party in the real world, while in Hyb6, the OT sender's messages are computed by running the simulator OT.Sim. In Hyb6, the crs in the setup phase is also computed using the simulator OT.Sim.

[0263] Thus, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can design a reduction A_OT that can distinguish the case where the crs and the OT sender's messages were generated by running the honest sender algorithm from the case where the crs and the OT sender's messages were generated using the simulator OT.Sim, with non-negligible probability, thus breaking the security of the oblivious transfer protocol OT against a malicious receiver, which is a contradiction.

[0264] Lemma 11 Assuming the correctness of the extractor NIZK.Ext and the security of the garbling scheme, Hyb6 is computationally indistinguishable from Hyb7.

[0265] Proof. The only difference between the two hybrids is that in Hyb6, SimHyb computes the garbled circuit by running the honest garbling algorithm Garble using honestly generated labels, while in Hyb7, SimHyb computes a simulated garbled circuit and simulated labels by running the simulator Sim.Garble on the value outi output by the ideal functionality. From the correctness of the extractor NIZK.Ext, we know that the output of the garbled circuit received by the evaluator (A) in Hyb6 is identical to the output outi of the ideal functionality used in the ideal world. Thus, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can design a reduction A_Garble that can distinguish an honestly generated garbled circuit and set of input wire labels from simulated ones with non-negligible probability, thus breaking the security of the garbling scheme, which is a contradiction.

[0266] Lemma 12 Assuming the circuit privacy property of the additively homomorphic encryption scheme AHE, Hyb7 is computationally indistinguishable from Hyb8.

[0267] Proof. The only difference between the two hybrids is that in Hyb7, SimHyb computes the ciphertexts sent in round 2 by performing the homomorphic operations on the adversary's well-formed ciphertexts sent in round 1 exactly as in the real world, while in Hyb8, SimHyb generates ciphertexts that encrypt random messages. Thus, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can design a reduction A_AHE that can break the circuit privacy of the circuit private additively homomorphic encryption scheme AHE, which is a contradiction.

C. Euclidean Distance

[0268] Recall that given two vectors u and w, the square of the Euclidean Distance EC.Dist between them relates to their Cosine Similarity CS.Dist as follows:

EC.Dist(u, w) = (⟨u, u⟩ + ⟨w, w⟩ − 2 · CS.Dist(u, w))

[0269] Thus, it is easy to observe that the above protocol and analysis easily extend to Euclidean Distance as well.
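The identity above can be checked numerically. The sketch below treats CS.Dist as the (unnormalized) inner product ⟨u, w⟩, as the construction does for normalized template vectors; the vectors are arbitrary illustrative values.

```python
# Sketch: verify EC.Dist(u, w) = <u,u> + <w,w> - 2*<u,w>, treating
# CS.Dist as the inner product (the case of normalized templates).
def inner(u, w):
    return sum(a * b for a, b in zip(u, w))

def ec_dist_sq(u, w):
    # squared Euclidean distance, computed directly
    return sum((a - b) ** 2 for a, b in zip(u, w))

u = [1.0, 2.0, 3.0]
w = [0.5, 1.5, -2.0]
lhs = ec_dist_sq(u, w)
rhs = inner(u, u) + inner(w, w) - 2 * inner(u, w)
assert abs(lhs - rhs) < 1e-9
```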

VIII. COSINE SIMILARITY USING DEPTH-1 THRESHOLD FHE

[0270] In this section, we show how to efficiently construct a fuzzy threshold token generation protocol for cosine similarity using any depth-1 FHE scheme with threshold decryption.

A. Construction

[0271] We first list some notation and the primitives used before describing our construction.

[0272] Let IP be a function that takes as input a template w and a measurement u and outputs the inner product ⟨w, u⟩, and let d denote the threshold inner-product value that denotes a match between w and u. Let the n parties be denoted by P_1, …, P_n respectively. Let λ denote the security parameter. Let W denote the distribution from which a random template vector is sampled. Let's assume the vectors are of length ℓ, where ℓ is a polynomial in λ. Let C be a circuit that takes as input a template w, a measurement u and some string K, and outputs K if the distance between w and u is below some (pre-defined) threshold; otherwise it outputs 0.
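The circuit C can be pictured as a simple key-release gate. The toy below is an illustrative sketch only: the threshold direction (inner product ≥ d counting as a match, consistent with cosine similarity) is an assumption, and the values are made up.

```python
# Illustrative sketch of the circuit C: release the string K only when
# the inner product <w, u> clears the match threshold d; otherwise
# output 0. The ">= d" direction is an assumption for cosine similarity.
def circuit_C(w, u, K, d):
    ip = sum(a * b for a, b in zip(w, u))
    return K if ip >= d else 0

K = "secret-key-string"
w = [0.6, 0.8]                                  # registered template
assert circuit_C(w, [0.6, 0.8], K, 0.9) == K    # close measurement
assert circuit_C(w, [-0.8, 0.6], K, 0.9) == 0   # far measurement
```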

[0273] Let TFHE = (TFHE.Gen, TFHE.Enc, TFHE.PartialDec, TFHE.Eval, TFHE.Combine) be a depth-1 threshold FHE scheme. Let TS = (TS.Gen, TS.Sign, TS.Combine, TS.Verify) be a threshold signature scheme. Let SKE = (SKE.Gen, SKE.Enc, SKE.Dec) denote a secret key encryption scheme. Let PKE = (PKE.Gen, PKE.Enc, PKE.Dec) denote a public key encryption scheme.

[0274] Let (Gen, Sign, Verify) be a strongly-unforgeable digital signature scheme. Let NIZK = (NIZK.Setup, NIZK.Prove, NIZK.Verify) denote a non-interactive zero knowledge argument. Let H be a collision-resistant hash function (modeled later as a random oracle).

[0275] We now describe the construction of our six round secure fuzzy threshold token generation protocol π for any n and k.

1. Registration

[0276] In the registration phase, the following algorithm is executed by a trusted authority:

• Sample a random w from the distribution W.

• Compute (pk, sk_1, …, sk_n) ← TFHE.Gen(1^λ, n, t).

• Compute (pp_TS, vk_TS, sk_1^TS, …, sk_n^TS) ← TS.Gen(1^λ, n, t).

• Compute (pk_PKE, sk_PKE) ← PKE.Gen(1^λ).

• Compute K ← SKE.Gen(1^λ).

• Compute crs ← NIZK.Setup(1^λ).

• Compute the ciphertexts

ct_0 ← TFHE.Enc(pk, w), ct_1 ← TFHE.Enc(pk, K).

• For each i ∈ [n], do the following:

a) Compute (sk'_i, vk'_i) ← Gen(1^λ).

b) Compute K_i = H(K, i).

c) Give the following to party P_i:

(pk, ct_0, ct_1, (vk'_1, …, vk'_n), sk'_i, pp_TS, vk_TS, sk_i^TS, K_i, crs).
[0277] A trusted authority (e.g., primary user device) can sample a biometric template w from a distribution of biometric measurements W. The trusted authority can also compute a public key pk and a plurality of private key shares sk_i for a threshold fully homomorphic encryption scheme TFHE, in addition to public parameters pp_TS, a verification key vk_TS, and a plurality of private key shares sk_i^TS for a threshold signature scheme TS. The trusted authority can also compute a public key pk_PKE and a secret key sk_PKE for a public key encryption scheme PKE, and a string K for a secret key encryption scheme SKE. Using the public key pk, the trusted authority can encrypt the biometric template w to form ciphertext ct_0 and encrypt the string K to form ciphertext ct_1.

[0278] Then, the trusted authority can compute a plurality of values for each electronic device i of n electronic devices (e.g., a first electronic device and other electronic devices). The trusted authority can compute a secret key share sk'_i and a verification key share vk'_i for a digital signature scheme. The trusted authority can also compute a hash K_i using the string K and a hash function H. The trusted authority can then send to each electronic device P_i the public key pk, the ciphertexts ct_0 and ct_1, verification key shares (vk'_1, …, vk'_n), secret key share sk'_i, public parameters pp_TS, verification key vk_TS, private key share sk_i^TS, and hash K_i.

2. SignOn Phase

[0279] In the SignOn phase, let's consider party P* that uses an input vector u and a message m on which it wants a token. P* interacts with the other parties in the below six round protocol. The arrowhead in round 1 can denote that in this round messages are outgoing from party P*.

• Round 1: (P* →) Party P* does the following:

a) Compute the ciphertext ct* = TFHE.Enc(pk, u).

b) Pick a set S consisting of t parties amongst P_1, …, P_n. For simplicity, without loss of generality, we assume that P* is also part of set S.

c) To each party P_i ∈ S, send (ct*, m).

• Round 2: (→ P*) Each party P_i ∈ S (except P*) does the following:

a) Compute the signature σ'_i = Sign(sk'_i, ct*).

b) Send σ'_i to the party P*.

• Round 3: (P* →) Party P* sends (σ'_1, …, σ'_n) to each party P_i.

• Round 4: (→ P*) Each party P_i ∈ S (except P*) does the following:

a) If there exists i ∈ [n] such that Verify(vk'_i, ct*, σ'_i) ≠ 1, then output ⊥.

b) Otherwise, evaluate the ciphertext

ct = TFHE.Eval(pk, IP, ct_0, ct*),

and compute a partial decryption of ct as:

μ_i = TFHE.PartialDec(sk_i, ct).

c) Send μ_i to the party P*.

• Round 5: (P* →) Party P* does the following:

a) Recover IP(w, u) = TFHE.Combine(μ_1, …, μ_n).

b) Compute C_0 ← PKE.Enc(pk_PKE, IP(w, u)).

c) For each i ∈ [n], compute C_i ← PKE.Enc(pk_PKE, μ_i).

d) Send to each party P_i the tuple

(C_0, C_1, …, C_n, π),

where π is a NIZK proof (generated using crs by the algorithm NIZK.Prove) for the following statement (denoted as g subsequently):

The ciphertext C_0 encrypts an inner product μ_0 and each ciphertext C_i encrypts some message μ_i for i ∈ [n] under the public key pk_PKE such that:

i. μ_0 < d.

ii. μ_0 = TFHE.Combine(μ_1, …, μ_n).

• Round 6: (→ P*) Each party P_i ∈ S (except P*) does the following:

a) If NIZK.Verify(crs, g, π) ≠ 1, then output ⊥.

b) Otherwise, compute a partial decryption of ct_1 as:

μ_{i,1} = TFHE.PartialDec(sk_i, ct_1).

c) Compute Token_i = TS.Sign(sk_i^TS, m) and ct_i ← SKE.Enc(K_i, Token_i).

d) Send (μ_{i,1}, ct_i) to the party P*.

• Output Computation: Party P* does the following to generate a token:

a) Recover K = TFHE.Combine(μ_{1,1}, …, μ_{n,1}).

b) For each i ∈ [n], do the following:

i. Compute K_i = H(K, i).

ii. Recover Token_i = SKE.Dec(K_i, ct_i).

c) Compute Token ← TS.Combine({Token_i}_{i∈S}).

d) Output Token if TS.Verify(vk_TS, m, Token) = 1. Else, output ⊥.

[0280] A first electronic device P* can encrypt the input vector u (e.g., the biometric measurement vector) with the public key pk of the threshold fully homomorphic encryption scheme to generate an encrypted biometric measurement ciphertext ct*. The first electronic device can send the encrypted biometric measurement ct* and the message m to each of the other electronic devices. Each of the other electronic devices can compute a partial signature computation σ'_i with the ciphertext ct* and the secret key share sk'_i. Each electronic device can send the partial signature computation σ'_i to the first electronic device. The first electronic device can send all of the partial signature computations (σ'_1, …, σ'_n) to all of the other electronic devices.

[0281] Each of the other electronic devices can verify each of the partial signature computations (σ'_1, …, σ'_n) with the ciphertext ct* and the received verification keys (vk'_1, …, vk'_n). If any of the partial signature computations are not verified (e.g., Verify(vk'_i, ct*, σ'_i) ≠ 1), the electronic device can output ⊥, indicating an error. An unverified signature can indicate that one (or more) of the electronic devices did not compute the partial signature computation correctly, and thus may be compromised or fraudulent.

After verifying the partial signature computations, each of the other electronic devices can evaluate the ciphertexts ct* and ct_0 to generate a new ciphertext ct. Evaluating the ciphertexts may include computing an inner product between the template w (in ciphertext ct_0) and the measurement u (in ciphertext ct*). The inner product can then be used, for example, to compute a cosine similarity distance measure or a Euclidean distance. Each of the other electronic devices can then compute a partial decryption μ_i of the ciphertext ct. The partial decryption μ_i can then be sent to the first electronic device.

[0282] The first electronic device can combine the partial decryptions (μ_1, …, μ_n) using threshold fully homomorphic encryption to recover the inner product IP(w, u) of the template w and the measurement u. The first electronic device can encrypt the inner product IP(w, u) with a public key encryption scheme to generate a ciphertext C_0, and encrypt each partial decryption μ_i to generate a plurality of ciphertexts C_i. The first electronic device can also generate a non-interactive zero knowledge proof π. The proof π can be a proof that C_0 encrypts an inner product μ_0 and each ciphertext C_i encrypts a value μ_i. The proof π can also state that the inner product (and thus the distance measure) is less than a threshold d, and that the inner product μ_0 is the result of the combination of the partial decryptions (μ_1, …, μ_n). The first electronic device can then send to each of the other electronic devices a tuple with the ciphertexts and the proof (C_0, C_1, …, C_n, π).

[0283] Each of the other electronic devices can verify the proof π. If the proof is not verified, the electronic device can output ⊥ and abort. If the proof π is verified, the electronic device can compute a partial decryption μ_{i,1} of ct_1, the encryption of the string K. A partial threshold signature token Token_i can be generated by each of the other electronic devices using the secret key share sk_i^TS and the message m. A ciphertext ct_i can be computed as a secret key encryption with the partial threshold signature token Token_i and the hash K_i. The partial decryption μ_{i,1} and the ciphertext ct_i can then be sent to the first electronic device.
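The token-share wrapping step described above can be sketched concretely. In this toy, sha256 stands in for the hash H, and a simple XOR stream stands in for the unspecified SKE scheme; both are illustrative assumptions, not the patent's instantiation.

```python
# Sketch: each device wraps its partial token Token_i under a
# per-device key K_i = H(K, i) derived from the shared string K.
import hashlib

def derive_key(K: bytes, i: int) -> bytes:
    # K_i = H(K, i), with sha256 standing in for H
    return hashlib.sha256(K + i.to_bytes(4, "big")).digest()

def ske_enc(key: bytes, msg: bytes) -> bytes:
    # toy XOR stream cipher standing in for SKE (msg up to 32 bytes)
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

ske_dec = ske_enc  # XOR stream: decryption = encryption

K = b"master-string-K"
token_share = b"Token_3"
ct = ske_enc(derive_key(K, 3), token_share)
# Only a holder of K can re-derive K_3 and unwrap the share:
assert ske_dec(derive_key(K, 3), ct) == token_share
```

Because the initiator only learns K after combining the partial decryptions of ct_1, it can unwrap the token shares only on a successful match.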

[0284] The first electronic device can homomorphically combine the partial decryptions (μ_{1,1}, …, μ_{n,1}) to recover the string K. Then the first electronic device can compute the hash K_i for each electronic device i using the string K and a hash function H, and use the hash K_i to decrypt the secret key encryption ciphertext ct_i to recover the partial threshold signature token Token_i. With the received partial threshold signature tokens Token_i, the first electronic device can combine them to compute a signature token Token. If the first electronic device can verify the token Token and the message m, the first electronic device can output the token Token; otherwise, the first electronic device can output ⊥.

3. Token Verification

[0285] Given a verification key vk_TS, message m and Token, the token verification algorithm outputs 1 if TS.Verify(vk_TS, m, Token) outputs 1.

B. Security Analysis

[0286] We prove informal Theorem 3 here. In fact, the proof is very similar to the proof of the general case using TFHE, so we omit the details and only provide a brief sketch. Note that this protocol realizes F_TTG with a slightly weaker guarantee as defined by the leakage functions. In particular, the leakage functions L_c, L_f, L_m return the exact inner product of the template w and the measurement u. This is because, in the protocol (viz. the SignOn phase), the initiator gets to know this value. In the simulation this comes up when the SignOn session is initiated by a corrupt party. In that case, the simulator issues a "Test Password" query on the input of the initiator, learns the inner product, and then sends it to the adversary. The other steps are mostly the same as those of the simulator of the generic protocol, and we therefore omit the details.

IX. HAMMING DISTANCE

[0287] In this section, we show how to construct an efficient two round secure fuzzy threshold token generation protocol in the Random Oracle model for the distance measure being Hamming Distance. Our token generation protocol satisfies Definition 7 for any n, t and is secure against a malicious adversary that can corrupt up to (t − 1) parties.

[0288] Formally, we show the following theorem:

[0289] Theorem 5 Assuming the existence of threshold signatures as defined in Section III.E, threshold linear secret sharing, robust linear secret sharing, secret key encryption, UC-secure threshold oblivious pseudorandom functions, and collision resistant hash functions, there exists a two round secure fuzzy threshold token generation protocol satisfying Definition 7 for the Hamming Distance function. The protocol works for any n, any threshold t, and is secure against a malicious adversary that can corrupt up to (t − 1) parties.

[0290] The threshold oblivious PRF can be built assuming the Gap Threshold One-More Diffie-Hellman assumption in the Random Oracle model [25]. The threshold signature schemes we use can be built assuming either DDH or RSA [16, 17]. Collision resistant hash functions are implied by the Random Oracle model. We will use Shamir's secret sharing scheme to instantiate the robust secret sharing scheme as described in Imported Lemma 1. The other primitives can be built either unconditionally or assuming just one way functions. Thus, instantiating the primitives used in the above theorem, we get the following corollary:

[0291] Corollary 6 Assuming the hardness of A ∈ {Gap-DDH, RSA} and Gap Threshold One-More Diffie-Hellman (Gap-TOMDH), there exists a two round secure fuzzy threshold token generation protocol in the Random Oracle model satisfying Definition 7 for the Hamming Distance function. The protocol works for any n, any threshold t, and is secure against a malicious adversary that can corrupt up to (t − 1) parties.

A. Construction

[0292] We first list some notation and the primitives used before describing our construction.

[0293] Let the n parties be denoted by P_1, …, P_n respectively. Let λ denote the security parameter. Let W denote the distribution from which the random vector is sampled. Let's assume the vectors are of length ℓ, where ℓ is a polynomial in λ. Each element of this vector is an element of a field F over some large prime modulus q. Let d denote the threshold value for the Hamming Distance function. That is, two vectors w and u of length ℓ each can be considered close if their Hamming Distance is at most (ℓ − d), that is, they are equal on at least d positions.
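The match condition above is straightforward to state in code; the vectors below are arbitrary illustrative values.

```python
# Sketch of the Hamming match condition: vectors w and u of length l
# "match" when they agree on at least d positions, i.e. their Hamming
# distance is at most (l - d).
def hamming_distance(w, u):
    return sum(1 for a, b in zip(w, u) if a != b)

def is_match(w, u, d):
    return hamming_distance(w, u) <= len(w) - d

w = [3, 1, 4, 1, 5, 9]
u = [3, 1, 4, 1, 6, 2]   # agrees with w on 4 of 6 positions
assert is_match(w, u, d=4)
assert not is_match(w, u, d=5)
```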

[0294] Let (Share, Recon) be a (n, t) linear secret sharing scheme. Let (RSS.Share, RSS.Recon, Thres.Recon) be a robust linear secret sharing scheme as defined in Appendix A. That is, the secret can be reconstructed by running algorithm Thres.Recon given exactly d honestly generated shares, or by running algorithm RSS.Recon given a collection of ℓ shares of which (ℓ + d)/2 are honestly generated. Let TS = (TS.Gen, TS.Sign, TS.Combine, TS.Verify) be the threshold signature scheme of Boldyreva [16]. We note that a similar construction also works for other threshold signature schemes. Without loss of generality and for simplicity, we present our scheme here using the construction of Boldyreva [16].

[0295] Let (SKE.Enc, SKE.Dec) denote a secret key encryption scheme. Let TOPRF = (TOPRF.Setup, TOPRF.Encode, TOPRF.Eval, TOPRF.Combine) denote a UC-secure threshold oblivious pseudorandom function in the Random Oracle (RO) model. Let TOPRF.Sim = (TOPRF.Sim.Encode, TOPRF.Sim.Eval, TOPRF.Sim.Ext) denote the simulator of this scheme, where the algorithm TOPRF.Sim.Encode is used to generate simulated encodings and TOPRF.Sim.Eval is used to generate simulated messages on behalf of algorithm TOPRF.Eval. Algorithm TOPRF.Sim.Ext extracts the message being encoded based on the queries made to RO during the TOPRF.Combine phase. We model a collision resistant hash function H via a Random Oracle.
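The blind-evaluation core underlying DH-based OPRFs (the style of construction behind such TOPRFs) can be sketched in a toy group. This is an illustrative single-server sketch under assumed toy parameters (p = 23 with an order-11 subgroup); a threshold variant would secret-share the key k across servers.

```python
# Toy blind evaluation of a DH-based OPRF: the client blinds H(x), the
# server exponentiates by its key k, and the client unblinds to get
# H(x)^k without revealing x.
import hashlib

p, q, g = 23, 11, 4  # g generates the order-11 subgroup mod 23 (toy)

def hash_to_group(x: bytes) -> int:
    # map x to a subgroup element via a nonzero exponent of g
    e = 1 + int.from_bytes(hashlib.sha256(x).digest(), "big") % (q - 1)
    return pow(g, e, p)

def encode(x: bytes, r: int) -> int:     # blind (cf. TOPRF.Encode)
    return pow(hash_to_group(x), r, p)

def evaluate(k: int, c: int) -> int:     # server side (cf. TOPRF.Eval)
    return pow(c, k, p)

def combine(z: int, r: int) -> int:      # unblind (cf. TOPRF.Combine)
    return pow(z, pow(r, -1, q), p)

k, r, x = 7, 5, b"w_i"
out = combine(evaluate(k, encode(x, r)), r)
assert out == pow(hash_to_group(x), k, p)   # equals H(x)^k
```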

[0296] We now describe the construction of our two round secure fuzzy threshold token generation protocol for Hamming Distance.

1. Registration

[0297] In the registration phase, the following algorithm is executed by a trusted authority:

- Compute (pp_TS, vk_TS, sk_1^TS, …, sk_ℓ^TS) ← TS.Gen(1^λ, ℓ, d). Recall that (sk_1^TS, …, sk_ℓ^TS) is generated by running RSS.Share(sk_TS, ℓ, d, (ℓ + d)/2).

- For each i ∈ [ℓ], compute (sk_i^1, …, sk_i^n) ← Share(sk_i^TS, n, t).

- For each j ∈ [n], give (pp_TS, vk_TS, sk_1^j, …, sk_ℓ^j) to party P_j.

- Sample a random vector w from the distribution W. Let w = (w_1, …, w_ℓ).

- Compute (pp_TOPRF, sk_1^TOPRF, …, sk_n^TOPRF) ← TOPRF.Setup(1^λ, n, t). Let sk_TOPRF denote the combined key of the TOPRF.

- For each j ∈ [n], give sk_j^TOPRF to P_j.

- For each i ∈ [ℓ], do the following:

* Compute h_i = TOPRF(sk_TOPRF, w_i).

* For each j ∈ [n], compute h_{i,j} = H(h_i || j). Give h_{i,j} to party P_j.
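The per-position key material above has a useful property: a signer who later recomputes h_i from a matching measurement u_i = w_i re-derives exactly the registered keys h_{i,j}, while a differing position yields unrelated keys. In this illustrative sketch, sha256 stands in for both the TOPRF and the random-oracle hash H.

```python
# Sketch: per-position keys h_ij = H(h_i || j), where h_i is the PRF
# output on template position w_i. Matching positions re-derive the
# same key; mismatched positions do not.
import hashlib

def toprf(sk: bytes, x: bytes) -> bytes:   # illustrative PRF stand-in
    return hashlib.sha256(sk + x).digest()

def derive(h_i: bytes, j: int) -> bytes:   # h_ij = H(h_i || j)
    return hashlib.sha256(h_i + bytes([j])).digest()

sk = b"combined-toprf-key"
w_i, u_match, u_miss = b"5", b"5", b"7"
registered = derive(toprf(sk, w_i), j=2)
assert derive(toprf(sk, u_match), j=2) == registered  # position matches
assert derive(toprf(sk, u_miss), j=2) != registered   # position differs
```

This is what lets the SignOn phase release a signature share y_{i,j} only for the positions where the measurement agrees with the template.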

2. Setup

[0298] The setup algorithm does nothing.

3. SignOn Phase

[0299] In the SignOn phase, let's consider party P* that uses an input vector u = (u_1, …, u_ℓ) and a message m on which it wants a token. P* interacts with the other parties in the below two round protocol.

- Round 1: (P* →) Party P* does the following:

i. Pick a set S consisting of t parties amongst P_1, …, P_n. For simplicity, without loss of generality, we assume that P* is also part of set S.

ii. For each i ∈ [ℓ], compute c_i = TOPRF.Encode(u_i; ρ_i) using randomness ρ_i.

iii. To each party P_j ∈ S, send (S, m, c_1, …, c_ℓ).

- Round 2: (→ P*) Each party P_j ∈ S (except P*) does the following:

i. Compute (r_{1,j}, …, r_{ℓ,j}) ← RSS.Share(0, ℓ, d, (ℓ + d)/2).

ii. For each i ∈ [ℓ], do:

• Compute TS.Sign(sk_i^j, m). It evaluates to H(m)^{sk_i^j}. Set y_{i,j} = H(m)^{(sk_i^j + r_{i,j})}.

• Compute z_{i,j} = TOPRF.Eval(sk_j^TOPRF, c_i).

• Compute ct_{i,j} = SKE.Enc(h_{i,j}, y_{i,j}).

• Send (z_{i,j}, ct_{i,j}) to party P*.

- Output Computation: Every party P_j ∈ S outputs (m, P*, S). Additionally, party P* does the following to generate a token:

i. For each i ∈ [ℓ], do:

• Compute h_i = TOPRF.Combine(u_i, {j, z_{i,j}}_{j∈S}, ρ_i).

• For each j ∈ S, compute h_{i,j} = H(h_i || j) and then compute y_{i,j} = SKE.Dec(h_{i,j}, ct_{i,j}).

• Let α_1, …, α_n be the reconstruction coefficients of the (n, t) linear secret sharing scheme used to secret share sk_i^TS. Compute Token_i = ∏_{j∈S} (y_{i,j})^{α_j}.

Strategy 1: At least (ℓ + d)/2 matches

i. Let β_1, …, β_ℓ be the reconstruction coefficients of the robust reconstruction algorithm RSS.Recon of the (ℓ, d, (ℓ + d)/2) linear robust secret sharing scheme used to secret share both sk_TS and 0 by each party in S.

ii. Compute Token = ∏_{i∈[ℓ]} (Token_i)^{β_i}.

iii. If TS.Verify(vk_TS, m, Token), output Token and stop.

iv. Else, output ⊥.

Strategy 2: Only d matches

i. For each set T ⊆ [ℓ] such that |T| = d, do:

• Compute Token = ∏_{i∈T} (Token_i)^{β_i}, where β_1, …, β_d are the reconstruction coefficients of Thres.Recon for the set T.

• If TS.Verify(vk_TS, m, Token), output Token and stop.

ii. Else, output ⊥.
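The "combine in the exponent" step used by both strategies can be sketched in a toy group: each token share is H(m)^{sk_j} for a Shamir share sk_j, and the full token H(m)^{sk} is recovered by raising shares to their Lagrange coefficients and multiplying. The tiny prime-order subgroup below is an illustrative stand-in for the pairing-friendly group of Boldyreva's scheme.

```python
# Toy Lagrange-in-the-exponent recombination of signature shares.
p = 1019   # prime; 1019 = 2*509 + 1, so exponents live mod q = 509
q = 509
h_m = 4    # stand-in for H(m); 4 generates the order-q subgroup mod p

def lagrange_at_zero(xs, i):
    # Lagrange coefficient for evaluation point xs[i], interpolated at 0
    num, den = 1, 1
    for j, xj in enumerate(xs):
        if j != i:
            num = num * (-xj) % q
            den = den * (xs[i] - xj) % q
    return num * pow(den, -1, q) % q

# signing key sk with a degree-1 (t = 2) Shamir sharing sk_j = sk + 7*j
sk = 123
shares = {j: (sk + 7 * j) % q for j in (1, 2, 3)}
partial = {j: pow(h_m, shares[j], p) for j in shares}  # H(m)^{sk_j}

xs = [1, 3]  # any t = 2 parties suffice
token = 1
for i, j in enumerate(xs):
    token = token * pow(partial[j], lagrange_at_zero(xs, i), p) % p
assert token == pow(h_m, sk, p)  # recovered token equals H(m)^{sk}
```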

4. Token Verification

[0300] Given a verification key vk_TS, message m and token Token, the token verification algorithm outputs 1 if TS.Verify(vk_TS, m, Token) outputs 1.

[0301] The correctness of the protocol directly follows from the correctness of the underlying primitives.

B. Security Proof

[0302] In this section, we formally prove Theorem 5.

[0303] Consider an adversary A who corrupts t* parties, where t* < t. The strategy of the simulator Sim for our protocol π_HD against a malicious adversary A is described below. Note that the registration phase first takes place, at the end of which the simulator gets the values to be sent to every corrupt party, which it then forwards to A.

1. Description of Simulator

[0304] SignOn Phase: Case 1 - Honest Party as P*

[0305] Suppose an honest party P* uses an input vector u = (u_1, …, u_ℓ) and a message m for which it wants a token by interacting with a set S consisting of t parties, some of which could be corrupt. Sim gets the tuple (m, S) from the ideal functionality F_DiFuz and interacts with the adversary A as below:

• Round 1: (Sim →) Sim does the following:

(a) For each i ∈ [ℓ], compute a simulated encoding c_i = TOPRF.Sim.Encode(1^λ; ρ_i) using randomness ρ_i.

(b) To each malicious party P_j ∈ S, send (S, m, c_1, …, c_ℓ).

• Round 2: (→ Sim) For each i ∈ [ℓ], on behalf of each corrupt party P_j ∈ S, receive (z_{i,j}, ct_{i,j}) from the adversary.

• Message to Ideal Functionality F_DiFuz: Sim does the following:

(a) For each corrupt party P_j, do:

* For every i ∈ [ℓ], abort if z_{i,j} ≠ TOPRF.Eval(sk_j^TOPRF, c_i).

* Compute y_{i,j} = SKE.Dec(h_{i,j}, ct_{i,j}).

* Let β_1, …, β_ℓ be the reconstruction coefficients of the robust reconstruction algorithm RSS.Recon of the (ℓ, d, (ℓ + d)/2) linear robust secret sharing scheme used to secret share 0.

* Compute ∏_{i∈[ℓ]} (y_{i,j})^{β_i}. Abort if this value is inconsistent with an honestly computed round 2 message.

(b) Instruct the ideal functionality F_DiFuz to deliver output to the honest party P*.

[0306] SignOn Phase: Case 2 - Malicious Party as P*

[0307] Suppose a malicious party is the initiator P*. Sim interacts with the adversary A as below:

• Round 1: (→ Sim) Sim receives (S, m, c_1, …, c_ℓ) from the adversary A on behalf of every honest party P_j ∈ S.

• Round 2: (Sim →) On behalf of every honest party P_j ∈ S, Sim does the following for each i ∈ [ℓ]:

(a) Compute z_{i,j} = TOPRF.Sim.Eval(c_i).

(b) Pick ciphertext ct_{i,j} uniformly at random.

(c) Send (z_{i,j}, ct_{i,j}) to the adversary A.

• Output Computation Phase: Sim does the following:

(a) Message to Ideal Functionality F_DiFuz:

i. For each i ∈ [ℓ], based on the adversary's queries to the oracle RO, run algorithm TOPRF.Sim.Ext to compute u_i*. We assume that the evaluator has to make all the RO calls in parallel to allow for extraction. This can be enforced in the protocol design and we avoid mentioning this explicitly to ease the exposition.

ii. Query the ideal functionality with input u* = (u_1*, …, u_ℓ*) to receive output out*.

(b) For each i ∈ [ℓ], let h_i* = TOPRF.Combine({z_{i,j}}_{j∈S}).

(c) On behalf of each honest party P_j, for each i ∈ [ℓ], set H(h_i* || j) such that SKE.Dec(h_{i,j}, ct_{i,j}) = H(m)^{R_{i,j}}, where R_{i,j} is chosen below.

(d) If out* = ⊥, do:

i. Every R_{i,j} is picked uniformly at random.

(e) If out* ≠ ⊥, for each honest party P_j, do:

i. Pick (r_{1,j}, …, r_{ℓ,j}) ← RSS.Share(0, ℓ, d, (ℓ + d)/2).

ii. Set R_{i,j} = (sk_i^j + r_{i,j}), where the sk_i^j values are picked such that the adversary's token output computation process results in output out*.

2. Hybrids

[0308] We now show that the above simulation strategy is successful against all malicious PPT adversaries. That is, the view of the adversary along with the output of the honest parties is computationally indistinguishable in the real and ideal worlds. We will show this via a series of computationally indistinguishable hybrids where the first hybrid Hyb0 corresponds to the real world and the last hybrid Hyb7 corresponds to the ideal world.

[0309] 1. Hyb0 - Real World: In this hybrid, consider a simulator SimHyb that plays the role of the honest parties as in the real world.

[0310] When Honest Party is P*:

[0311] 2. Hyb1 - Case 1: Simulate TOPRF Encoding. In this hybrid, SimHyb computes the round 1 message on behalf of P* by running the simulator TOPRF.Sim.Encode to compute the encoding c_i for each i ∈ [ℓ], as done in round 1 of the ideal world.

[0312] 3. Hyb2 - Case 1: Message to Ideal Functionality. In this hybrid, SimHyb runs the "Message To Ideal Functionality" step as done by Sim after round 2 of Case 1 of the simulation strategy, instead of computing the output as done by the honest party P* in the real world.

[0313] When Corrupt Party is P*:

[0314] 4. Hyb3 - Case 2: Message to Ideal Functionality. In this hybrid, SimHyb runs the "Message To Ideal Functionality" step as done by Sim after round 2 of Case 2 of the simulation strategy. That is, SimHyb runs the extractor TOPRF.Sim.Ext to compute u* and queries the ideal functionality with this.

[0315] 5. Hyb4 - Case 2: Simulate TOPRF Evaluation. In this hybrid, in round 2, SimHyb computes the TOPRF evaluation responses by running the algorithm TOPRF.Sim.Eval as done in the ideal world.

[0316] 6. Hyb5 - Case 2: out* = ⊥. In this hybrid, when the output from the ideal functionality out* = ⊥, on behalf of every honest party P_j, SimHyb sets the exponent of each y_{i,j} to be a uniformly random value R_{i,j} instead of (sk_i^j + r_{i,j}).

[0317] 7. Hyb6 - Case 2: out* = ⊥. In this hybrid, when the output from the ideal functionality out* = ⊥, instead of computing the ciphertext ct_{i,j} as before, SimHyb picks ct_{i,j} uniformly at random and responds to the Random Oracle query as in the ideal world to set the decrypted plaintext.

[0318] 8. Hyb7 - Case 2: out* ≠ ⊥. In this hybrid, when the output from the ideal functionality out* ≠ ⊥, SimHyb computes the ciphertexts ct_{i,j} and the responses to the RO queries exactly as in the ideal world. This hybrid corresponds to the ideal world.

[0319] We will now show that every pair of successive hybrids is computationally indistinguishable.

[0320] Lemma 13 Assuming the security of the threshold oblivious pseudorandom function TOPRF , Hyb0 is computationally indistinguishable from Hyb1.

[0321] Proof. The only difference between the two hybrids is that in Hyb0, SimHyb computes the value c_i for each i ∈ [ℓ] by running the honest encoding algorithm TOPRF.Encode, while in Hyb1 it computes them by running the simulated encoding algorithm TOPRF.Sim.Encode. Suppose there exists an adversary A that can distinguish between the two hybrids with non-negligible probability. We can use A to construct an adversary A_TOPRF that can distinguish between the real and simulated encodings with non-negligible probability and thus break the security of the TOPRF, which is a contradiction.

[0322] Lemma 14 Assuming the correctness of the threshold oblivious pseudorandom function TOPRF, correctness of the private key encryption scheme, and correctness of the robust secret sharing scheme, Hyb1 is computationally indistinguishable from Hyb2.

[0323] Proof. The difference between the two hybrids is that in Hyb2, SimHyb checks whether the adversary did indeed compute its messages honestly and, if not, aborts. However, in Hyb1, SimHyb aborts only if the output computation phase does not succeed. Thus, we can observe that, assuming the correctness of the primitives used, namely the threshold oblivious pseudorandom function TOPRF, the private key encryption scheme and the robust secret sharing scheme, Hyb1 is computationally indistinguishable from Hyb2.

[0324] Lemma 15 Assuming the correctness of the extractor TOPRF.Sim.Ext of the threshold oblivious pseudorandom function TOPRF, Hyb2 is computationally indistinguishable from Hyb3.

[0325] Proof. The only difference between the two hybrids is that in Hyb3, SimHyb also runs the extractor TOPRF.Sim.Ext based on the adversary's queries to the random oracle RO to extract the adversary's input u*. Thus, the only difference in the adversary's view arises if SimHyb aborts with non-negligible probability because the extractor TOPRF.Sim.Ext aborts with non-negligible probability. Thus, we can show that, assuming the correctness of the extractor TOPRF.Sim.Ext of the threshold oblivious pseudorandom function TOPRF, Hyb2 is computationally indistinguishable from Hyb3.

[0326] Lemma 16 Assuming the security of the threshold oblivious pseudorandom function TOPRF, Hyb3 is computationally indistinguishable from Hyb4.

[0327] Proof. The only difference between the two hybrids is that, for every honest party P_j, in Hyb3 SimHyb computes the value z_{i,j} for each i ∈ [ℓ] by running the honest evaluation algorithm TOPRF.Eval, while in Hyb4 it computes them by running the simulated evaluation algorithm TOPRF.Sim.Eval. Suppose there exists an adversary A that can distinguish between the two hybrids with non-negligible probability. We can use A to construct an adversary A_TOPRF that can distinguish between the real and simulated evaluation responses with non-negligible probability and thus break the security of the TOPRF, which is a contradiction.

[0328] Lemma 17 Assuming the correctness of the threshold oblivious pseudorandom function, security of the private key encryption scheme, security of the threshold linear secret sharing scheme and the security of the robust linear secret sharing scheme, Hyb4 is computationally indistinguishable from Hyb5.

[0329] Proof. In the scenario where the output from the ideal functionality out* = ⊥, we know that the adversary's input u* matches the vector w in at most (d − 1) positions. Therefore, from the correctness of the TOPRF scheme, for each honest party P_j, the adversary learns the decryption key for at most (d − 1) of the ℓ ciphertexts {ct_{i,j}}_{i∈[ℓ]}. Therefore, from the security of the secret key encryption scheme, the adversary learns at most (d − 1) of the ℓ plaintexts {y_{i,j}}_{i∈[ℓ]}.

[0330] Now, in Hyb4, for every party P_j, for every i ∈ [ℓ] for which the adversary learns the plaintext, the exponent in the plaintext y_{i,j} is of the form (sk_i^j + r_{i,j}), where r_{i,j} is a secret sharing of 0, while in Hyb5 the exponent is picked uniformly at random. Therefore, since the secret sharing scheme statistically hides the secret as long as at most (d − 1) shares are revealed, we can show that if there exists an adversary A that can distinguish between the two hybrids with non-negligible probability, we can use A to construct an adversary A_Share that can distinguish between a real set of (d − 1) shares and a set of (d − 1) random values with non-negligible probability, thus breaking the security of the secret sharing scheme, which is a contradiction.

[0331] Lemma 18 Hyb5 is statistically indistinguishable from Hyb6.

[0332] Proof. Notice that in going from Hyb5 to Hyb6, we only make a syntactic change. That is, instead of actually encrypting the desired plaintext at the time of encryption, we use the random oracle to program in the decryption key in such a way as to allow the adversary to decrypt and learn the same desired plaintext.

[0333] Lemma 19 Hyb6 is statistically indistinguishable from Hyb7.

[0334] Proof. In the scenario where the output from the ideal functionality out* ≠ ⊥, we know that the adversary's input u* matches the vector w in at least d positions. For every honest party P_j, the adversary learns the plaintexts {y_{i,j}} for every position i that has a match.

[0335] Now, in Hyb6, for every party P_j, for every i ∈ [ℓ] for which the adversary learns the plaintext, the exponent in the plaintext y_{i,j} is of the form (sk_i^j + r_{i,j}), where r_{i,j} is a secret sharing of 0. However, in Hyb7, the sk_i^j values in the exponent are picked not to be secret shares of the threshold signing key share sk_j^TS, but in such a manner that the adversary recovers the same output. There is no other difference between the two hybrids, and it is easy to observe that the difference is only syntactic; hence they are statistically indistinguishable.

X. A PROTOCOL USING SECURE SKETCHES

[0336] Secure sketches are fundamental building blocks of fuzzy extractors. Combining any information theoretic secure sketch with any threshold oblivious PRF, we construct an FTTG protocol. Due to the information theoretic nature of the secure sketch, this protocol has a distinct feature: the probability of succeeding in an offline attack stays the same irrespective of the computational power of the attacker or the number of brute-force trials. However, if a party initiates the protocol with a "close" measurement, it recovers the actual template. Also, for the same reason, the template cannot be completely hidden. In other words, this protocol has an inherent leakage on the template incurred by the underlying secure sketch instantiation. The first distinct characteristic is easily captured under the current definition by setting the function L_c to return the correct template and L_f, L_m to return ⊥ on all inputs. To capture the leakage from the template, we introduce a new query to the ideal functionality.
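A secure sketch for Hamming distance can be illustrated with the standard code-offset construction: publish s = w ⊕ Encode(r) for a random codeword, and given a close measurement u, decode u ⊕ s to correct the errors and re-derive w. This sketch is illustrative (it is not the patent's instantiation) and uses a 3x repetition code, which corrects one flipped bit per 3-bit block.

```python
# Toy code-offset secure sketch for binary Hamming distance.
import random

def encode(bits):                 # 3x repetition encode
    return [b for b in bits for _ in range(3)]

def decode(bits):                 # majority decode per 3-bit block
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

def sketch(w):
    # s = w XOR Encode(r) for a fresh random r; s is the public sketch
    r = [random.randint(0, 1) for _ in range(len(w) // 3)]
    return [wi ^ ci for wi, ci in zip(w, encode(r))]

def recover(u, s):
    # u XOR s = e XOR Encode(r); decoding removes the error pattern e
    noisy = [ui ^ si for ui, si in zip(u, s)]
    return [ci ^ si for ci, si in zip(encode(decode(noisy)), s)]

w = [1, 0, 1, 1, 1, 0, 0, 0, 1]   # template: three 3-bit blocks
s = sketch(w)
u = list(w); u[4] ^= 1            # one flipped bit per block is tolerated
assert recover(u, s) == w
```

The public value s necessarily reveals some information about w, which is exactly the inherent leakage the extended ideal functionality below accounts for.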

[0337] Consider another extra parameter, the leakage function L_tmp : F_q^ℓ → {0,1}^b. On receiving a query of the form (“Leak on Template”, sid, P_i) from S, first check whether there is a tuple (sid, P_i, w_i) recorded; if it is not found, then do nothing, otherwise reply with (“Leakage”, sid, P_i, L_tmp(w_i)) to S. The extended ideal functionality that is the same as F_FTTG except that it has this additional query is called F̂_FTTG.

[0338] Now we provide a protocol below that realizes F̂_FTTG with some reasonable leakage. For simplicity, we present the protocol only in a version secure against a semi-honest adversary. By adding generic NIZK proofs, it is possible to make it secure against a malicious adversary.
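Before presenting the protocol, it may help to see a secure sketch concretely. The following is a minimal sketch of the classic code-offset construction over bit vectors, using a simple repetition code; the parameters and helper names (encode, decode, sketch, recon) are illustrative assumptions, not part of the protocol below.

```python
# Toy code-offset secure sketch: Sketch(w) = w XOR C(k) for a random codeword
# C(k); Recon(u, s) decodes u XOR s and re-offsets by s. The repetition code
# and parameters below are illustrative assumptions, not part of the protocol.
import secrets

R = 5                                   # each data bit repeated R times
K = 4                                   # data bits; template length is K * R

def encode(bits):                       # repetition-code encoder
    return [b for b in bits for _ in range(R)]

def decode(bits):                       # majority-vote decoder
    return [1 if sum(bits[i * R:(i + 1) * R]) > R // 2 else 0
            for i in range(K)]

def sketch(w):                          # s = w XOR C(k), k random
    k = [secrets.randbelow(2) for _ in range(K)]
    return [wi ^ ci for wi, ci in zip(w, encode(k))]

def recon(u, s):                        # decode u XOR s, then w = s XOR C(k)
    k = decode([ui ^ si for ui, si in zip(u, s)])
    return [si ^ ci for si, ci in zip(s, encode(k))]

w = [secrets.randbelow(2) for _ in range(K * R)]   # biometric template
s = sketch(w)
u = list(w); u[0] ^= 1; u[7] ^= 1       # a "close" measurement: 2 bit flips
assert recon(u, s) == w                 # close measurements recover w exactly
```

Note that s alone pins w down up to a codeword, which is exactly the inherent template leakage discussed above.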

A. Setup

[0339] In the Setup phase, the following algorithms are executed by a trusted authority:

• Run the TOPRF setup: ([[sk^OP]], pp_OP) ← TOPRF.Setup(1^λ, n, t).

• Compute (pp_TS, vk_TS, sk_1^TS, ..., sk_n^TS) ← TS.Gen(1^λ, n, t).

• For each i ∈ [n], give (pp_TS, vk_TS, sk_i^TS, sk_i^OP, pp_OP) to party P_i.

B. Registration for P_i

[0340] Party P_i interacts with everyone else to register its own template:

• Sample a random vector w ∈ F_q^ℓ from the distribution W.

• Run the TOPRF encoding to generate c ← TOPRF.Encode(w, rand) for some randomness rand, and send c to everyone.

• Each party P_j (j ≠ i) replies with its respective TOPRF evaluation z_j := TOPRF.Eval(sk_j^OP, c).

• Party P_i, on receiving at least t − 1 responses from parties in some set (say) S, combines them to compute h := TOPRF.Combine(w, {(j, z_j)}_{j∈S∪{i}}, rand).

• Party P_i computes keys K_j := H(h, j) for all j ∈ [n].

• Party P_i also computes s ← Sketch(w).

• Party P_i sends the pair (K_j, s) to each party P_j for all j ∈ [n].

C. SignOn Phase

[0341] In the SignOn phase, consider a party P_i that uses an input vector u and a message m on which it wants a token. P_i picks a set S of t − 1 other parties {P_j}_{j∈S} and interacts with them in the four-round protocol below. An arrowhead such as (P_i →) denotes that messages in that round are outgoing from party P_i.

• Round 1: (P_i →) P_i contacts all parties in the set S with an initialization message that contains the message to be signed, m, and an extractable commitment of its measurement, Com(u).

• Round 2: (→ P_i) Each party P_j for j ∈ S sends the sketch s back to P_i.

• Round 3: (P_i →) On receiving all the responses, P_i executes the following steps:

(a) check that the received copies of the sketch s match; if not, then abort; otherwise go to the next step;

(b) perform reconstruction w := Recon(u, s);

(c) compute the TOPRF encoding with some fresh randomness rand, c := TOPRF.Encode(w, rand), and send c to all P_j in S.

• Round 4: (→ P_i) On receiving c from P_i, each party P_j for j ∈ S executes the following steps:

(a) compute the TOPRF evaluation z_j := TOPRF.Eval(sk_j^OP, c);

(b) compute the partial signature on m as σ_j ← TS.Sign(sk_j^TS, m);

(c) encrypt the partial signature as ct_j ← Enc(K_j, σ_j);

(d) send the tuple (z_j, ct_j) to P_i.

• Output Computation: On receiving all tuples (z_j, ct_j) from the parties {P_j}_{j∈S}, P_i executes the following steps:

(a) P_i combines the TOPRF shares as z ← TOPRF.Combine(w, {(j, z_j)}_{j∈S}, rand);

(b) computes K_j := H(z, j) for each j ∈ S;

(c) decrypts each ct_j using K_j to recover all the partial signatures σ_j;

(d) combines the partial signatures to obtain the token Token and outputs it.
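The key derivation K_j := H(z, j) and the encryption of partial signatures in Round 4 (undone in the output computation) can be sketched as follows. The SHA-256-based KDF and the XOR one-time-pad cipher are toy stand-ins assumed only for illustration, not the instantiations mandated by the protocol.

```python
# Sketch of the per-party key derivation and partial-signature encryption.
# Assumptions: H is SHA-256, Enc is a toy XOR pad good for one short message.
import hashlib

def kdf(z: bytes, j: int) -> bytes:          # K_j := H(z, j)
    return hashlib.sha256(z + j.to_bytes(4, "big")).digest()

def enc(key: bytes, msg: bytes) -> bytes:    # toy cipher: msg XOR H(key)
    pad = hashlib.sha256(key).digest()
    assert len(msg) <= len(pad)              # one-block messages only
    return bytes(m ^ p for m, p in zip(msg, pad))

dec = enc                                    # XOR is its own inverse

z = b"toprf-output-h"                        # TOPRF value known to P_i and P_j
sigma_j = b"partial-signature-on-m"          # P_j's partial signature on m

ct_j = enc(kdf(z, 3), sigma_j)               # Round 4: P_j encrypts under K_3
assert dec(kdf(z, 3), ct_j) == sigma_j       # Output step: P_i decrypts
```

The point of the construction is that P_i can re-derive K_j only after a successful TOPRF combine yields z = h, so the partial signatures stay hidden from a party holding a measurement that is not close to the template.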

D. Token Verification

[0342] Given a verification key vk_TS, a message m and a token Token, the token verification algorithm outputs 1 if TS.Verify(vk_TS, m, Token) outputs 1.

[0343] Correctness: The correctness of the protocol directly follows from the correctness of the underlying primitives.

E. Security Analysis

[0344] We show that the above protocol realizes the extended ideal functionality F̂_FTTG in the presence of a semi-honest, static adversary that corrupts up to t − 1 parties. In particular, we formally prove (informal) Theorem 5 in this section. We prove this by constructing a polynomial-time simulator as follows:

The Simulator S:

[0345] During the Setup, the simulator generates the signing key pair of a threshold signature scheme and forwards the secret-key shares to the corrupt parties. It also generates the TOPRF keys and forwards the key shares to the corrupt parties. It also generates the parameters for the extractable commitment and keeps the trapdoor secret.

[0346] After the registration is done, the simulator makes a “Leak on Template” query to obtain some leakage on the template and, using that, simulates a fake sketch for the corrupt parties.

[0347] To simulate the SignOn phase, the simulator works as follows, depending on whether the initiator is corrupt or not. If the initiator is honest, then it can easily simulate the honest parties' view by using their shares of the TOPRF correctly and using a dummy measurement. If the initiator is corrupt, then it extracts the input measurement from the extractable commitment Com(u) and then makes a “Test Password” query to the ideal functionality F̂_FTTG with it. If u is close to the template, then it successfully registers a signature, which it then returns to the initiator in a threshold manner (this can be done easily by adding secret shares of zero).

[0348] We argue that for any PPT adversary A, the above simulator successfully simulates a view in the ideal world that is computationally indistinguishable from the real world. From the above description, we note the following.

[0349] The registration is perfectly simulated due to access to the leak-on-template query. Note that without this access it would have been impossible to simulate this step. To demonstrate this, consider a simple attack in which the attacker registers a dummy template, say the all-zero string. Then the secure sketch provides no guarantee. So, without the leakage access, the simulator would have no clue about the template. However, given the leakage access, the simulator can reconstruct the entire template (as there is no entropy, the information can be compressed within the allowed leakage bound).

[0350] The SignOn phase initiated by an honest party is simulated correctly due to the template-privacy guarantee provided by the underlying TOPRF. Note that in this case the adversary does not learn the signature, and hence the simulation is not required to register a correct signature with the ideal functionality. In the other case, when a corrupt party initiates the session, the extractability of the commitment scheme guarantees that the extracted commitment is indeed the correct input of the attacker, and then, using the “Test Password” query, the simulator is able to generate a signature correctly in case a close match is guessed.

XI. COMPUTER SYSTEM

[0351] Any of the computer systems mentioned herein may utilize any suitable number of subsystems. Examples of such subsystems are shown in FIG. 8 in computer apparatus 700.

In some embodiments, a computer system includes a single computer apparatus, where the subsystems can be components of the computer apparatus. In other embodiments, a computer system can include multiple computer apparatuses, each being a subsystem, with internal components. A computer system can include desktop and laptop computers, tablets, mobile phones and other mobile devices.

[0352] The subsystems shown in FIG. 8 are interconnected via a system bus 75. Additional subsystems such as a printer 74, keyboard 78, storage device(s) 79, monitor 76, which is coupled to display adapter 82, and others are shown. Peripherals and input/output (I/O) devices, which couple to I/O controller 71, can be connected to the computer system by any number of means known in the art such as input/output (I/O) port 77 (e.g., USB, FireWire®). For example, I/O port 77 or external interface 81 (e.g. Ethernet, Wi-Fi, etc.) can be used to connect computer system 10 to a wide area network such as the Internet, a mouse input device, or a scanner. The interconnection via system bus 75 allows the central processor 73 to communicate with each subsystem and to control the execution of a plurality of instructions from system memory 72 or the storage device(s) 79 (e.g., a fixed disk, such as a hard drive, or optical disk), as well as the exchange of information between subsystems. The system memory 72 and/or the storage device(s) 79 may embody a computer readable medium. Another subsystem is a data collection device 85, such as a camera, microphone, accelerometer, and the like. Any of the data mentioned herein can be output from one component to another component and can be output to the user.

[0353] A computer system can include a plurality of the same components or subsystems, e.g., connected together by external interface 81, by an internal interface, or via removable storage devices that can be connected and removed from one component to another component. In some embodiments, computer systems, subsystem, or apparatuses can communicate over a network. In such instances, one computer can be considered a client and another computer a server, where each can be part of a same computer system. A client and a server can each include multiple systems, subsystems, or components.

[0354] Aspects of embodiments can be implemented in the form of control logic using hardware circuitry (e.g. an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner. As used herein, a processor can include a single-core processor, multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked, as well as dedicated hardware. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement embodiments of the present invention using hardware and a combination of hardware and software.

[0355] Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C, C++, C#, Objective-C, Swift, or scripting language such as Perl or Python using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission. A suitable non-transitory computer readable medium can include random access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.

[0356] Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.

[0357] Any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps. Thus, embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps. Although presented as numbered steps, steps of methods herein can be performed at the same time or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, circuits, or other means for performing these steps.

[0358] The specific details of particular embodiments may be combined in any suitable manner without departing from the spirit and scope of embodiments of the invention.

However, other embodiments of the invention may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.

[0359] The above description of exemplary embodiments of the invention has been presented for the purpose of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.

[0360] A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary. The use of “or” is intended to mean an “inclusive or,” and not an “exclusive or” unless specifically indicated to the contrary.

[0361] All patents, patent applications, publications and description mentioned herein are incorporated by reference in their entirety for all purposes. None is admitted to be prior art.

XII. REFERENCES

[1] Google Pixel Fingerprint. https://support.google.com/pixelphone/answer/6285273?hl=en. Accessed on October 2, 2018. 1

[2] About Face ID advanced technology. https://support.apple.com/en-us/HT208108. Accessed on October 2, 2018. 1

[3] FIDO Alliance https://fidoalliance.org/. Accessed on October 2, 2018. 1

[4] Yevgeniy Dodis, Leonid Reyzin, and Adam Smith. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. In Christian Cachin and Jan Camenisch, editors, Advances in Cryptology - EUROCRYPT 2004, volume 3027 of Lecture Notes in Computer Science, pages 523-540, Interlaken, Switzerland, May 2-6, 2004. Springer, Heidelberg, Germany. 2

[5] Pratyay Mukherjee and Daniel Wichs. Two round multiparty computation via multikey fhe. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 735-763. Springer, 2016. 3, 5, 16

[6] Chris Peikert and Sina Shiehian. Multi-key fhe from lwe, revisited. In TCC, 2016. 3, 5, 16

[7] Zvika Brakerski and Renen Perlman. Lattice-based fully dynamic multi-key fhe with short ciphertexts. In CRYPTO, pages 190-213. Springer, 2016. 3, 5, 16

[8] Sanjam Garg and Akshayaram Srinivasan. Two-round multiparty secure computation from minimal assumptions. EUROCRYPT, 2018. 3, 5, 16

[9] Fabrice Benhamouda and Huijia Lin. k-round mpc from k-round ot via garbled interactive circuits. EUROCRYPT, 2018. 3, 5, 16

[10] Shashank Agrawal, Peihan Miao, Payman Mohassel, and Pratyay Mukherjee. Pasta: Password-based threshold authentication. IACR Cryptology ePrint Archive, 2018:885, 2018. 4, 8

[11] Prabhanjan Ananth, Saikrishna Badrinarayanan, Aayush Jain, Nathan Manohar, and Amit Sahai. From FE combiners to secure MPC and back. IACR Cryptology ePrint Archive, 2018:457, 2018. 5, 16

[12] Andrew Chi-Chih Yao. How to generate and exchange secrets (extended abstract). In 27th Annual Symposium on Foundations of Computer Science, Toronto, Canada, 27-29 October 1986, pages 162-167. IEEE Computer Society, 1986. 6

[13] Payman Mohassel, Mike Rosulek, and Ye Zhang. Fast and secure three-party computation: The garbled circuit approach. In Indrajit Ray, Ninghui Li, and Christopher Kruegel, editors, Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, October 12-16, 2015, pages 591-602. ACM, 2015. 7, 8

[14] Pascal Paillier. Public-key cryptosystems based on composite degree residuosity classes. In International Conference on the Theory and Applications of Cryptographic Techniques, pages 223-238. Springer, 1999. 7, 10, 12, 24

[15] Taher El Gamal. A public key cryptosystem and a signature scheme based on discrete logarithms. In G. R. Blakley and David Chaum, editors, Advances in Cryptology, Proceedings of CRYPTO '84, Santa Barbara, California, USA, August 19-22, 1984, Proceedings, volume 196 of Lecture Notes in Computer Science, pages 10-18. Springer, 1984. 7, 41

[16] Alexandra Boldyreva. Threshold signatures, multi signatures and blind signatures based on the gap-diffie-hellman-group signature scheme. In International Workshop on Public Key Cryptography, pages 31-46. Springer, 2003. 9, 10, 33, 34

[17] Victor Shoup. Practical threshold signatures. In Bart Preneel, editor, Advances in Cryptology - EUROCRYPT 2000, International Conference on the Theory and Application of Cryptographic Techniques, Bruges, Belgium, May 14-18, 2000, Proceeding, volume 1807 of Lecture Notes in Computer Science, pages 207-220. Springer, 2000. 9, 33

[18] Oded Goldreich. The Foundations of Cryptography - Volume 2, Basic Applications. Cambridge University Press, 2004. 10

[19] Rafail Ostrovsky, Anat Paskin-Cherniavsky, and Beni Paskin-Chemiavsky. Maliciously circuit-private fhe. In International Cryptology Conference, pages 536-553. Springer, 2014. 10

[20] Ronald Cramer, Ivan Damgard, and Jesper B Nielsen. Multiparty computation from threshold homomorphic encryption. In International Conference on the Theory and Applications of Cryptographic Techniques, pages 280-300. Springer, 2001. 12

[21] William Aiello, Yuval Ishai, and Omer Reingold. Priced oblivious transfer: How to sell digital goods. In Birgit Pfitzmann, editor, Advances in Cryptology - EUROCRYPT 2001, International Conference on the Theory and Application of Cryptographic Techniques, Innsbruck, Austria, May 6-10, 2001, Proceeding, volume 2045 of Lecture Notes in Computer Science, pages 119-135. Springer, 2001. 24

[22] Moni Naor and Benny Pinkas. Efficient oblivious transfer protocols. In S. Rao Kosaraju, editor, Proceedings of the Twelfth Annual Symposium on Discrete Algorithms, January 7-9, 2001, Washington, DC, USA., pages 448-457. ACM/SIAM, 2001. 24

[23] Chris Peikert, Vinod Vaikuntanathan, and Brent Waters. A framework for efficient and composable oblivious transfer. In David A. Wagner, editor, Advances in Cryptology - CRYPTO 2008, 28th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2008. Proceedings, volume 5157 of Lecture Notes in Computer Science, pages 554-571. Springer, 2008. 24, 42

[24] Shai Halevi and Yael Tauman Kalai. Smooth projective hashing and two-message oblivious transfer. J. Cryptology, 25(1), 2012. 24

[25] Stanislaw Jarecki, Aggelos Kiayias, Hugo Krawczyk, and Jiayu Xu. TOPPSS: cost-minimal password-protected secret sharing based on threshold OPRF. In Dieter Gollmann, Atsuko Miyaji, and Hiroaki Kikuchi, editors, Applied Cryptography and Network Security - 15th International Conference, ACNS 2017, Kanazawa, Japan, July 10-12, 2017, Proceedings, volume 10355 of Lecture Notes in Computer Science, pages 39-58. Springer, 2017. 33, 43

[26] Robert J. McEliece and Dilip V. Sarwate. On sharing secrets and reed-solomon codes. Communications of the ACM, 24(9):583-584, 1981. 44

[27] Pierre-Alain Dupont, Julia Hesse, David Pointcheval, Leonid Reyzin, and Sophia Yakoubov. Fuzzy password-authenticated key exchange. In Jesper Buus Nielsen and Vincent Rijmen, editors, Advances in Cryptology - EUROCRYPT 2018, pages 393-424, Cham, 2018. Springer International Publishing. 43, 44

XIII. APPENDIX A: ADDITIONAL PRELIMINARIES

A. Threshold Oblivious Pseudorandom Functions

[0362] We now define the notion of a UC-secure Threshold Oblivious Pseudorandom Function, taken almost verbatim from Jarecki et al. [25]. We refer the reader to Jarecki et al. [25] for the security definition; here we only list the algorithms that form part of the primitive.

[0363] Definition 8 A threshold oblivious pseudo-random function TOPRF is a tuple of four PPT algorithms (TOPRF.Setup, TOPRF.Encode, TOPRF.Eval, TOPRF.Combine) described below.

• TOPRF.Setup(1^λ, n, t) → ([[sk]], pp). It generates n secret key shares sk_1, sk_2, ..., sk_n and public parameters pp. Share sk_i is given to party i. (pp will be an implicit input in the algorithms below.)

• TOPRF.Encode(x, ρ) =: c. It generates an encoding c of input x using randomness ρ.

• TOPRF.Eval(sk_i, c) =: z_i. It generates shares of the TOPRF value from an encoding. Party i computes the i-th share z_i from c by running TOPRF.Eval with sk_i and c.

• TOPRF.Combine(x, {(i, z_i)}_{i∈S}, ρ) =: (h or ⊥). It combines the shares received from parties in the set S using randomness ρ to generate a value h. If the algorithm fails, its output is denoted by ⊥.

B. Robust Secret Sharing

[0364] We now give the definition of a Robust Secret Sharing scheme, taken almost verbatim from Dupont et al. [27].

[0365] Definition 9 Let λ ∈ N, q be a λ-bit prime, F_q be a finite field and n, t, m, r ∈ N with t < r < n and m < r. An (n, t, r) robust secret sharing scheme (RSS) consists of two probabilistic algorithms RSS.Share : F_q → F_q^n and RSS.Recon : F_q^n → F_q with the following properties:

• t-privacy: for any s, s′ ∈ F_q, A ⊂ [n] with |A| ≤ t, the projections c_A of c ← RSS.Share(s) and c′_A of c′ ← RSS.Share(s′) are identically distributed.

• r-robustness: for any s ∈ F_q, A ⊂ [n] with |A| ≥ r, any c output by RSS.Share(s), and any c̃ such that c̃_A = c_A, it holds that RSS.Recon(c̃) = s.

[0366] In other words, an (n, t, r)-RSS is able to reconstruct the shared secret even if the adversary tampered with up to (n − r) shares, while each set of t shares is distributed independently of the shared secret s and thus reveals nothing about it.

[0367] We say that a Robust Secret Sharing is linear if, similar to standard secret sharing, the reconstruction algorithm RSS. Recon only performs linear operations on its input shares.

[0368] Imported Lemma 1 Shamir's secret sharing scheme is an (n, t, r) robust linear secret sharing scheme.
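A minimal sketch of robust reconstruction for Shamir shares, assuming small toy parameters: it brute-forces candidate degree-t interpolations and accepts one consistent with at least r shares, standing in for the efficient Reed-Solomon decoding of McEliece and Sarwate [26].

```python
# Toy (n, t, r)-robust reconstruction for Shamir sharing: brute-force search
# over degree-t interpolations, accepting a candidate that agrees with at
# least r shares. The field size and parameters are illustrative assumptions.
from itertools import combinations
import secrets

q = 1289                                # toy prime field F_q

def share(s, n, t):                     # random degree-t polynomial, f(0) = s
    coeffs = [s] + [secrets.randbelow(q) for _ in range(t)]
    return {i: sum(c * pow(i, d, q) for d, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)}

def interpolate(points, x):             # Lagrange evaluation at x; note the
    total = 0                           # reconstruction is linear in the shares
    for i, yi in points.items():
        lam = 1
        for j in points:
            if j != i:
                lam = lam * (x - j) % q * pow(i - j, -1, q) % q
        total = (total + yi * lam) % q
    return total

def robust_recon(shares, t, r):
    for subset in combinations(shares, t + 1):
        pts = {i: shares[i] for i in subset}
        agree = sum(interpolate(pts, i) == y for i, y in shares.items())
        if agree >= r:                  # consistent with at least r shares
            return interpolate(pts, 0)
    return None                         # no candidate found

n, t, r = 7, 2, 6
c = share(123, n, t)
c[4] = (c[4] + 1) % q                   # adversary tampers with n - r = 1 share
assert robust_recon(c, t, r) == 123     # the secret is still recovered
```

Since any degree-t polynomial agreeing with r ≥ t + 1 + (n − r) honest shares must be the original one, the accepted candidate is unique, which is the robustness property in miniature.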

[0369] The above lemma is borrowed from instantiating Lemma 5 of Dupont et al. [27] using the work of McEliece and Sarwate [26], as described in Dupont et al. [27].