Microsoft Research Redmond Cryptography and Privacy Colloquium

Up Next

CanDID: Can-Do Decentralized Identity with Legacy Compatibility, Sybil-Resistance, and Accountability

Harjasleen Malvai, UIUC
Friday, Sep 3, 2021 | 9:05 AM PDT | remote talk

Abstract

In this talk, I will present CanDID, a platform for practical, user-friendly realization of decentralized identity. Decentralized identity (DID) is the idea of empowering end users with management of their own credentials. While decentralized identity promises to give users greater control over their private data, it burdens them with management of private keys, creating a significant risk of key loss. Existing and proposed approaches also presume the spontaneous availability of a credential-issuance ecosystem, creating a bootstrapping problem. Further, they omit essential functionality, such as privacy-preserving resistance to Sybil attacks and the ability to detect misbehaving or sanctioned users.

CanDID addresses these challenges by issuing credentials in a user-friendly way that draws on data from existing, unmodified web service providers, in a secure and private manner. Such legacy compatibility similarly enables CanDID users to leverage their existing online accounts for recovery of lost keys. Using a decentralized committee of nodes, CanDID provides strong confidentiality for users’ keys, real-world identities, and data, while preventing users from spawning multiple identities and allowing identification (and blacklisting) of sanctioned users.

Biography

Jasleen is a PhD student at UIUC, working with Prof. Andrew Miller. She is interested in the interplay between traditional cryptography and blockchains. Jasleen recently completed her master’s degree at Cornell University, prior to which she received a bachelor’s degree from Brown University.


Past speakers

Attestation and Isolation Mechanisms of AMD SEV | Luca Wilke | Aug 27, 2021

Attestation and Isolation Mechanisms of AMD SEV

Luca Wilke, University of Lübeck

Friday, Aug 27, 2021 | 9:05 AM PDT | remote talk

Abstract

The ongoing trend of moving data and computation to the cloud is met with concerns regarding privacy and protection of intellectual property. To alleviate these concerns, cloud providers offer execution in isolated and trusted environments, removing themselves from the trust base. Trusted Execution Environments are built on two main pillars: attestation and isolation. The implementations of these two features vary broadly, ranging from pure hardware- or software-enforced access protection to sophisticated schemes with complex cryptographic protection. However, stronger isolation and protection usually come at the cost of decreased performance. In this talk, we will focus on the attestation and isolation mechanisms of AMD SEV and explore two attacks on SEV, highlighting why integrity protection is paramount to secure TEEs in untrusted environments.

Biography

Luca Wilke is a PhD student advised by Thomas Eisenbarth at the University of Lübeck, Germany, where he also received a master’s degree in computer science in 2020. Luca’s research focuses on system security, especially Trusted Execution Environments like AMD SEV, Intel SGX, and Keystone.

On Banquet and Rainier - Smaller and Faster Signatures from MPC-in-the-Head | Daniel Kales | Aug 20, 2021

On Banquet and Rainier – Smaller and Faster Signatures from MPC-in-the-Head

Daniel Kales, TU Graz

Friday, Aug 20, 2021 | 9:05 AM PDT | remote talk

Abstract

Banquet is a signature scheme built on the MPC-in-the-Head paradigm and is intended to provide security against both classical and quantum attackers. The security against quantum attackers stems from the use of symmetric-key cryptography such as block ciphers and hash functions, without relying on certain number-theoretic hardness assumptions that are broken in the presence of quantum attackers. In contrast to similar designs such as Picnic, Banquet relies only on standardized symmetric-key primitives such as AES and SHAKE. Building on Banquet, we improve certain symmetric-key primitives to better fit signature schemes, and we also propose a new signature scheme by co-designing a proof system and a new block cipher. We show how to provably remove the need to include the key schedule of block ciphers. This simplifies schemes like Picnic and also leads to the fastest and smallest AES-based signatures.

Second, we investigate a variant of AES with larger S-boxes we call LSAES, for which we can argue that it is very likely at least as strong as AES, further reducing the size of AES-based signatures. Finally, we present a new signature scheme, Rainier, based on a new block cipher called Rain combined with a Banquet-like proof system. To the best of our knowledge, it is the first MPCitH-based signature scheme which can produce signatures that are less than 5 KB in size; it also outperforms previous Picnic and Banquet instances in all performance metrics.

Biography

Daniel Kales is a Ph.D. student in the Cryptology and Privacy group at IAIK, TU Graz. His research focus is on secure multiparty computation and post-quantum signatures. He is a contributor to the Picnic post-quantum signature scheme, currently an alternate candidate in the NIST standardization process, and one of the main authors of the optimized implementation of Picnic. One of his additional areas of interest is high-performance implementations for cryptographic protocols such as zero-knowledge proofs based on multiparty computation techniques and private set intersection.

DORY: An Encrypted Search System with Distributed Trust | Emma Dauterman | Aug 6, 2021

DORY: An Encrypted Search System with Distributed Trust

Emma Dauterman, UC Berkeley

Friday, Aug 6, 2021 | 9:05 AM PDT | remote talk

Abstract

Efficient, leakage-free search on encrypted data has remained an unsolved problem for the last two decades; efficient schemes are vulnerable to leakage-abuse attacks, and schemes that eliminate leakage are impractical to deploy. To overcome this tradeoff, we reexamine the system model. We surveyed five companies providing end-to-end encrypted filesharing to better understand what they require from an encrypted search system. Based on our findings, we design and build DORY, an encrypted search system that addresses real-world requirements and protects search access patterns; namely, when a user searches for a keyword over the files within a folder, the server learns only that a search happens in that folder, but does not learn which documents match the search, the number of documents that match, or other information about the keyword. DORY splits trust between multiple servers to protect against a malicious attacker who controls all but one of the servers. We develop new cryptographic and systems techniques to meet the efficiency and trust model requirements outlined by the companies we surveyed. We implement DORY and show that it performs orders of magnitude better than a baseline built on ORAM. Parallelized across 8 servers, each with 16 CPUs, DORY takes 116ms to search roughly 50K documents and 862ms to search over 1M documents. The paper appeared at OSDI’20 and is joint work with Eric Feng, Ellen Luo, Raluca Ada Popa, and Ion Stoica.

Biography

Emma Dauterman is a third-year Ph.D. student in the UC Berkeley RISELab, where she is advised by Raluca Ada Popa and Ion Stoica. She is broadly interested in building secure systems using cryptography. Emma completed her B.S. and M.S. in computer science at Stanford University, where she was advised by David Mazieres. Her work is supported by an NSF GRFP fellowship and a Microsoft Ada Lovelace research fellowship.

Machine Learning Integrity in Adversarial Environments | Alina Oprea | July 23, 2021

Machine Learning Integrity in Adversarial Environments

Alina Oprea, Khoury College of Computer Sciences, Northeastern University

Friday, July 23, 2021 | 9:05 AM PDT | remote talk

Abstract

Machine learning is increasingly being used for automated decisions in applications such as health care, finance, cyber security, and personalized recommendations. These critical applications require strong guarantees on the integrity of the machine learning models used to make predictions. The area of adversarial machine learning studies the effect of adversarial attacks against machine learning models and aims to design robust defense algorithms. In this talk, I will describe our work on poisoning attacks against machine learning at training time. I will introduce several types of poisoning attacks, with different objectives and adversarial capabilities, and show their impact in multiple applications. I will also discuss the challenges of designing machine learning models robust against poisoning attacks.

Biography

Alina Oprea is an Associate Professor at Northeastern University in the Khoury College of Computer Sciences. She joined Northeastern University in Fall 2016 after spending 9 years as a research scientist at RSA Laboratories. Her research interests are broadly in cyber security, with a focus on adversarial machine learning, AI for threat detection, cloud security, and applied cryptography. She is the recipient of the Technology Review TR35 award for research in cloud security in 2011 and the recipient of the Google Security and Privacy Award in 2019. Alina served as Program Committee co-chair of the IEEE Security and Privacy Symposium in 2020 and 2021, and is currently an Associate Editor for the ACM Transactions on Privacy and Security (TOPS) journal and the IEEE Security and Privacy Magazine.

Towards Effective and Efficient Privacy Preserving Machine Learning | Nitin Agrawal | July 16, 2021

Towards Effective and Efficient Privacy Preserving Machine Learning

Nitin Agrawal, University of Oxford

Friday, July 16, 2021 | 9:05 AM PDT | remote talk

Abstract

Machine learning has proven to be a great asset to humanity in recent times. However, areas such as healthcare and finance have benefited from it in only a limited sense, if at all. This can primarily be attributed to sensitivity and privacy concerns over the data to be processed. Privacy-preserving machine learning mitigates these privacy challenges through private computation over sensitive data. However, the process is not entirely trivial or free of trade-offs.

In this talk, I will focus on our proposals for designing effective and efficient protocols that facilitate end-to-end privacy-preserving machine learning, particularly for neural networks. I will also talk about our user study, focused on enhancing usability and efficiency, assisting design, and ensuring equitability in privacy-preserving computation frameworks at large [CHI’21]. This study took the form of semi-structured interviews with various stakeholders in the privacy-preserving computation food chain. With this talk, and my work more generally, I intend to motivate design, development, and further research in effective, efficient, and equitable privacy-preserving machine learning.

Biography

Nitin is a doctoral student at the Department of Computer Science, University of Oxford. His research involves privacy-aware machine learning. He works on designing privacy-preserving and equitable data-driven inference and training architectures, primarily employing secure multi-party computation protocols and federated learning. Nitin is a Commonwealth Scholar and holds a B.Tech in information technology and mathematical innovations from the University of Delhi.

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models | Emily Wenger | June 25, 2021

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models

Emily Wenger, University of Chicago

Friday, June 25, 2021 | 9:05 AM PDT | remote talk

Abstract

Today’s proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvas the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems. Our system, Fawkes, helps individuals inoculate their images against unauthorized facial recognition models. This talk will explain the techniques behind Fawkes and its performance. It will then discuss the future of Fawkes-like tools in the complicated fight for individual privacy.

Biography

Emily Wenger is a 3rd year PhD student in computer science at the University of Chicago. Her research explores the limitations, vulnerabilities, and privacy implications of deep neural networks. She is passionate about developing user-centric protections against intrusive machine learning systems. Her work has been reported on by the New York Times, MIT Tech Review, and the Verge, among others. Emily is the recipient of the GFSD Fellowship, the Harvey Fellowship, and the University of Chicago Neubauer Fellowship. She holds a BS in mathematics and physics from Wheaton College (IL).

Online Tracking and Inferencing: Using Transparency to Obtain User Perspectives | Michelle Mazurek | June 9, 2021

Online Tracking and Inferencing: Using Transparency to Obtain User Perspectives

Michelle Mazurek, University of Maryland

Thursday, June 9, 2021 | 11:10 AM PDT | remote talk

Abstract

Online tracking and associated behavioral advertising is pervasive, but the extent and mechanics of tracking and inferencing are often opaque to users. In this talk, I will discuss a line of research investigating user perspectives on tracking and inferencing. We use transparency — in particular users’ real data from web browsing and Twitter use — as a prompt for increasing users’ understanding and measuring their opinions of different kinds of tracking and inferencing. Our work sheds light on what users do and don’t understand, as well as what they find useful, creepy, and unacceptable. It also examines lesser-known kinds of inferencing and ad targeting that go beyond interests, such as event targeting, follower lookalikes, and tailored audiences. Our results have implications for helping users exercise control (when possible) over the ways their information is used; for platforms that are concerned with user comfort; and for advocates and policymakers who are considering new transparency requirements as well as regulations on when and how users can be tracked.

Biography

Michelle Mazurek is an Associate Professor in the Computer Science Department and the Institute for Advanced Computer Studies at the University of Maryland, College Park, where she directs the Maryland Cybersecurity Center. Her research aims to understand and improve the human elements of security- and privacy-related decision making. Recent projects include examining how and why developers make security and privacy mistakes; investigating the vulnerability-discovery process; evaluating the use of threat-modeling in large-scale organizations; and analyzing how users learn about and decide whether to adopt security advice. Her work has been recognized with an NSA Best Scientific Cybersecurity Paper award and three USENIX Security Distinguished Paper awards. She was Program Chair for the Symposium on Usable Privacy and Security (SOUPS) for 2019 and 2020 and will be Program Chair for the Privacy Enhancing Technologies Symposium (PETS) for 2022 and 2023.

Full Database Reconstruction in Two Dimensions | Francesca Falzon | Nov 24, 2020

Full Database Reconstruction in Two Dimensions

Francesca Falzon, University of Chicago

Tuesday, Nov 24, 2020 | 9:10 AM PST | remote talk

Abstract

In the past few years, we have seen multiple attacks on one-dimensional databases that support range queries. These attacks achieve full database reconstruction by exploiting access pattern leakage along with known query distribution or search pattern leakage. We are the first to go beyond one dimension, exploring this threat in two dimensions. We unveil an intrinsic limitation of reconstruction attacks by showing that there can be an exponential number of distinct databases that produce equivalent leakage. Next, we present a full database reconstruction attack. Our algorithm runs in polynomial time and returns a poly-size encoding of all databases consistent with the given leakage profile. We implement our algorithm and observe real-world databases that admit a large number of equivalent databases, which aligns with our theoretical results.

Biography

Francesca Falzon is currently a third-year Ph.D. student in the Department of Computer Science at the University of Chicago. Prior to this, she completed her bachelor’s degree in mathematics at Rutgers University. She is advised by Prof. David Cash and is currently working on projects relating to encrypted databases and searchable encryption.

Side-channel-resistant Storage for SGX enclaves | Adil Ahmad | Nov 3, 2020

Side-channel-resistant Storage for SGX enclaves

Adil Ahmad, Purdue University

Tuesday, Nov 3, 2020 | 9:10 AM PST | remote talk

Abstract

Nowadays, cloud machines process highly confidential data, including personally identifiable information, healthcare, and financial data, but are challenging to secure given software complexity, untrusted system administrators, and multi-tenancy. A promising solution to this problem is to use a hardware feature called Intel SGX, which secures confidential data within isolated execution environments, called enclaves. However, SGX does not protect against memory-based side channels, including page faults, page access bits, and cache attacks. Prior research has shown that these side channels reveal enclave access patterns, leaking confidential enclave data (e.g., cryptographic keys).

In this talk, I will focus on designing side-channel-resistant yet efficient storage for SGX enclaves. First, I will briefly introduce my prior research in this area: Obliviate [NDSS18] and Obfuscuro [NDSS19], which employ Oblivious RAM (ORAM) to secure filesystem services and obfuscate enclave program execution, respectively. Then, I will show that scalability issues surrounding ORAM-based systems like Obliviate and Obfuscuro can limit their applicability. Building on these results, I will dive into the specifics of our latest research, TrustOre [CCS20], which outsources security-sensitive data to an external FPGA device, providing strong side-channel protection. My talk will discuss the challenges introduced by a trusted component external to the enclave (i.e., the FPGA device) and introduce new attestation, channel-establishment, and communication designs to tackle these challenges. Furthermore, I will present TrustOre’s evaluation results, which show that it incurs significantly lower data access latency (almost two orders of magnitude) and constant normalized throughput, unlike ORAM-based SGX systems. Finally, I will end my talk with a discussion on the open side-channel problems yet to be successfully addressed by TrustOre.

Biography

Adil Ahmad is a PhD student at Purdue University, advised by Dr. Pedro Fonseca (Purdue University) and Dr. Byoungyoung Lee (Seoul National University). His research interests are broadly in systems and security, with a focus on securing remote data and computation using hardware-assisted trusted computing solutions (e.g., Intel SGX and AMD SEV). His research has featured in top-tier computer security venues including NDSS, CCS, and USENIX Security.

Cryptε: Crypto-Assisted Differential Privacy on Untrusted Servers | Amrita Roy Chowdhury | Oct 13, 2020

Cryptε: Crypto-Assisted Differential Privacy on Untrusted Servers

Amrita Roy Chowdhury, University of Wisconsin-Madison

Tuesday, Oct 13, 2020 | 9:00 AM PDT | remote talk

Abstract

Differential privacy (DP) is currently the de facto standard for achieving privacy in data analysis, and is typically implemented in either the “central” or the “local” model. The local model has been more popular for commercial deployments as it does not require a trusted data collector. This increased privacy, however, comes at the cost of utility and algorithmic expressibility as compared to the central model.

In this talk, I will be presenting Cryptε, a system and programming framework that (1) achieves the accuracy guarantees and algorithmic expressibility of the central model (2) without any trusted data collector like in the local model. Cryptε achieves the “best of both worlds” by employing two non-colluding untrusted servers that run DP programs on encrypted data from the data owners. In theory, straightforward implementations of DP programs using off-the-shelf secure multi-party computation tools can achieve the above goal. However, in practice, they are beset with many challenges like poor performance and tricky security proofs. To this end, Cryptε allows data analysts to author logical DP programs that are automatically translated to secure protocols that work on encrypted data. These protocols ensure that the untrusted servers learn nothing more than the noisy outputs, thereby guaranteeing DP for all Cryptε programs. Cryptε supports a rich class of DP programs that can be expressed via a small set of transformation and measurement operators followed by arbitrary post-processing. Further, I will talk about a novel performance optimization that leverages the fact that the output is noisy. As a result, Cryptε achieves performance that is practical for real-world usage.
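As a rough illustration of the two-server model (my own sketch, not Cryptε’s actual protocol, which works over encrypted data with DP noise added inside the protocol), additive secret sharing lets two non-colluding servers each hold a share that reveals nothing on its own, while the combined shares reconstruct an aggregate:

```python
# Minimal sketch of additive secret sharing between two non-colluding
# servers. In a Cryptε-style system, the servers would also jointly add
# calibrated noise before revealing the output, so only a DP result leaks.
import random

P = 2**61 - 1  # modulus for additive sharing (illustrative choice)

def share(x):
    """Split x into two additive shares mod P; each alone looks random."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Data owners share their values; each server sums its shares locally.
values = [3, 1, 4, 1, 5]
shares = [share(v) for v in values]
sum0 = sum(s0 for s0, _ in shares) % P   # held by server 0
sum1 = sum(s1 for _, s1 in shares) % P   # held by server 1
assert reconstruct(sum0, sum1) == sum(values)
```

Neither server’s local sum reveals anything about the individual values; only the combination of both does.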

Biography

Amrita Roy Chowdhury is a PhD student at the University of Wisconsin-Madison and is advised by Prof. Somesh Jha. She completed her Bachelor of Engineering in Computer Science from the Indian Institute of Engineering Science and Technology, Shibpur, where she was awarded the President of India Gold Medal. Her research focuses on exploring the associations between differential privacy and cryptography and proposing novel algorithms that bring to light the rich interrelations between the two areas, both in theory and practice.

Reducing Metadata Leakage from Encrypted Files and Communication with PURBs | Ludovic Barman | Sept 8, 2020

Reducing Metadata Leakage from Encrypted Files and Communication with PURBs

Ludovic Barman, EPFL

Tuesday, Sept 8, 2020 | 9:00 AM PDT | remote talk

Abstract

Most encrypted data formats leak metadata via their plaintext headers, such as format version, encryption schemes used, number of recipients who can decrypt the data, and even the recipients’ identities. This leakage can pose security and privacy risks to users, e.g., by revealing the full membership of a group of collaborators from a single encrypted e-mail, or by enabling an eavesdropper to fingerprint the precise encryption software version and configuration the sender used.

We propose that future encrypted data formats improve security and privacy hygiene by producing Padded Uniform Random Blobs, or PURBs: ciphertexts indistinguishable from random bit strings to anyone without a decryption key. A PURB’s content leaks nothing at all, not even the application that created it, and is padded such that even its length leaks as little as possible.

Biography

Ludovic is a 5th-year Ph.D. student at EPFL, working with Prof. Jean-Pierre Hubaux and Prof. Bryan Ford on the systems aspects of security and privacy. His focus has been on anonymous communication systems, quantifying and reducing metadata leakage from communications, and exploiting metadata in traffic-analysis attacks.

EnclaveDom: Privilege Separation for Large-TCB Applications in Trusted Execution Environments | Marcela Melara | Sept 1, 2020

EnclaveDom: Privilege Separation for Large-TCB Applications in Trusted Execution Environments

Marcela Melara, Intel Labs

Tuesday, Sept 1, 2020 | 9:00 AM PDT | remote talk

Abstract

Trusted execution environments (TEEs) such as Intel(R) SGX provide hardware-isolated execution areas in memory, called enclaves. Porting existing applications to TEEs often requires considerable refactoring efforts, as TEEs provide a restricted interface to standard OS features. To ease development efforts, TEE application developers often choose to run their unmodified application in a library OS container that provides a full in-enclave OS interface. Yet, prior research [1] suggests that this large-TCB development approach may leave sensitive in-enclave data exposed to potential bugs or vulnerabilities in third-party code imported into the application. Importantly, because the TEE libOS and the application run in the same enclave address space, previous work indicates that even the libOS management data structures (e.g. file descriptor table) may be vulnerable to attack, where in traditional OSes these data structures may be protected via privilege isolation.

In this talk, we present EnclaveDom, a privilege separation system for large-TCB TEE applications that partitions an enclave into tagged memory regions and enforces per-region access rules. EnclaveDom is implemented on Intel SGX using Memory Protection Keys (MPK) for memory tagging, and we evaluate EnclaveDom in an experimental integration with the Graphene-SGX library OS. Our prototype helps protect internal libOS management data structures against tampering by application-level code.

[1] Y. Shen, Y. Chen, K. Chen, H. Tian, and S. Yan. “To Isolate, or to Share? That is a Question for Intel SGX.” In Proc. APSys’ 18, Aug 2018.

Biography

Marcela Melara is a Research Scientist in the Security and Privacy Group at Intel Labs. Her current research focuses on trustworthy distributed systems and cloud computing security. Prior to joining Intel, she received her M.S.E. and PhD in Computer Science from Princeton University, and her Bachelor’s degree from Hobart and William Smith Colleges. She is a Siebel Scholar, a member of Phi Beta Kappa, and her research on CONIKS was awarded the Caspar Bowden PET Award in 2017. Marcela is also a mentor for Cientifico Latino and a volunteer for the Future Scientists Program.

Ghostor: Toward a Secure Data-Sharing System from Decentralized Trust | Yuncong Hu | Aug 25, 2020

Ghostor: Toward a Secure Data-Sharing System from Decentralized Trust

Yuncong Hu, UC Berkeley

Tuesday, Aug 25, 2020 | 9:00 AM PDT | remote talk

Abstract

Data-sharing systems are often used to store sensitive data. Both academia and industry have proposed numerous solutions to protect user privacy and data integrity from a compromised server. Practical state-of-the-art solutions, however, use weak threat models based on centralized trust—they assume that part of the server will remain uncompromised, or that the adversary will not perform active attacks. We propose Ghostor, a data-sharing system that, using only decentralized trust, (1) hides user identities from the server, and (2) allows users to detect server-side integrity violations. To achieve (1), Ghostor avoids keeping any per-user state at the server, requiring us to redesign the system to avoid common paradigms like per-user authentication and user-specific mailboxes. To achieve (2), Ghostor develops a technique called verifiable anonymous history. Ghostor leverages a blockchain rarely, publishing only a single hash to the blockchain for the entire system once every epoch. We measured that Ghostor incurs a 4–5x throughput overhead compared to an insecure baseline. Although significant, Ghostor’s overhead may be worth it for security- and privacy-sensitive applications.

Based on joint work with Sam Kumar, and Raluca Ada Popa: https://eprint.iacr.org/2020/648.pdf

Biography

Yuncong Hu is a Ph.D. student at UC Berkeley, advised by Prof. Raluca Ada Popa and Prof. Alessandro Chiesa. He is primarily interested in the area of cryptography and security, including applied cryptography, decentralized systems, and zero-knowledge proofs. His research focuses on building systems based on decentralized trust and developing cryptographic tools for decentralized applications.

Secure computation for contact tracing | Ni Trieu | Aug 11, 2020

Secure computation for contact tracing

Ni Trieu, UC Berkeley/Arizona State University

Tuesday, Aug 11, 2020 | 9:00 AM PDT | remote talk

Abstract

Contact tracing is an essential tool in containing infectious diseases such as COVID-19. It enables users to search over released tokens in order to learn if they have recently been in the proximity of an infectious user. However, prior approaches have several weaknesses: (1) they do not provide end-to-end privacy in the collection and querying of tokens, (2) they do not utilize the huge volume of data stored in business databases and individual digital devices, or (3) they impose heavy bandwidth or computational demands on client devices. In this talk, I will discuss the existing cryptographic attacks and privacy concerns in deploying contact tracing applications, and how secure computation can tackle these problems. Moreover, I will describe our system design to improve the security guarantee and performance of the existing contact tracing frameworks. Along with this, I will present our new protocol for delegated private set intersection cardinality, which allows clients to delegate their contact tracing computation to cloud servers without compromising privacy.
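As a toy illustration of the kind of primitive involved (my own sketch of a classic DDH-style PSI-cardinality flow with demo-sized parameters, not the delegated protocol from the talk, and NOT secure as written): because exponentiation commutes, two parties that each blind the other’s hashed items end up with matching values exactly on the intersection, revealing only its size.

```python
# Toy DDH-style PSI cardinality: (H(x)^a)^b == (H(x)^b)^a, so doubly
# blinded items collide exactly on the intersection. Demo prime only;
# real protocols use a proper prime-order (e.g. elliptic-curve) group.
import hashlib
import random

P = 2**127 - 1  # a Mersenne prime, chosen here purely for the demo

def h(item: str) -> int:
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(points, secret):
    return [pow(g, secret, P) for g in points]

alice_set = {"user3", "user5", "user9"}
bob_set = {"user5", "user9", "user42"}

a = random.randrange(2, P - 1)  # Alice's secret exponent
b = random.randrange(2, P - 1)  # Bob's secret exponent

# Each party blinds its own hashed items and exchanges them; the other
# party blinds again. Only doubly blinded values are ever compared.
alice_double = set(blind(blind([h(x) for x in alice_set], a), b))
bob_double = set(blind(blind([h(x) for x in bob_set], b), a))

print(len(alice_double & bob_double))  # → 2: the intersection size, not the items
```

Neither side learns which items matched from the blinded values alone, only how many.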

Biography

I am an incoming assistant professor of computer science at Arizona State University this fall. I am currently a postdoc at UC Berkeley. I received my doctorate from Oregon State University. My research interests are in the area of cryptography and security, with a specific focus on secure computation and its applications such as private set intersection, private database queries, and privacy-preserving machine learning.

Auditing Differentially Private Machine Learning: How Private is Private SGD? | Matthew Jagielski | Aug 4, 2020

Auditing Differentially Private Machine Learning: How Private is Private SGD?

Matthew Jagielski, Northeastern University

Tuesday, Aug 4, 2020 | 9:00 AM PDT | remote talk

Abstract

We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against data poisoning, our use as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms that we believe has the potential to complement and influence analytical work on differential privacy.
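The quantitative flavor of such auditing can be sketched as follows: (ε, δ)-DP bounds the false-positive and false-negative rates of any attack that distinguishes two adjacent training sets, so a measured attack yields an empirical lower bound on the true ε. This is my own simplified rendering of that standard conversion; function and variable names are mine.

```python
# Sketch: convert a distinguishing attack's measured error rates into an
# empirical lower bound on epsilon. (eps, delta)-DP implies
#   1 - FNR <= e^eps * FPR + delta   (and symmetrically with FPR/FNR
# swapped), so solving for eps gives a lower bound.
import math

def empirical_eps_lower_bound(fpr: float, fnr: float, delta: float = 0.0) -> float:
    bounds = []
    if fpr > 0:
        bounds.append(math.log((1 - delta - fnr) / fpr))
    if fnr > 0:
        bounds.append(math.log((1 - delta - fpr) / fnr))
    return max(bounds) if bounds else float("inf")
```

For instance, an attack with 1% false positives and 1% false negatives (with δ = 0) certifies that ε is at least log(0.99/0.01) ≈ 4.6, whatever the analytical guarantee claims.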

Biography

Matthew Jagielski is a 4th year PhD student at Northeastern University, advised by Cristina Nita-Rotaru and Alina Oprea. He works at the intersection between machine learning, security, and privacy, with work appearing at venues including USENIX Security, IEEE S&P, and ICML. He is also an avid runner, biker, and swimmer.

Analyzing the Leaky Cauldron via Membership Inference Attacks | Bargav Jayaraman | July 28, 2020

Analyzing the Leaky Cauldron via Membership Inference Attacks

Bargav Jayaraman, University of Virginia

Tuesday, July 28, 2020 | 9:10 AM PDT | remote talk

Abstract

Machine learning models can leak sensitive information about the training data, ranging from revealing the presence or absence of a particular individual to leaking latent properties of the training data set. Differential privacy provides strong theoretical guarantees that bound the information leaked by a model, and can be achieved by adding randomized noise during the training process. The noise magnitude is controlled by the privacy loss budget parameter, such that smaller privacy loss budgets lead to higher privacy but at a potential loss of model utility. In practice, implementations tend to use high privacy loss budgets to obtain useful models. I’ll first report on experiments we have done using membership inference attacks to measure the privacy leakage of various differentially private training methods. Next, I will talk about how we improve on prior membership inference attacks with a general method to adaptively select the confidence threshold for deciding whether a candidate is a member. Finally, I will discuss more realistic scenarios where the prior sampling probability is imbalanced and report evaluation results in such scenarios.
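A confidence-threshold membership inference attack of the kind being improved here can be sketched in a few lines. This is a generic illustration on shadow-model scores, not the paper's exact selection procedure; the `prior` argument mirrors the imbalanced-prior setting mentioned above.

```python
import numpy as np

def membership_attack(confidences: np.ndarray, threshold: float) -> np.ndarray:
    # Predict "member" whenever the model's confidence on the true label
    # meets the threshold.
    return confidences >= threshold

def choose_threshold(member_conf: np.ndarray,
                     nonmember_conf: np.ndarray,
                     prior: float = 0.5) -> float:
    # Pick the threshold maximizing expected attack accuracy on shadow
    # data, weighted by the assumed prior probability that a candidate
    # is a member.
    best_t, best_acc = 0.0, -1.0
    for t in np.concatenate([member_conf, nonmember_conf]):
        tpr = (member_conf >= t).mean()      # true-positive rate at t
        fpr = (nonmember_conf >= t).mean()   # false-positive rate at t
        acc = prior * tpr + (1 - prior) * (1 - fpr)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

With an imbalanced prior (say, 10% members), the same procedure tends to select a more conservative threshold, since false positives are weighted more heavily.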

Biography

Bargav Jayaraman is a PhD student at the Department of Computer Science, University of Virginia. He works in the area of privacy preserving machine learning and is advised by Professor David Evans. His dissertation focuses on evaluating privacy leakage of differentially private machine learning.

Silent OT Extension and More via Pseudorandom Correlation Generators | Lisa Kohl | July 21, 2020

Silent OT Extension and More via Pseudorandom Correlation Generators

Lisa Kohl, Technion

Tuesday, July 21, 2020 | 9:00 AM PDT | remote

Abstract

Secure multiparty computation can often utilize a trusted source of correlated randomness, such as random oblivious transfer (OT), to achieve better asymptotic and concrete efficiency. A natural tool to securely realize this source of randomness with only a small amount of communication is a pseudorandom correlation generator (PCG). A PCG is a deterministic function that stretches short correlated seeds into long instances of the target correlation, thereby allowing two or more parties to generate correlated randomness using a cheap, one-time interaction followed by only “silent” local computation.

In this talk, I will introduce the concept of PCGs and show how to construct a PCG for OT based on the Learning Parity with Noise (LPN) assumption. I will further outline how this approach yields the first efficient protocol for generating n random OTs with o(n) communication and the first concretely efficient 2-round OT extension protocol of any kind. Finally, I will give a brief outlook on how to obtain efficient PCGs for more general correlations such as oblivious linear evaluations over large fields from a variant of the LPN assumption.

Based on joint works with Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, Peter Rindal and Peter Scholl.

Biography

Lisa Kohl is currently a postdoctoral researcher at Technion hosted by Yuval Ishai. In 2019 she completed her PhD at Karlsruhe Institute of Technology under the supervision of Dennis Hofheinz. During her PhD she spent eight months in the FACT center at IDC Herzliya for a research visit with Elette Boyle. She wrote her master’s thesis in the Cryptology Group at CWI Amsterdam headed by Ronald Cramer. Her research interests lie in the foundations of cryptography with a particular focus on exploring new directions in secure computation.

Security and noise growth in homomorphic encryption | Rachel Player | July 14, 2020

Security and noise growth in homomorphic encryption

Rachel Player, Royal Holloway, University of London

Tuesday, July 14, 2020 | 9:00 AM PDT | remote talk

Abstract

In November 2018, the HomomorphicEncryption.org consortium published the Homomorphic Encryption Security Standard. The Standard recommends several sets of Learning with Errors (LWE) parameters that can be selected by application developers to achieve a target security level. However, these parameter sets do not always reflect implementation choices. For example, all known implementations for bootstrapping for the CKKS, BFV and BGV schemes use a sparse secret and a large power-of-two ring dimension.

In the first part of the talk, we contextualise the selection of parameters for security in LWE-based cryptographic schemes by illustrating choices made in the ongoing NIST post-quantum standardisation process. We then narrow our focus to the homomorphic encryption setting and explore the security of sparse-secret LWE parameter sets, taking into account hybrid attacks.

In the second part of the talk, we turn our attention to noise growth in homomorphic encryption, which is essential for choosing parameters to ensure correctness and optimal performance. We evaluate the effectiveness of heuristic worst-case noise analyses in homomorphic encryption and find that this inherently leads to a gap between the heuristic estimate of the noise and the observed noise in practice. We also report on an updated comparison of the BGV and FV schemes and find that FV slightly outperforms BGV, even for large plaintext moduli, well beyond the crossover point reported in prior work of Costache and Smart.

The talk draws on the following joint works:

Martin R. Albrecht, Benjamin R. Curtis, Amit Deo, Alex Davidson, Rachel Player, Eamonn W. Postlethwaite, Fernando Virdia, Thomas Wunderer. Estimate all the {LWE,NTRU} schemes! SCN 2018.

Benjamin R. Curtis, Rachel Player. On the Feasibility and Impact of Standardising Sparse-secret LWE Parameter Sets for Homomorphic Encryption. WAHC@CCS 2019.

Anamaria Costache, Kim Laine, Rachel Player. Evaluating the effectiveness of heuristic worst-case noise analysis in FHE. To appear at ESORICS 2020.

Biography

Dr Rachel Player is a lecturer in the Information Security Group at Royal Holloway, University of London. Previously, she was a postdoc in the same group, and even before that she was a postdoc at LIP6 in Paris. Rachel’s main research interests are in post-quantum cryptography, especially in the design and cryptanalysis of lattice-based cryptographic schemes. Rachel is interested in Privacy Enhancing Technologies and she has worked extensively in the area of applied homomorphic encryption. In 2019, Rachel was honoured by ISRG with the Radiant Award for Advancing Internet Security. In Spring 2016, Rachel was an intern in the Cryptography group at MSR where she contributed to Microsoft SEAL. More information can be found at: https://rachelplayer.github.io

Reproducible Codes and Cryptographic Applications | Edoardo Persichetti | August 14, 2019

Reproducible Codes and Cryptographic Applications

Edoardo Persichetti, Florida Atlantic University

Wednesday, August 14, 2019 | 1:00 PM PDT | 99/1927 von Neumann

Abstract

In this talk I will present a work in progress on structured linear block codes. The investigation starts from well-known examples and generalizes them to a wide class of codes that we call reproducible codes. These codes have the property that they can be entirely generated from a small number of signature vectors, and consequently admit matrices that can be described in a very compact way. I will show some cryptographic applications of this class of codes and explain why the general framework introduced may pave the way for future developments of code-based cryptography.

Biography

Edoardo Persichetti is currently an Assistant Professor in the Department of Mathematical Sciences at Florida Atlantic University. Before moving to Florida, he was a Postdoc (Adiunkt Naukowy) in the Cryptography and Data Security Group at Warsaw University in Poland. He completed his PhD in Mathematics in late 2012 at the University of Auckland, New Zealand, under the supervision of Steven Galbraith.

His research interests are public-key cryptography (post-quantum, provable security) and number theory (mainly coding theory), with a particular focus on code-based cryptography.

Dr. Persichetti is a co-author of 4 submissions to NIST’s Post-Quantum Standardization process, and participates actively in the community, serving regularly as a peer reviewer for many internationally recognized conferences and journals.

Fully Homomorphic NIZK and NIWI Proofs | Apoorvaa Deshpande | August 6, 2019

Fully Homomorphic NIZK and NIWI Proofs

Apoorvaa Deshpande, Brown University

Tuesday, August 6, 2019 | 1:15 PM PDT | 99/1927 von Neumann

Abstract

In this work, we define and construct fully homomorphic non-interactive zero-knowledge (FH-NIZK) and non-interactive witness-indistinguishable (FH-NIWI) proof systems. Specifically, given non-interactive zero-knowledge (NIZK) or witness-indistinguishable (NIWI) proofs for different statements, we give a framework to homomorphically evaluate over them and compute proofs for new, arbitrarily inferred statements. Alongside soundness and ZK or WI (respectively), the security guarantee is unlinkability: an inferred proof should be indistinguishable from a fresh proof for the combined statement.

Our first result, under the Decision Linear Assumption (DLIN), is a fully homomorphic NIZK proof system in the common random string model. Our more surprising second result (under a new decisional assumption on groups with bilinear maps) is a fully homomorphic NIWI proof system that requires no setup. [Joint work with Prabhanjan Ananth, Yael Kalai and Anna Lysyanskaya]

Biography

Apoorvaa Deshpande is a PhD student at Brown University, advised by Prof. Anna Lysyanskaya. Her research centers on zero-knowledge proofs and their applications to the blockchain space.

Resilience and Vulnerability of Ring-LWE Cryptosystems to Leakage | Dana Dachman-Soled | June 27, 2019

Resilience and Vulnerability of Ring-LWE Cryptosystems to Leakage

Dana Dachman-Soled, University of Maryland

Thursday, June 27, 2019 | 10:30 AM PDT | 99/1915 Hopper

Abstract

One of the foremost avenues for viable post-quantum cryptography is lattice-based cryptography, which primarily involves cryptosystems based on (variants of) the learning with errors (LWE) assumption. While formal leakage resilience guarantees have been well-studied for LWE-based cryptosystems since 2009, the techniques do not carry over to the more efficient, Ring-LWE setting, which is the preferred setting in practice.

In this talk, we will describe information-theoretic and computational approaches for establishing the leakage resilience of Ring-LWE-based cryptosystems. We also present a type of leakage attack known as a partial key exposure attack on Ring-LWE. This attack exploits the algebraic structure of the Ring-LWE setting and is not applicable in the standard LWE setting.

Based on joint work with: Huijing Gong, Mukul Kulkarni and Aria Shahverdi

Biography

Dana Dachman-Soled is an assistant professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park and is affiliated with the Maryland Cybersecurity Center. She is the recipient of an NSF CAREER award, a Ralph E. Powe Junior Faculty Enhancement award, and a University of Maryland Research and Scholarship (RASA) award. Prior to joining University of Maryland, Dana spent two years as a postdoc at Microsoft Research New England. Before that, she completed her PhD at Columbia University.

PASTA: PASsword-based Threshold Authentication | Peihan Miao | June 24, 2019

PASTA: PASsword-based Threshold Authentication

Peihan Miao, UC Berkeley

Monday, June 24, 2019 | 11:00 AM PDT | 99/1927 von Neumann

Abstract

We introduce and formalize a new notion of password-based threshold token authentication, which protects password-based authentication against single points of failure. Specifically, we distribute the role of a single server among n servers and allow any t servers to collectively verify clients’ passwords and generate tokens, while no t-1 servers can forge a valid token or mount offline dictionary attacks. We then introduce PASTA, a general framework wherein clients can sign on using a two-round (optimal) protocol that meets our strong security guarantees.

Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, i.e. compared to a naive single-server solution, is extremely low (1-5%) in the most likely setting, where client and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds) due to public-key operations in PASTA. We show, however, that this cost is inherent by proving a symmetric-key-only solution impossible.
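The t-of-n structure underlying such threshold schemes can be illustrated with plain Shamir secret sharing. This is a standard textbook building block, not PASTA's actual threshold token generation, which is built from threshold MACs and signatures.

```python
import secrets

P = 2**61 - 1  # Mersenne prime defining the toy sharing field

def share(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with the secret as constant term;
    # share i is the polynomial evaluated at x = i.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

Any t shares determine the degree-(t-1) polynomial and hence the secret, while t-1 shares reveal nothing; this is the property that threshold token generation lifts from key storage to authentication.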

Based on joint work with Shashank Agrawal, Payman Mohassel, and Pratyay Mukherjee: https://eprint.iacr.org/2018/885.pdf.

Biography

Peihan Miao recently received her PhD in Cryptography from UC Berkeley (advised by Prof. Sanjam Garg) and will join Visa Research as a research scientist in July. She is primarily interested in the area of secure computation. Her research focuses on building cryptographic tools for achieving optimal computation, communication, and round complexity in secure computation, aiming to bridge the gap between theory and practice.

Pathfinding in isogeny graphs and computing endomorphism rings | Travis Morrison | June 21, 2019

Pathfinding in isogeny graphs and computing endomorphism rings

Travis Morrison, University of Waterloo

Friday, June 21, 2019 | 1:30 PM PDT | 99/1915 Hopper

Abstract

A large enough quantum computer could run Shor’s algorithm, rendering elliptic curve cryptography insecure. To prepare for a “post-quantum” world, NIST is currently administering a process to standardize one or more cryptographic protocols which should be secure against an attacker with a quantum computer. One submission, SIKE (based on Jao and de Feo’s SIDH) relies on the hardness of pathfinding in supersingular isogeny graphs as an assumption for its security. Supersingular isogeny graphs first appeared in cryptography in the work of Charles, Goren, and Lauter, who used these graphs to build a cryptographic hash function. There is a rich interplay between the geometry of isogenies and the arithmetic of endomorphism rings of elliptic curves, as shown in work of Waterhouse and Kohel, suggesting that one avenue for solving the pathfinding problem is to compute supersingular endomorphism rings.

In this talk, I will discuss heuristic reductions between the pathfinding and endomorphism ring problems. Then, I will discuss recent work with Eisentraeger, Hallgren, Leonardi and Park in which we give several new algorithms for computing supersingular endomorphism rings. I will also discuss to what extent these algorithms may impact the security of SIDH or the CGL isogeny hash.

Biography

Travis Morrison is a post-doctoral fellow working with David Jao and Michele Mosca in the Institute for Quantum Computing at the University of Waterloo. He received his B.A. at the University of California, Santa Cruz. During the summer of 2017, he was an intern at Microsoft Research and was mentored by Kristin Lauter. He received his PhD from the Pennsylvania State University under the supervision of Kirsten Eisentraeger.

Multidimensional scalar point multiplication algorithms | Koray Karabina | June 18, 2019

Multidimensional scalar point multiplication algorithms

Koray Karabina, Florida Atlantic University

Tuesday, June 18, 2019 | 1:15 PM PDT | 99/1927 von Neumann

Abstract

Elliptic curve groups have been a popular choice in the implementation of traditional and post-quantum cryptographic schemes, including Diffie-Hellman type key-exchange protocols and digital signature algorithms. While some of these applications require multidimensional scalar point multiplication (d-Mul), others enjoy extra speed-ups from utilizing d-Mul with precomputation or efficiently computable endomorphisms. When the underlying scalars in d-Mul are secret, it is crucial that the implementation follow a regular pattern of operations, because otherwise the scalars may be recovered through side-channel attacks. Therefore, it has been of great interest to design efficient and secure scalar multiplication algorithms. In this talk, I will give a survey of d-Mul algorithms together with some recent theoretical and implementation results.
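The "regular pattern" requirement can be illustrated with the classic double-and-add-always ladder on a toy short-Weierstrass curve. The curve parameters below are arbitrary illustrative choices; real implementations also need constant-time field arithmetic and selection, and d-Mul generalizes this idea to several scalars processed jointly.

```python
# Toy curve y^2 = x^3 + 2x + 3 over GF(97); the point at infinity is None.
P_MOD, A = 97, 2

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # Fermat inversion

def add(p, q):
    # Standard affine addition/doubling formulas.
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P_MOD == 0:
        return None                                   # p + (-p) = O
    if p == q:
        lam = (3 * p[0] * p[0] + A) * inv(2 * p[1]) % P_MOD
    else:
        lam = (q[1] - p[1]) * inv(q[0] - p[0]) % P_MOD
    x = (lam * lam - p[0] - q[0]) % P_MOD
    return (x, (lam * (p[0] - x) - p[1]) % P_MOD)

def regular_mul(k, pt, bits=8):
    # One doubling AND one addition per scalar bit, whatever the bit is,
    # so the sequence of curve operations does not depend on the secret k.
    r = None
    for i in reversed(range(bits)):
        r = add(r, r)                  # always double
        t = add(r, pt)                 # always add
        r = t if (k >> i) & 1 else r   # (real code: constant-time select)
    return r
```

A naive double-and-add skips the addition on zero bits, leaking the scalar's bit pattern through timing or power traces; the always-add variant trades extra work for a key-independent operation trace.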

Biography

Koray Karabina’s research interests are in the areas of design of new cryptographic primitives and algorithms, efficient implementation, and cryptanalysis. He received his Ph.D. degree in Combinatorics and Optimization from the University of Waterloo (UW) in 2010. Koray is currently working as an associate professor in the Department of Mathematical Sciences at Florida Atlantic University (FAU), and he is a member of the Center for Cryptology and Information Security at FAU. Koray’s current research in biometrics is supported by the National Science Foundation, and his research in elliptic curve cryptography is supported by the Army Research Office and the National Institute of Standards and Technology.

SIKE in Hardware for IoT | Reza Azarderakhsh | June 17, 2019

SIKE in Hardware for IoT

Reza Azarderakhsh, Florida Atlantic University

Monday, June 17, 2019 | 3:00 PM PDT | 99/1915 Hopper

Abstract

Supersingular Isogeny Key Encapsulation (SIKE) is one of the candidates submitted to NIST’s post-quantum cryptography standardization process. SIKE is the only key encapsulation mechanism based on isogenies of elliptic curves submitted to the process, and it offers the smallest key size among the candidates. In this talk, we will start with the basics of isogeny-based cryptography and the underlying finite field arithmetic, and describe the SIKE algorithm. We will discuss hardware architectures for secret kernel computations for isogeny maps based on various curves. We will also give a higher-level arithmetic overview of the isogeny evaluations and isogeny computations required in SIKE. More specifically, we will discuss the isogeny graphs for both Alice and Bob and show how they walk on supersingular isogeny graphs and end up with a shared secret. We will also present hardware architectures and illustrate how higher-level protocols can be translated into RTL and programmed onto FPGAs. We will provide performance and area usage results and compare them to those of other quantum-safe candidates on similar platforms. Finally, we will discuss applications in the IoT world and the feasibility of SIKE there.

Biography

Dr. Reza Azarderakhsh is an assistant professor and I-SENSE Fellow in the Department of Computer Science and Engineering at Florida Atlantic University. He received his PhD in Computer Engineering from Western University, Canada. After that, he was a post-doctoral research fellow at the Centre for Applied Cryptographic Research in the Department of Combinatorics and Optimization at the University of Waterloo, with which he is currently affiliated as a supervisory member of the CryptoWorks21 program. Dr. Azarderakhsh serves as an Associate Editor of IEEE Transactions on Circuits and Systems (TCAS-I), cryptographic engineering track. He is an author or coauthor of more than 80 journal and conference papers in the area of applied cryptography and cryptographic engineering. His research has been supported by NSF, NIST, ARO, the Navy, and Texas Instruments, among others. He has developed several algorithms and architectures for classical and post-quantum cryptography, including elliptic curve cryptography, isogeny-based cryptography, and lattice-based cryptography. He was a recipient of the prestigious NSERC Post-Doctoral Research Fellowship and the Texas Instruments Faculty Award.

Homomorphic Sorting with Better Scalability | Gizem Cetin | June 13, 2019

Homomorphic Sorting with Better Scalability

Gizem Cetin, WPI

Thursday, June 13, 2019 | 10:30 AM PDT | 99/1927 von Neumann

Abstract

Homomorphic sorting is an operation that blindly sorts a given set of encrypted numbers without decrypting them (thus, there is no need for the secret key). In this work, we propose a new, efficient, and scalable method for homomorphic sorting of numbers: the polynomial rank sort algorithm. To put the new algorithm in a comparative perspective, we provide an extensive survey of classical sorting algorithms and networks that are not directly suitable for homomorphic computation. We also include in our discussion two of our previous algorithms specifically designed for the homomorphic sorting operation, direct sort and greedy sort, and explain how they evolve from classical sorting networks. We theoretically show that the new algorithm is superior in multiplicative depth to all the other algorithms. When a batched implementation is used, the number of comparisons is reduced from quadratic to linear in the set size, provided that the number of slots is larger than or equal to the number of elements in the set. Our software implementation results confirm that the new algorithm is several orders of magnitude faster than many methods in the literature. The polynomial rank sort algorithm also scales better than the fastest algorithm in the literature, to the best of our knowledge, although for small sets the execution times are comparable. The proposed algorithm is amenable to parallel implementation, as the most time-consuming operations in the algorithm can naturally be performed concurrently.
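In the clear, the rank-based idea behind such algorithms looks like this: each element's output position is the number of elements smaller than it, with ties broken by input index. Homomorphically, each comparison becomes an encrypted bit and the ranks are sums of those bits, which keeps the multiplicative depth shallow. This plaintext sketch only conveys the structure, not the FHE circuit itself.

```python
def rank_sort(values):
    # The "rank" of each element is the number of elements less than it,
    # with ties broken by input index so ranks form a permutation.
    n = len(values)
    out = [None] * n
    for i, v in enumerate(values):
        rank = sum(
            1 for j, w in enumerate(values)
            if w < v or (w == v and j < i)
        )
        out[rank] = v
    return out

print(rank_sort([3, 1, 2, 1]))  # [1, 1, 2, 3]
```

All n*(n-1) comparisons are independent of each other, which is why the comparison layer parallelizes (and batches) so naturally.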

Biography

Gizem S. Cetin completed her BS (’12) and MS (’14) degrees in Computer Science and Engineering at Sabanci University. She was a member of Vernam Lab and recently received her PhD from the ECE Department at Worcester Polytechnic Institute. She spent Summer 2016 as a research intern with the Cryptography Group at Microsoft Research in Redmond, WA. Gizem’s research interests include the software design of cryptosystems and somewhat, leveled, and fully homomorphic encryption, with a special focus on designing efficient algorithms for FHE/SWHE applications and their software implementations.

Lattice Attacks for Variants of LWE | Shi Bai | June 4, 2019

Lattice Attacks for Variants of LWE

Shi Bai, Florida Atlantic University

Tuesday, June 4, 2019 | 1:15 PM PDT | 99/1927 von Neumann

Abstract

The learning with errors (LWE) problem introduced by Regev (STOC ’05) is one of the fundamental problems in lattice-based cryptography. It has been used extensively as a security foundation for public-key encryption, signatures, fully homomorphic encryption (FHE), pseudorandom functions (PRFs), and many other primitives. One standard strategy for solving the LWE problem is to reduce it to a unique SVP (uSVP) problem via Kannan’s embedding and then apply lattice reduction to solve the uSVP instance. In this talk, we will discuss and compare various lattice algorithms for solving LWE, and then give some concrete estimates for breaking various variants of LWE (e.g. generic, small secrets, restricted samples). In the end, we will discuss some recent developments in algorithms for solving LWE.
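The reduction to uSVP can be made concrete with a small numerical example of Kannan's embedding. This is simplified: the rows below are a generating set of the embedding lattice rather than a reduced basis, and an actual attack would run BKZ on it to recover the short vector.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 4, 8

# A small LWE instance: b = A s + e mod q, with small error e.
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)
b = (A @ s + e) % q

M = 1  # Kannan's embedding factor
# Generating set (rows) of the embedding lattice; a true basis would be
# its Hermite normal form, omitted here for brevity.
rows = np.vstack([
    np.hstack([q * np.eye(m, dtype=int), np.zeros((m, 1), dtype=int)]),
    np.hstack([A.T, np.zeros((n, 1), dtype=int)]),
    np.hstack([b.reshape(1, -1), [[M]]]),
])

# The unusually short vector (e | M) lies in this lattice: it equals
# q*z - s.A^T + (b | M) for the integer vector z below.
z = (A @ s + e - b) // q
coeffs = np.concatenate([z, -s, [1]])
short = coeffs @ rows
assert np.array_equal(short, np.concatenate([e, [M]]))
```

Lattice reduction run on these rows would recover (e | M) as the exceptionally short vector, after which the secret s follows by linear algebra on b − e.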

Biography

Shi Bai is an assistant professor in the Department of Mathematical Sciences at Florida Atlantic University. His research fields are cryptography and computational number theory. He received his PhD in Computer Science from the Australian National University in 2012. He worked as a postdoctoral researcher at the University of Auckland (2013-2014) and then at the École Normale Supérieure de Lyon (2015-2016). He is currently interested in number-theoretic algorithms in the cryptanalysis of lattice-based cryptography.

Towards Developer-Centered Cryptographic Ecosystems | Sascha Fahl | May 23, 2019

Towards Developer-Centered Cryptographic Ecosystems

Sascha Fahl, Leibniz University Hannover

Thursday, May 23, 2019 | 2:00 PM PDT | 99/1915 Hopper

Abstract

Potentially dangerous cryptography errors are well documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable.

In this talk, I will present some of my team’s work that addresses specific challenges around making cryptography easier to use for developers. I will discuss multiple studies with developers that focused on different aspects of cryptographic ecosystems, including API usability, the use of documentation, and a study on supporting developers inside their IDE.

Biography

Sascha Fahl is a Professor for Usable Security and Privacy at Leibniz University Hannover in Germany. Previously, he was a Professor at Ruhr University Bochum, Germany, held the chair for Information Security at Leibniz University Hannover, and was an independent research group leader at CISPA, Saarland University. Prof. Fahl studied Computer Science at Philipps University Marburg and received a Ph.D. in Computer Science. He has worked with the Chrome Security team and was a researcher at Fraunhofer FKIE in Bonn. His research won NSA’s Best Scientific Cybersecurity Paper Competition, he received a Google Faculty Research Award, and he is a recipient of the Heinz Maier-Leibnitz Prize 2018.

Efficient Lattice Trapdoor Sampling and Its Applications | Yuriy Polyakov | April 23, 2019

Efficient Lattice Trapdoor Sampling and Its Applications

Yuriy Polyakov, NJIT

Tuesday, April 23, 2019 | 1:00 PM PDT | 99/1927 von Neumann

Abstract

Lattice trapdoors are used in a wide array of lattice-based cryptographic primitives, including identity-based encryption, attribute-based encryption, functional encryption, program obfuscation, and many others. The main computational bottleneck for many of these primitives is lattice trapdoor sampling, which is the key reason many of these constructions have been impractical. This talk presents efficient Residue Number System (RNS) algorithms for lattice trapdoor sampling and discusses their implementation in the PALISADE library. Our state-of-the-art performance results for several applications based on these algorithms are also presented, including Gentry-Peikert-Vaikuntanathan digital signatures and identity-based encryption, ciphertext-policy and key-policy attribute-based encryption, and obfuscation of conjunction and general branching programs. Our results suggest that some of these implementations are already practical.

Biography

Yuriy Polyakov is an Associate Research Professor at the NJIT Department of Computer Science & Cybersecurity Research Center. His primary research interests are applied lattice-based cryptography, fully homomorphic encryption, program obfuscation, secure genome analysis, and secure computer systems. He is a co-founder of and core contributor to the open-source PALISADE Lattice Cryptography Library. Prior to joining NJIT, he worked as a research scientist at MIT CSAIL and held several senior-level industrial positions. His prior research dealt with time series analysis using statistical physics methods and mathematical modeling of membrane technology processes. He received his M.S. in Computer Science from NJIT (2003), his PhD in Chemical Engineering from Moscow Polytechnic University (2004), and his D.Sc. (Dr. Habil.) in Physics & Mathematics from the Karpov Institute of Physical Chemistry (2007). He is a recipient of the Moscow Mayor’s Young Scientist Award (2005).

Strong Asymmetric PAKE based on Trapdoor CKEM | Tatiana Bradley | April 17, 2019

Strong Asymmetric PAKE based on Trapdoor CKEM

Tatiana Bradley, University of California, Irvine

Wednesday, April 17, 2019 | 11:00 AM – 12:00 PM PDT | 99/1915 Hopper

Abstract

Password-Authenticated Key Exchange (PAKE) protocols allow two parties that share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) adapts this notion to the common client-server setting, where the server stores a one-way hash of the password instead of the password itself, and server compromise allows the adversary to recover the password only via the (inevitable) offline dictionary attack. Most aPAKE protocols, however, allow an attacker to pre-compute a dictionary of hashes, thus instantly learning the user password on server compromise.

Recently, Jarecki, Krawczyk, and Xu formalized a (Universally Composable) strong aPAKE (saPAKE) notion, which requires the password hash to be salted so that all dictionary attack computation must happen after the server compromise leaks the salt and the salted hash. They propose a protocol called OPAQUE, which requires 3 rounds, 3-4 exponentiations per party, and an interactive One-More Diffie-Hellman assumption in the Random Oracle Model (ROM) for security.

We propose an alternative UC strong asymmetric PAKE construction based on a novel use of the encryption+SPHF paradigm for UC PAKE design. Though it also uses ROM, our protocol compares favorably to OPAQUE: it has only two rounds, uses fewer exponentiations, and is based on different assumptions: q-Strong Diffie-Hellman (q-SDH), and the assumptions required to show that the Boneh-Boyen signature function is a Salted Tight One-Way Function (STOWF).

We formalize a UC model for STOWF and analyze the Boneh-Boyen function as UC STOWF in the generic group model (GGM) and ROM. Our saPAKE also uses an implicit-statement trapdoor Conditional Key Encapsulation Mechanism (CKEM), a generalization of SPHF, which forms an extractable commitment to the hashed statement in ROM. This strengthening of SPHF allows for a UC (sa)PAKE design where only the client commits to its password, and only the server performs an SPHF, compared to the standard UC PAKE design paradigm where the encrypt+SPHF subroutine is used symmetrically by both parties.

This talk is based on joint work with Stanislaw Jarecki and Jiayu Xu that is currently under review at CRYPTO 2019.

Biography

Tatiana Bradley is a PhD candidate in Computer Science at UC Irvine. She received a Bachelor’s degree in Mathematics from Scripps College in 2015. Her research focus is applied cryptography, and, since joining Stanislaw Jarecki’s group at UCI, she has concentrated on designing password-authenticated protocols. She has interned at IBM Research in Zurich, and will be interning at Google this summer.

Fully Bideniable Interactive Encryption | Oxana Poburinnaya | Mar 19, 2019

Fully Bideniable Interactive Encryption

Oxana Poburinnaya, Boston University

Tuesday, March 19, 2019 | 1:30 PM – 2:45 PM PDT | 99/1919 Turing

Abstract

While standard encryption guarantees secrecy of the encrypted plaintext only against an attacker that has no knowledge of the communicating parties’ keys and encryption randomness, deniable encryption [Canetti et al., Crypto ’96] provides the additional guarantee that the plaintext remains secret even in the face of entities that attempt to coerce (or bribe) the communicating parties to expose their internal states, including the plaintexts, keys, and randomness. To achieve this guarantee, deniable encryption equips the parties with faking algorithms that allow them to generate fake keys and randomness which make the ciphertext appear consistent with any plaintext of the parties’ choice. To date, however, only partial results were known: deniability against coercing only the sender, or against coercing only the receiver [Sahai-Waters, STOC ’14], or schemes satisfying weaker notions of deniability [O’Neill et al., Crypto ’11].

In this paper we present the first fully bideniable interactive encryption scheme, thus resolving the 20-year-old open problem. Our scheme also provides an additional, new guarantee: even if the sender claims that one plaintext was used and the receiver claims a different one, the adversary has no way of figuring out who is lying — the sender, the receiver, or both. This property, which we call off-the-record deniability, is useful when the parties don’t have the means to agree on what fake plaintext to claim, or when one party defects against the other. Our protocol has three messages, which is optimal [Bendlin et al., Asiacrypt ’11], and needs a globally available reference string. We assume subexponential indistinguishability obfuscation (IO) and one-way functions.

Biography

Oxana is a 6th year PhD student at Boston University, working in theoretical cryptography with Professor Ran Canetti. Prior to joining BU, she studied mathematics at Moscow State University.

Videos