Neural Representations for Program Analysis and Synthesis
- Chang Liu | UC Berkeley
Representing a program as a numerical vector (i.e., a neural representation) enables handling discrete programs with continuous optimization approaches, and thus opens up new opportunities to tackle many traditionally challenging programming language problems. In this talk, I will introduce two directions of my research in this area. First, I have been working on synthesizing programs from specifications in different formats, such as natural language descriptions, input-output examples, and programs in a different language. I will present one of my latest results on synthesizing a parsing program from input-output examples, a task that has proven difficult for existing program synthesis approaches. To tackle this problem, we propose a framework that learns a neural program operating a domain-specific, non-differentiable machine, along with a novel reinforcement learning-based algorithm that recovers the execution trace for each input-output example. Our approach achieves 100% accuracy on test inputs that are 100x longer than the training inputs.
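To make the task concrete, here is a minimal sketch of the kind of input-output specification such a synthesizer consumes. The domain here (fully parenthesized arithmetic expressions) and the reference parser are hypothetical illustrations, not the talk's actual domain-specific machine: each example pairs a flat token sequence with its nested parse tree, and the synthesized program must generalize to inputs far longer than any seen in training.

```python
# Toy illustration of input-output examples for parsing synthesis.
# (Hypothetical domain; the synthesizer would only see the (input, output)
# pairs, never this hand-written reference parser.)

def parse(tokens):
    """Reference parser for fully parenthesized arithmetic expressions.
    Returns a nested tuple such as ('*', ('+', '1', '2'), '3')."""
    def expr(i):
        if tokens[i] == '(':
            left, i = expr(i + 1)       # sub-expression after '('
            op = tokens[i]              # operator token
            right, i = expr(i + 1)
            assert tokens[i] == ')'     # matching close paren
            return (op, left, right), i + 1
        return tokens[i], i + 1         # single number token
    tree, i = expr(0)
    assert i == len(tokens)             # all tokens consumed
    return tree

# Input-output examples: flat token lists in, parse trees out.
examples = [
    (list("1"), "1"),
    (list("(1+2)"), ('+', '1', '2')),
    (list("((1+2)*3)"), ('*', ('+', '1', '2'), '3')),
]
for tokens, tree in examples:
    assert parse(tokens) == tree
```

Recovering nested structure from flat sequences is what makes this hard for conventional example-driven synthesis: the output is a tree whose construction order (the execution trace) is not observable from the examples, which is where the reinforcement learning-based trace-recovery algorithm comes in.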
Second, I will introduce my work on using deep learning to solve the binary code similarity detection problem. Code similarity detection has many applications, such as plagiarism detection and vulnerability detection; however, I will show that whether two pieces of code are similar cannot be captured by one common criterion across all applications. My work is the first to demonstrate that deep learning can mitigate this issue, so that the same approach can be applied across different application domains; moreover, our learning-based approach outperforms the best heuristics-based approach by a large margin.
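The embedding-based idea behind such learned similarity can be sketched as follows. This is a minimal illustration under stated assumptions: each binary function is presumed already summarized as a numeric feature vector (e.g., statistics of its control-flow graph), and `embed`, the weight matrix `W`, and the feature values are untrained stand-ins for the learned embedding network, which the actual approach trains end-to-end on labeled pairs so that the similarity notion matches the target application.

```python
# Minimal sketch of embedding-based code similarity detection:
# map each function's features to a vector, then compare by cosine.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))  # untrained stand-in embedding weights

def embed(features):
    """Map a function's feature vector to a low-dimensional embedding."""
    return np.tanh(features @ W)

def similarity(f1, f2):
    """Cosine similarity of two embeddings; higher means more similar."""
    e1, e2 = embed(f1), embed(f2)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# Hypothetical feature vectors: the same source compiled twice should
# yield close features; an unrelated function yields different ones.
f_a  = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
f_a2 = f_a + rng.normal(0.0, 0.1, size=8)   # same source, other compiler
f_b  = rng.standard_normal(8) * 5.0         # unrelated function

print(similarity(f_a, f_a2))
print(similarity(f_a, f_b))
```

The appeal of this design is that the similarity criterion lives entirely in the learned embedding: retraining on pairs labeled for a different application (say, plagiarism rather than vulnerability search) changes what "similar" means without changing the pipeline.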
My research interest lies at the intersection of security, programming languages, and deep learning. I will briefly introduce my other efforts in this broad area and discuss potential future directions.
Speaker Details
Dr. Chang Liu is a postdoctoral researcher at UC Berkeley. His main research focus is at the intersection of deep learning, security, and programming languages. Dr. Liu's recent work applies deep learning to program analysis and program synthesis problems to achieve state-of-the-art results; he has also deepened the understanding of security and privacy issues in deep learning systems by studying adversarial examples and data poisoning attacks. Dr. Liu received his PhD from the Department of Computer Science at the University of Maryland, College Park, where he was advised by Michael Hicks and Elaine Shi. His PhD work broadly expanded the research direction of trace-oblivious computation, which has had significant impact on trusted hardware-based secure computation and cryptography-based secure multiparty computation. He is the recipient of the John Vlissides Award (2015) and the University of Maryland's Outstanding Early Graduate Student Award (2014). His papers have received several best paper awards, including the Best Paper Award at AISec 2017; first place, Best Paper Award in Applied Cyber Security Research, CSAW 2015; the Best Paper Award at ASPLOS 2015; and the NSA Best Scientific Cybersecurity Paper Award (2013). His ObliVM system won the HLI Award for Secure Multiparty Computation in the iDASH Secure Genomics Analysis Competition (2015).