Microsoft Research Blog

New Meta-learning Techniques for Neural Program Induction

December 7, 2017 | By Rishabh Singh, Researcher

Much recent research in AI focuses on extending the capabilities of deep learning architectures: moving beyond simple classification and pattern recognition into the realm of learning algorithmic tasks, such as inductive programming. Building on our past work in neural program synthesis for learning string transformations in a functional language, our most recent work explores the challenges of training neural architectures to induce programs in a more challenging imperative language (Karel) with complex control flow.

Specifically, we are working to address the difficulty of program induction when only a limited number of examples are available. Traditionally, neural program induction approaches rely on an extremely large number of input-output examples, often on the order of hundreds of thousands to millions, to achieve acceptable results. Part of our research was directed at developing new techniques that can be used with significantly fewer examples. These new techniques, which we term “portfolio adaptation” and “meta program induction,” rely on knowledge transfer from similar learning tasks to compensate for the scarcity of input/output examples.

We compared the performance of four such techniques in a benchmark test over a range of datasets. For the test case, we used the Karel programming language, which introduces complex control flow including conditionals and loops in addition to a set of imperative actions. The neural architectures were provided with a fixed number of example input/output pairs (ranging from 1 to 100,000), and subsequently asked to derive the correct outputs from a set of new inputs (by implicitly learning to induce the corresponding Karel program). The figure below shows a couple of program induction tasks for the Karel dataset that our meta model could learn accurately from only 2 to 4 examples.
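
To make this evaluation protocol concrete, below is a minimal sketch, in plain Python, of what such a benchmark loop might look like. The `model.condition_on`, `model.predict`, and `task.sample_io_pair` names are hypothetical stand-ins for illustration; they are not the code used in our experiments.

```python
# A minimal sketch of the few-shot induction benchmark described above.
# The `model.condition_on`, `model.predict`, and `task.sample_io_pair`
# interfaces are hypothetical stand-ins, not our actual experimental code.

def evaluate_induction(model, tasks, num_support, num_query):
    """Give the model `num_support` I/O example pairs per task, then score
    its predictions on `num_query` held-out inputs from the same task."""
    correct = total = 0
    for task in tasks:
        # Support set: the example pairs the model may condition on.
        support = [task.sample_io_pair() for _ in range(num_support)]
        model.condition_on(support)  # a no-op for the PLAIN baseline
        # Query set: fresh inputs whose outputs the model must predict by
        # implicitly inducing the underlying Karel program.
        for _ in range(num_query):
            grid_in, grid_out = task.sample_io_pair()
            correct += int(model.predict(grid_in) == grid_out)
            total += 1
    return correct / total
```

The pattern is the same for all four techniques compared below; only what `condition_on` and `predict` do internally differs between them.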

As detailed in our recently published paper, the results are quite promising: one of our meta induction techniques achieved an accuracy of nearly 40% with fewer than 10 examples (compared to 0% using the PLAIN example-driven technique alone).

The accompanying graph illustrates how the performance of each of the four models varies with the number of examples. As can be seen, the newer techniques perform significantly better when fewer examples are provided. While the results converge as the number of examples increases, we believe our newer techniques still hold an advantage in that they require fewer computing resources, and less processing time, than the traditional model.

The four techniques are summarized below.

  • Plain Program Induction (PLAIN). Supervised learning is used to train a model that can perform induction on a single task, i.e., read in an input example for the task and predict the corresponding output. No cross-task knowledge sharing is performed.
  • Portfolio-Adapted Program Induction (PLAIN+ADAPT). Simple transfer learning is used to adapt a model that has been trained on a related task to a new task.
  • Meta Program Induction (META). A k-shot learning-style model is used to represent an exponential family of tasks, where the network conditions directly on the I/O examples corresponding to a task as part of its input. This model can generalize to new tasks without any additional training (see the sketch after this list).
  • Adapted Meta Program Induction (META+ADAPT). The META model is adapted to a specific new task using round-robin hold-one-out training on the task’s I/O examples.
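
To illustrate the META idea of conditioning directly on the I/O examples, here is a minimal PyTorch sketch. It is purely illustrative: the flattened-grid representation, the layer sizes, and the `MetaInductionModel` name are our own simplifications for this post and do not reproduce the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class MetaInductionModel(nn.Module):
    """Minimal sketch of the META idea: the network conditions directly on
    the support I/O examples, so a new task needs no additional training.
    Grids are flattened to vectors here; all sizes are illustrative."""

    def __init__(self, grid_dim=256, hidden_dim=512):
        super().__init__()
        # Encodes one (input, output) example pair into a task feature.
        self.example_encoder = nn.Sequential(
            nn.Linear(2 * grid_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        # Maps (task embedding, query input) to a predicted output grid.
        self.predictor = nn.Sequential(
            nn.Linear(hidden_dim + grid_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, grid_dim))

    def forward(self, support_inputs, support_outputs, query_input):
        # support_inputs/outputs: (k, grid_dim); query_input: (grid_dim,)
        pairs = torch.cat([support_inputs, support_outputs], dim=-1)
        # Mean-pool over the k examples, so the model is order-invariant
        # and accepts a variable number of support examples.
        task_embedding = self.example_encoder(pairs).mean(dim=0)
        return self.predictor(
            torch.cat([task_embedding, query_input], dim=-1))
```

META+ADAPT then fine-tunes this same network on a new task’s own examples, with each pair taking a turn as the held-out query while the remaining pairs serve as support:

```python
# META+ADAPT (sketch): round-robin hold-one-out fine-tuning on a new task.
k, grid_dim, num_steps = 4, 256, 100          # toy sizes for illustration
inputs = torch.randn(k, grid_dim)             # the task's example inputs
outputs = torch.randn(k, grid_dim)            # and corresponding outputs
loss_fn = nn.MSELoss()                        # toy loss for this sketch

model = MetaInductionModel(grid_dim=grid_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(num_steps):
    i = step % k  # each example takes a turn as the held-out query
    support_in = torch.cat([inputs[:i], inputs[i + 1:]])
    support_out = torch.cat([outputs[:i], outputs[i + 1:]])
    loss = loss_fn(model(support_in, support_out, inputs[i]), outputs[i])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```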

Continued work in this area will refine these models and develop new techniques that enable neural networks to perform complex programming tasks, possibly in full-fledged programming languages, without the need for an extremely large number of examples.
