Episode 7, January 10, 2018
Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones
When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain’s Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work.
Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs.
Simon Peyton Jones: I like to put it like this: when the limestone of imperative programming has worn away, the granite of functional programming will be revealed underneath!
Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research, and the scientists behind it. I’m your host, Gretchen Huizinga.
When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain’s Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs.
That and much more on this episode of the Microsoft Research Podcast.
Host: Simon, welcome. You’re in the Programming Principles and Tools group at Microsoft Research in Cambridge. What do you spend most of your time doing there?
Simon Peyton Jones: Well, programming languages are the fundamental material out of which we build programs. When a builder builds a building, they can build out of bricks or out of straw or out of bananas or out of steel girders… And it makes a difference what you build out of, how ambitious your building can be and how likely it is to fall down. So, when developers write programs, the material that they use, the fabric of their programs – the programming language is super important to the robustness and longevity and reliability of their programs. So, programming language researchers study programming languages with the aim of building more robust building materials for developers to use.
Host: What role does research play in making good programming languages?
Simon Peyton Jones: Well, at first you might think that a programming language was – well, you just kind of throw it together. But actually, when you build a programming language, you want to be sure that you know what it means. That is to say, if you write a program you’d like it to be clear what the program means, what should happen when you execute it. That’s called its semantics. So, having a good way to specify in a rigorous way what that program means, what it does, is really important. So we need formalisms in which we can write down rigorously what a program means. And then we need to implement it. So, if we’re going to build a compiler that, say, translates a high-level language program into low-level machine code that’s going to run on your machine, you’d like to be confident that the compiler itself was correct. Right? That is, that it didn’t change the meaning of the program along the way. And it would do so consistently and reliably, day after day, on program after program after program. So, programming language research is about methods and tools and techniques and ideas and theories that will enable people to build programming language designs and implementations that will be robust. I wouldn’t say that programming languages tend to arise specifically from academics having clever ideas about what a language design might look like. They’re very often born in a much more random way, in the white heat of, “Oh, I just need to get something done!” And then retrospectively, programming language designers and researchers start to look closely at the design and try to improve it. So, there have been, you know, dozens and dozens of papers about JavaScript, for example. But JavaScript was not designed initially by an academic.
Host: I’ve seen your talks and you use some slides that show, sort of, the trajectory of a lot of different languages… You’ve suggested that there are hundreds of languages. Most of them share the fate of an early death, with only 1 or 2 mourners at the memorial service. And then there are some that just resonate and take off. What does it take to “make it big,” and is that something you should aim for?
Simon Peyton Jones: So, I think every computer scientist wants their language to be used. That’s one of the exciting things about working at Microsoft Research: there’s a real chance your stuff might get used and have impact. So, we all want to make the world a better place. In programming language research, I would say, though, that while everybody would aspire to have languages that have impact and are successful, it’s pretty random which ones are. The ones that are wildly successful are not necessarily the ones that are technically beautiful, or well-designed. They just hit some sweet spot at some particular moment. So, it’s a bit frustrating in a way. I think Haskell, the language that I’ve been involved in, has been quite successful, but it could easily not have been. There’s a lot of randomness in the process.
Host: You mentioned two giants in computer science – Alan Turing and Alonzo Church – who came up with ideas at about the same time that have had a big impact on programming languages in two different streams. I think you talked about declarative and imperative languages…
Simon Peyton Jones: Yeah.
Host: Can you talk about that for a minute?
Simon Peyton Jones: So, my entire research life, ever since I first got excited about functional programming when I was studying at Cambridge in 1979 or thereabouts, has been following through the idea of what purely functional programming might mean. And if you look back a long way, as you said, it does all date to Alonzo Church and Alan Turing, to pick just two giants from the literature. So, Turing asked, “What is computation? What does it mean to compute something?” And he designed this thing that we now call the Turing Machine, which was very much step-at-a-time: do this, do this. Read a thing from the tape. Write that onto the tape. It was a very imperative machine. Meanwhile, at the very same time, and actually in the same place, in Princeton, Alonzo Church was designing the Lambda Calculus, which seems a much more abstract, algebraic thing. It’s like rewriting expressions. And he discovered this tiny language in which expression rewrites could also, apparently, model computation. So then the obvious question was: is there anything you could compute with the Turing Machine that you couldn’t compute with the Lambda Calculus, or vice versa? And in the end, it turns out, very surprisingly, that these two notions of computation were the same. That is, anything you could do with the Turing Machine you could do with the Lambda Calculus and vice versa. But although they were equally powerful, in the sense of what you can, in principle, do, they gave rise to very different language streams. So, you could see Turing Machines (this is a bit of a retrospective justification) as the basis for all imperative languages, right? Do this, and then do that. Step-at-a-time computation, in which the program is a sequence of steps that you do, in sequence. The Lambda Calculus is then the grandmother of functional programming, in which a program executes by evaluation. You evaluate an expression. And these seem like completely different ways of thinking about your program. You have to think about programming in a completely different way. But nevertheless, they’re equally expressive. So, the interest for me has been: what would it mean to take this much less popular, but nevertheless universal, programming power, functional programming, and really push it through to see what it could mean in a practical way for writing practical programs?
Host: So, talk about the difference between functional programming languages and other programming languages…
Simon Peyton Jones: The imperative approach, step-at-a-time programming, is what everybody’s used to. It’s what C is like, Java is like, C++ is like, Python is like, Perl is like, Ruby is like… you know. You name it, they’re mostly imperative programming languages. Functional programming is very different. It’s more like this: everybody’s used a spreadsheet, and in a spreadsheet cell, you say, “Here is a formula that gives the value of a cell.” And you compute the value of a whole spreadsheet full of cells by computing each cell, perhaps one at a time, perhaps in parallel, but in data dependency order. If cell A1 depends on cell B3, you must compute cell B3 first, and then A1. But there’s no notion of “open a valve” or “launch the missiles” or “print something.” You can’t do that in the middle of a formula. It wouldn’t make sense. So, that’s functional programming, right? All of Excel’s built-in functions are functions. That is to say, they take some inputs and they produce some outputs. They have no side effects. And so, the surprising thing, really, is that this purely functional approach to programming is in fact universal. If you think about it in a spreadsheet way, you think, “Well that’s good if you’re writing business plans, maybe, or computing my bank balance. But it couldn’t do anything useful.” Could you write a word processor in a spreadsheet? Well, probably not, right? But the insight of functional programming, which stems right back to Church, is that this programming paradigm is universal. You can do anything. And so, functional programming language researchers have said, “Supposing we took that execution-by-evaluation idea and scaled it up? What would that mean?” And that’s what my whole research life has been about, really.
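To make the spreadsheet analogy concrete, here is a minimal Haskell sketch (the cell names and values are purely illustrative): each definition is a side-effect-free formula, and evaluation follows the data dependencies rather than a prescribed sequence of steps.

```haskell
-- Each definition is like a spreadsheet cell: a formula with no side effects.
-- The runtime is free to evaluate in any order consistent with the data
-- dependencies; here a1 depends on b3, so b3 is computed first.

b3 :: Double
b3 = 100 * 1.2        -- like cell B3

a1 :: Double
a1 = b3 + 42          -- like cell A1, which refers to B3

main :: IO ()
main = print a1       -- prints 162.0
```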
Host: Why did you get interested in that, I mean, at the very beginning?
Simon Peyton Jones: Because it’s like a radical and elegant attack on the entire enterprise of programming. Rather than just saying, “Well, let’s just try doing this a slightly different way,” it’s like saying, “Let’s just attack programming from a completely different direction.” Moreover, it’s very close to mathematics. The whole idea of the Lambda Calculus really grew out of logic, and there are very beautiful dualities between programming on the one hand, and logic on the other. It’s called the Curry-Howard Isomorphism. Let’s say I have a function whose type says it takes two integers and produces an integer. Well, that type tells you something about the program. So, in a sense, it’s a weak theorem about the program. It tells you something about the program, but not everything. And indeed, you could regard the program as a proof of that theorem. So, the idea of “types as theorems” and “programs as proofs” is a very deep connection between logic, on the one hand, and programming, on the other. And this duality is very immediate in functional programming. But it’s rather distant in imperative programming. So, I’ve tried to give you a sense of what got me excited about it. I just got excited about it because I thought, “It’s such a beautiful, simple, elegant way of thinking about the enterprise of programming. Let’s see if we can make it practical.”
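As a small illustration of the “types as theorems, programs as proofs” idea, here is a hedged Haskell sketch (the function names are just for this example): each type can be read as a weak theorem, and the definition that inhabits it as a proof.

```haskell
-- Theorem: from (A and B) we can conclude A.
-- Proof: the program that projects the first component of a pair.
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- Theorem: from A, and from (A implies B), we can conclude B (modus ponens).
-- Proof: apply the function to the argument.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x

main :: IO ()
main = print (modusPonens (3 :: Int) (+ 1))   -- prints 4
```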
Host: I love that. Now, how many people are in your – is camp the right word? Because you have people writing imperative languages all over the place. Is this something that needs to be evangelized, functional languages?
Simon Peyton Jones: Sure! Yes. So, you know, I like to put it like this: “When the limestone of imperative programming has worn away, the granite of functional programming will be revealed underneath.” So, imperative programming is very appealing. Don’t get me wrong, right? It’s sort of what real machines do. If you look at what a microprocessor does, it does loads and stores and adds, and it sets things in registers that make valves go open or launch the missiles or print something, right? Functional programming is a bit more abstract. So, that’s why it’s been a minority pursuit for a long time. And over, I guess, the 40-year period of my adventure with functional programming, it’s gradually infected the mainstream more and more. But not too fast. That’s quite important, right? “Avoid success at all costs” is one of my little mottos, right? Because if you’re too successful too quickly, you get sort of stuck and you can’t change anything anymore. But functional programming has become more and more influential. We can talk about ways in which that has happened.
Host: Well I do want to talk about Haskell, and what you’ve just said about the slow burn, the slow rise, and the benefits of not getting too successful too quickly, or dying an early death, but instead having the tenacity to stay around long enough to start to grow and get more useful.
Simon Peyton Jones: Yes, so for me one of the glories and privileges of being a research computer scientist is that you’re not just allowed, but actually paid, to work on a simple and elegant idea, and to do so for, you know, 35 or 40 years. That’s amazing, that society allows us to do that! So as far as Haskell goes, I mean, you don’t just want to work on abstract ideas. You want to work on things that have impact. So, Haskell was developed by a group of research colleagues around the world, including myself. And our idea was just to embody the current consensus among ourselves about what purely functional programming actually was, what a pure, lazy functional language might look like. And at that time, it was very much a university enterprise. But by having an actual language, and then turning it into an actual compiler that people could actually use to get their job done, and then extending the compiler so we could deal with input/output, and we could deal with foreign function interfaces, and talk to C and so forth, and we could develop the types that would actually be useful, over time we’ve turned Haskell into something that is useful for practical applications. And now, in fact, it’s really quite widely used by developers, mostly in small companies.
Host: So, let’s talk about laziness for a little bit. When I was growing up, that wasn’t a virtuous quality in our household but somehow lazy functional computing is a good thing. Why is that?
Simon Peyton Jones: Oh yes! Yes! So, at first it was just an amazingly clever and elegant thing. So, laziness is the idea that if you call a function in a normal imperative language, or a call-by-value language, then before calling the function you’re going to evaluate the arguments to values, and then you’ll pass them to the function. In a lazy functional language, you don’t evaluate the arguments before passing them to the function; you create recipes, or suspensions, or thunks, which you pass to the function. And if it needs an argument, then it will evaluate it. So, you can write a function that might evaluate one or other, but not both, of its arguments. And that can be super important. Just think of a function like “if,” a conditional, where you don’t want to evaluate both the “then” branch and the “else” branch. So why did that happen? Well, firstly it was because we could. Because in the Lambda Calculus, a program is an expression that you evaluate. And when you evaluate an expression, like the arithmetic expression (3+4) * (7+8), then I could evaluate the 3+4 first, or the 7+8 first. There isn’t an inherent order in expression evaluation, except that I must evaluate the 3+4 and the 7+8 before I multiply them, right? So, there are some data dependencies, but there’s a lot of fluidity about evaluation order. And it’s the same with the Lambda Calculus. And there’s a lot of study in the theoretical literature about evaluation order. One of these evaluation orders, called normal order, naturally leads to lazy evaluation. We thought, “Oh, that’s interesting. It just sort of naturally arises. What would that be good for?” At first, we just thought it was cool. And then John Hughes wrote this very interesting paper called “Why Functional Programming Matters,” in which he said, “Laziness is not just cool, it’s useful.” And he did that by describing how laziness gives you a new form of modularity. And his classic example was this: suppose I’m writing a program to play chess. Well, one thing I might do is explore the tree of possible moves. He can move this way, then I could move that way, then you could move that way. There’s a big tree. Suppose I first generated the tree and then pruned it to figure out the best move. Well, that tree would be too big. So, usually, we would have to generate and prune at the same time. And John said, “Well, no. With lazy evaluation, you can generate in one piece of program and prune in another. And that gives you a new form of modularity.” So, that was a really interesting idea, and it’s worked out exactly that way.
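Here is a minimal Haskell sketch of that generate-then-prune modularity (the tree type, the toy “moves” function, and the depth cutoff are all hypothetical stand-ins for a real chess engine): the generator describes a conceptually infinite game tree, and laziness means only the parts the pruner demands are ever built.

```haskell
-- A game tree: a position and the trees reachable by one move from it.
data Tree a = Node a [Tree a]

-- Generator: build the (possibly infinite) tree of all move sequences,
-- given a function that lists the moves from a position.
gameTree :: (a -> [a]) -> a -> Tree a
gameTree moves p = Node p (map (gameTree moves) (moves p))

-- Pruner: keep only the first n levels, written with no knowledge of
-- how the tree was produced.
prune :: Int -> Tree a -> Tree a
prune 0 (Node x _)  = Node x []
prune n (Node x ts) = Node x (map (prune (n - 1)) ts)

-- Count the nodes of a (finite) tree.
size :: Tree a -> Int
size (Node _ ts) = 1 + sum (map size ts)

-- Composing them terminates only because of lazy evaluation: the infinite
-- tree is generated just as far as the pruner demands (here, depth 3).
main :: IO ()
main = print (size (prune 3 (gameTree (\n -> [n + 1, n * 2]) (1 :: Int))))
-- prints 15 (1 + 2 + 4 + 8 nodes)
```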
Host: I love it. And I’ll probably use it. That laziness is not just cool, but useful.
Simon Peyton Jones: It is not just cool but useful, yes.
Host: How did laziness and purity come together?
Simon Peyton Jones: So, Haskell’s initial defining characteristic was that it was a lazy language. That’s what brought that particular group of people together; it’s what we thought was exciting and cool. But in retrospect, I now think what was much more important was that laziness forced Haskell to be a pure language. By which I mean, in a call-by-value functional language like ML or Lisp, if you wanted to print something, it was too tempting to have a “function,” in quotes, which, when you call it, would print something as a side effect. That is, it wouldn’t just return a value (well, what would print return? Unit or 3 or something), it would also print something on the side. We couldn’t do that in a lazy language, because we couldn’t predict the evaluation order well enough. So, laziness kept us pure. And purity was embarrassing for a long time, because you couldn’t really do much by way of input/output. You couldn’t print things or open files or launch missiles or sail the boat. So that forced us to invent what came to be called monadic input/output. And here was another classic case where my colleague at Glasgow, Phil Wadler, took ideas from the logic world: the theory of monads, developed by various people, though he drew particularly on the work of Eugenio Moggi, who was very much a theorist. Phil Wadler wrote this wonderful paper, “Comprehending Monads,” in which he described monads as a programming idiom. And then he and I subsequently wrote a paper called “Imperative Functional Programming” which showed how you can apply monadic programming to do input/output, to affect the world. And that idea has been wildly infectious. It’s spread to all sorts of places. So, people now use the monadic thought pattern as a design idea for their programming languages; you can see it all over the place now. But it only happened because we were stuck with purity, because we had laziness. It was another place where the theory both helped the practice and also almost forced the practice, because we would have had to break with our principles too much to just have side effects. So, we were stuck with no side effects, and were forced to invent this alternative way of going about things.
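A minimal sketch of the style that monadic I/O enables in Haskell (using only the standard Prelude and Data.Char; the function here is purely illustrative): pure functions stay pure, and effects live only in values of type IO, sequenced explicitly with do-notation.

```haskell
import Data.Char (toUpper)

-- A pure function: no side effects, just a value computed from its input.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- An I/O action: a first-class description of effects, sequenced in do-notation.
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn (shout ("hello, " ++ name))
```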
Host: Aside from your pioneering work in functional programming languages, a good part of what you do involves inspiring the next generation to take up the computer science baton and run with it. How have you gone about doing that? What have you done in the inspiration business for computer science?
Simon Peyton Jones: I started with this about 10 years ago when my children were at school. We would sit round the dinner table and they would tell me what they thought they did at school. And they had complete contempt for their lessons in ICT, Information and Communication Technology. And so, in talking to them, I was unable to make any connection between the subject that I thought was SO interesting, that I devoted my professional life to it, and this subject that they were learning in school. And that was different to, say, biology in which I think a biologist sitting round the dinner table with their children would be able to make a connection between the subject discipline that their children – even at primary school were learning at school – and the subject discipline that they thought was so interesting they devoted their professional life to it. So that seemed like a very big disconnect. The more people I talked to, the more people said, “Well, yeah, it doesn’t make sense, but that’s the way it is.” So, I helped start an outfit called Computing at School, which is based in the UK, but open to anybody anywhere in the world, whose sole mission was to try to say, “What might it mean to teach computer science as a subject discipline to school children? And to teach it at the same levels and for the same reasons that we teach natural science or mathematics.” That is, not because they’re going to become physicists or mathematicians, necessarily. A few will, but most will not. But because knowing some elementary principles about the physical or chemical or biological or digital world that surrounds them will make them more empowered, better-informed citizens. And that applies from primary school onwards. So, that was the mission of Computing at School.
Host: It’s now part of the core curriculum in the UK…
Simon Peyton Jones: That’s right. So, we were unexpectedly successful. We started in 2007-08, and it felt as if we were at the bottom of a deep well, you know, shouting up towards the daylight, “Computer science is important, you know?” We got lucky. We wrote a curriculum. There was a review of the entire national curriculum, serendipitously, started by the then Conservative government. So, we were ready to make input to that curriculum debate. And in the end, we achieved almost all our policy goals. The new national curriculum for computing, in England, pretty much says, in black and white, that all children should learn the fundamental principles of computer science, and should do so from primary school onwards. So that’s amazing. And that came into force in 2014. But there’s a big challenge after that. It’s like when you scale one apparently insurmountable mountain, what do you find behind it? Another, bigger mountain! In this case, it’s: how do we turn that aspirational idea into a tangible and living reality in every classroom in the land? And that is a big challenge, because while teachers are willing and committed and hardworking and able, they’re by and large not qualified in computer science. So, there’s a lot to do. There’s a lot to do. The state of play in this country is pockets of excellence, but overall, it’s quite fragile.
Host: I think that, at various stages, most countries in the world are facing the same issues with policy goals and implementation, and then how you prepare teachers. We’re watching the UK, I think.
Simon Peyton Jones: Yeah, I think every, pretty much every nation in the world is thinking hard about what they teach their children about computing and how they go about teaching it. And I don’t think anybody has a monopoly on truth here. We’re all trying to figure it out as we go along.
Host: Do you think there’s any room in the research community for this kind of line of inquiry?
Simon Peyton Jones: Oh, tremendous! Yes. Among computer scientists first: I think, individually and collectively, computer scientists should be active in talking to their local school teachers and being on school boards of governors, because there’s a seismic change taking place. It’s like establishing an entirely new subject at school level. And what is that entirely new subject? Well, it’s called computer science, and who would know about that? Well, computer scientists. Particularly research computer scientists. So, we may not know how to teach. We may not know much about children, but we know the subject discipline, so we should get involved. But the other thing, at the research end, that we need is research in education, right? Because computer scientists know nothing about education. What is good pedagogy for computer science concepts? How might you teach computational thinking? What role does formative assessment play? How could you use, you know, hinge-point questions to teach computing more effectively? When we teach programming, does it make sense to start from a blank sheet of paper and say, “Write a program to do X”? Or should we instead spend a lot of time showing programs and saying, “Please explain to your neighbor how this works”? Or, “Here’s a program with a bug in it. Please find the bug, explain what’s wrong, and fix it”? There are a lot of different approaches to how you go about teaching. And we need educational research, in the end, backed by research evidence, to say which of these approaches works better.
Host: I think you’ve just given any number of listeners to this podcast some ideas about where they might want to go with research in the future if they have a passion for education and for computer science.
Simon Peyton Jones: Yeah, this is it. The intersection of education and computer science is a very rich area at the moment. And everybody wants to make a difference to the education that we give our children. Because many of us have children and want to see them succeed.
Host: Listen, let’s talk about another intersection that you’re really interested in: theory and practice.
Simon Peyton Jones: Computer science is unusual. If you’re in biology, then just finding out something that is true is progress. So, novelty has value in its own right. That’s true of any natural science. In computer science, novelty has no value in and of itself. It’s too easy to make up new stuff. It’s kind of like a fractal discipline. Everywhere you dig, you can make new details, because we’re creating ideas out of nothing, out of pure thought stuff. Fred Brooks gave this wonderful Newell Award lecture called “The Computer Scientist as Toolsmith.” And he says computer science and its theories only have value insofar as they demonstrate utility. So that’s a question to ask about every paper, every research proposal I see. It’s not just ideas, but utility. So, to return to your question about theory and practice: nevertheless, it’s much more fun if theory and practice live quite close together. If you can use a piece of theory to get practical results, you know, and make that crossover without bending the theory too much out of shape. And in functional programming that’s particularly true. So, for example, take the compiler that we built for Haskell; it’s called GHC. We were struggling, in the very early 90s, to think about what its intermediate language should be like. Haskell is a very large source language. We compile it into a small intermediate language that we then transform and optimize, transform and optimize, and finally spit out machine code. What should that intermediate language be? We wanted it to be strongly typed itself. And I was worrying about, “Oh, where could we put the types, and how would they live, and how would they survive transformation?” And Phil Wadler said to me, “You know what, Simon? We should use System F.” And I sort of rocked back in my chair and thought, System F? I learned about that in an extremely theoretical seminar, run by Samson Abramsky, that I went to. I thought it was of purely theoretical interest. But it turned out we ended up directly implementing System F in GHC, and it’s still there to this day. It’s a very pure embodiment of an idea that was developed solely in the theory context but turned out to have immediate practical utility. And that happens again and again in functional programming. I love that.
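To give a flavour of what a System F-style intermediate language looks like, here is a hedged sketch in Haskell. These are not GHC's actual Core data types, just an illustration of the shape: terms carry explicit type abstraction and type application, which is what lets type information survive every transformation.

```haskell
type Name = String

-- Types: variables, functions, and polymorphism (forall).
data Type
  = TyVar Name              -- a
  | TyFun Type Type         -- t1 -> t2
  | TyForAll Name Type      -- forall a. t
  deriving Show

-- Terms: value abstraction and application, plus explicit type
-- abstraction and type application in the System F style.
data Expr
  = Var Name                -- x
  | Lam Name Type Expr      -- \(x :: t) -> e
  | App Expr Expr           -- e1 e2
  | TyLam Name Expr         -- /\a -> e      (type abstraction)
  | TyApp Expr Type         -- e @t          (type application)
  deriving Show

-- The polymorphic identity function, written out explicitly:
--   /\a -> \(x :: a) -> x
identityExpr :: Expr
identityExpr = TyLam "a" (Lam "x" (TyVar "a") (Var "x"))

main :: IO ()
main = print identityExpr
```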
Host: I want to ask you about a couple of videos you’re in that have tens of thousands of downloads on YouTube, about how to write a research paper and how to give a research talk. Could you talk about that a little bit and why that was important to you and how that came about, that you became a video star on YouTube?
Simon Peyton Jones: Well, a lot of research is about communicating. As I say in these talks, no matter how brilliant you are, if you sit in a sealed room and have fantastic ideas but don’t tell anybody, then all you’ve done is heat up the universe. You’ve not really made it a better place. So, communication is key. I wrote the first of these, about how to give a talk, with John Hughes and John Launchbury, when we were colleagues in the same department. We had been to a lot of research talks and started saying to each other, “Couldn’t a lot of these talks be a lot better with some quite simple suggestions?” So, then we wrote them down in a SIGPLAN Notices paper and I gave a talk about it. Then, subsequently, I developed a talk about how to write a research paper, which has been extremely popular. And it arose in the same way. I just thought, I’m reading a lot of papers, I’m reviewing a lot of papers, and some quite simple ideas, I feel, could make them a lot better. And so, I thought it was worth putting a bit of effort into trying to articulate or distill the techniques and ideas that I used, in the hope they’d be useful to others. And to my astonishment they seem to have been quite widely looked at, including by people in completely different disciplines, like psychology and history.
Simon Peyton Jones: It’s really strange; I get email from the most remarkable places. Yeah, I think, in terms of citations or views or webpage hits, all the rest of this functional programming stuff is, you know, nothing. It’s dwarfed by this “how to write a research paper.”
Host: One of the most interesting things I heard you say is that computer programs are among the largest structures, or the largest things humans have ever built. And when we look at other structures they seem enormous to our eyes, but people don’t usually see the millions of lines of code behind a very small thing like a search engine box. Why do you tell that story and what’s important for us to understand about that?
Simon Peyton Jones: Well, because I think that, by and large, 99.9% of the population has no visceral, gut feel for just how complicated, remarkable and fragile our software infrastructure is. The search box looks simple, but there are millions of lines of code behind it. If you could see that, in the way you can see an aircraft carrier or some complicated machine that you can look inside, you’d have a more visceral sense of how amazing it is that it works at all, still less that it works so well. But you don’t get that sense from a computer program, because it’s so tiny, right? All of my intellectual output for my entire life, including GHC, would easily fit on a USB stick. On that little thumbnail-sized thing, I’ve just changed some 1s to 0s and some 0s to 1s, and all the 1s and 0s were there to begin with. All I’ve done is change the state of some of them; that’s my entire professional output. And yet, these artifacts are so complex and so large they need entirely new techniques for dealing with them. So, if you think about how a large piece of software is built, we build it with layer upon layer of abstraction. We build libraries which hide their insides but provide an API that you can call. And you build another library on top of that, and another library on top of that. And so, we manage the complexity of these gigantic systems by building abstractions, and by learning how to describe those abstractions. That’s another big part of what programming language people are interested in, right? So, why is that important? One, I would like people who are not computer scientists to have the idea that there is something rather amazing going on. And also, that it’s so complicated, it’s not surprising if it goes wrong occasionally. We shouldn’t place too much trust in it, right? It’s not magic. Sometimes I think people are too guilelessly trusting of computers. But also, for computer scientists, or people wondering whether this is a field they’d like to get into, there’s the idea of this whole remarkable wonderland of interesting complexity and creativity. Programming is one of the most creative disciplines in the world, where you can create completely new things that nobody has ever built before. That’s something I’d like to get across to people.
Host: What’s the best thing about being a researcher, to you, and why would a young computer scientist want to follow in your footsteps in the field of research?
Simon Peyton Jones: Well, for me, it’s been a great privilege just to be able to take one idea and follow it through: take the idea of functional programming and run with it. I’ve been able to do that, both at university for about 15 or 17 years, and then subsequently at Microsoft for rather longer now, actually; coming up on 20 years at Microsoft. And for me, this mixture of elegant theoretical ideas that have direct, practical impact has always been a powerful motivator. So, why might a young person want to be interested in computing, whether in research or not? Because you can build amazing things out of this pure thought stuff. Why might somebody want to go into research specifically? Well, typically if you’re working in industry you’re building amazing programs out of nothing, right? In research you build amazing ideas out of nothing.
Host: So, as we close, what thoughts would you share about your long life of research that would give the next generation, say, a vision for what might be next?
Simon Peyton Jones: So, I never had a long-term research plan. I never had a, “Oh, here are the 3 big things I’m going to do with my life and I’m on this 20-year trajectory to do it.” I was always just doing the next thing. So, I’m not really a very long-range planner. But I did have hold of one idea, this functional programming idea. I didn’t know how it would turn out. But I just found it fascinating. So, I would suggest to younger people: just start with something. I remember when I started as a researcher at University College London, I didn’t have a PhD. My head of department gave me some time off to do research. But I had no idea what to do. So, I just sat there with a sharp pencil and a blank sheet of paper, hoping for great ideas to come, which of course they didn’t. And then my colleague John Washbrook said to me, “Simon, just do something. Anything. No matter how humble and simple. Just start something.” And so, I did. I wrote a little parser generator for a functional language called SASL. And that eventually turned into a research paper, as it happened. So, the wonderful thing about computer science is that you can start on almost anything and it’ll turn into something interesting. Don’t be too worried; just get started on something that interests you.
Host: Simon Peyton Jones, thanks for coming all the way over from England on Skype with us today.
Simon Peyton Jones: Oh, it’s been fun.
Host: To learn more about Dr. Simon Peyton Jones, and his work in the field of lazy, functional programming languages, visit Microsoft.com/research