I don’t know quite what it was that turned me on about this. I found it completely inspirational. It’s partly, I suppose, because, being interested in hardware, it felt like this is a way you could implement the lambda calculus. Because the lambda calculus doesn’t look like it’s an implementation mechanism at all. It’s a bit of a mathematical way of thinking, a bit remote from the machine. This S-K stuff looks as if you could just run it and indeed you can.
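
A minimal sketch of the point, assuming nothing about SKIM's actual design: each combinator is a single rewrite rule, so a few lines of Haskell suffice to execute S-K expressions directly.

```haskell
-- A toy SK reducer. Each combinator is one rewrite rule:
--   K x y    ->  x
--   S f g x  ->  f x (g x)
data Expr = S | K | Expr :@ Expr deriving Show
infixl 9 :@   -- application associates to the left, as usual

step :: Expr -> Maybe Expr              -- one normal-order reduction step
step (K :@ x :@ _)      = Just x
step (S :@ f :@ g :@ x) = Just (f :@ x :@ (g :@ x))
step (f :@ x)           = (:@ x) <$> step f   -- otherwise reduce the head
step _                  = Nothing             -- no redex left: we're done

run :: Expr -> Expr
run e = maybe e run (step e)
-- run (S :@ K :@ K :@ S) reduces to S, since S K K is the identity
```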

Seibel: So, you had a sense that, OK, I’ll just build a machine that has S and K hardwired and then all I’ve got to do is compile things to a series of S and K ops.

Peyton Jones: In fact that’s exactly what my friends did. William Stoye and Thomas Clarke and a couple of others built this machine, SKIM, the SKI Machine, which directly executed S and K. For some reason I wasn’t directly involved in that project. But at the time there was this feeling developing. John Backus’s paper, “Can Programming Be Liberated from the von Neumann Style?”, was extremely influential at the time. It was his Turing Award lecture and he was this guy who had invented Fortran saying, in effect, “Functional programming is the way of the future.”

Furthermore, he said, “Maybe we should develop new computer architectures to execute this stuff on.” So this very high-level endorsement of this research area meant we cited that paper like crazy. And so SKIM was an example of such a thing. We thought maybe this unusual way of going about executing, or at least thinking about, programs turns into a completely different sort of computer architecture. That phase lasted from about 1980 to 1990—radical architectures for functional programming. I now regard it as slightly misdirected but nevertheless it was terribly exciting.

Lazy evaluation was another huge motivating factor. With the benefit of hindsight I now think lazy evaluation is just wonderful but at that time it was sort of pivotal. Lazy evaluation is this idea that functions don’t evaluate their arguments until the values are actually needed. Again the motivating factor was something to do with it being beautiful or elegant and unusual and radical.
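
A tiny made-up Haskell example of what that means in practice:

```haskell
-- 'ignoreSecond' never touches its second argument, so under lazy
-- evaluation this prints 42; a call-by-value language would raise
-- the error before the function body ever ran.
ignoreSecond :: a -> b -> a
ignoreSecond x _ = x

main :: IO ()
main = print (ignoreSecond 42 (error "never evaluated"))
```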

That’s kind of good for catching the imagination: it looks as if this might be a way of thinking about programming in a whole new way. Rather than just putting one more brick in the wall, we can build a whole new wall. That’s very exciting. I was strongly motivated by that. Was it just that it was a neat trick? In some ways I think neat tricks are very significant. Lazy evaluation was just so neat and you could do such remarkably different things that you wouldn’t think were possible.

Seibel: Like what?

Peyton Jones: I remember my friend John Hughes wrote a program for me. For a project I was doing two implementations of the lambda calculus and comparing their performance, so John gave me some test programs. One of them was a program that computed the decimal expansion of e to arbitrary precision. It was a lazy program—it was rather beautiful because it produced all the digits of e.

Seibel: Eventually.

Peyton Jones: Eventually, that’s right. But it was up to the consumer. You didn’t have to say how many digits to produce in advance. You just got given this list and you kept hauling on elements of the list and it wouldn’t give you another digit until it had spent enough cycles computing it. So that’s not something that’s very obvious to do if you’re writing a C program. Actually you can do it with enough cleverness. But it’s not a natural programming paradigm for C. You can almost only do it once you’ve seen the lazy functional program. Whereas John’s program was just about four or five lines. Amazing.
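
Hughes’s original program isn’t reproduced here, but a well-known folklore Haskell version of the same computation is nearly as short. It relies on the fact that e is 2.1111… in the factorial number base, and uses lazy carry propagation to turn that infinite representation into decimal digits on demand (the names here are mine):

```haskell
-- e = 2.1111... in the factorial number base (the digit at position n
-- has weight 1/n!). Multiply the infinite fraction by 10 and lazily
-- propagate the (bounded) carries to get one decimal digit per step.
carryPropagate :: Integer -> [Integer] -> [Integer]
carryPropagate base (d : ds)
  | carryguess == (d + 9) `div` base    -- carry out is the same whatever
      = carryguess : (remainder + nextcarry) : fraction  -- arrives from the right
  | otherwise
      = dCorrected `div` base : dCorrected `mod` base : fraction
  where
    carryguess           = d `div` base
    remainder            = d `mod` base
    nextcarry : fraction = carryPropagate (base + 1) ds
    dCorrected           = d + nextcarry

digitsOfE :: String   -- "2.718281828459045..." for as long as you keep asking
digitsOfE = "2." ++ tail (concatMap (show . head)
              (iterate (carryPropagate 2 . map (10 *) . tail) (2 : repeat 1)))
```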

Seibel: Other languages have since special-cased that kind of computation with, for example, generators in Python or something where you can yield values. Was there something that made you say, “Aha; there are lots of things that could be fruitfully looked at as an infinite series of computations from which we just want to draw answers until we’re tired of it?” As opposed to saying, “Oh, that’s an interesting technique for certain problems but not the basis for everything.”

Peyton Jones: I think at this stage I wasn’t as reflective as that. I just thought it was so cool. And fun. I think it’s important to do what you find motivating and interesting and follow it. I just found it very inspiring. I don’t think I really thought there are deep principled reasons why this is the way to do programming. I just thought it was a rather wonderful way to do programming. I like skiing. Well, why do I like skiing? Not because it’s going to change the world—just because it’s a lot of fun.

I now think the important thing about laziness is that it kept us pure. You’ll have seen this in several of my talks probably. But I actually really like laziness. Given a choice I’d choose a lazy language. I think it’s really helpful for all kinds of programming things. I’m sure you’ve read John Hughes’s paper, “Why Functional Programming Matters.” It’s probably the earliest articulate exposition of why laziness might be important in more than a cute way. And his main story is that it helps you write modular programs.

Lazy evaluation lets you write generators—his example is generate all the possible moves in your chess game—separately from your consumer, which walks over the tree and does alpha-beta minimaxing or something. Or if you’re generating the sequence of approximations to an answer, then you have a consumer who says when to stop. It turns out that by separating generators from consumers you can modularly decompose your program. Whereas, if you’re having to generate it along with a consumer that’s saying when to stop, that can make your program much less modular. Modular in the sense of separate thoughts in separate places that can be composed together. John’s paper gives some nice examples of ways in which you can change the consumer or change the generator, independently from each other, and that lets you plug together new programs that would have been more difficult to get by modifying one tightly interwoven one.
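
The square-root example from “Why Functional Programming Matters” makes this concrete; the sketch below follows the paper’s idea, with my own names: an infinite generator of Newton-Raphson approximations, and a consumer that stops when two successive values agree.

```haskell
-- Generator: an infinite list of ever-better Newton-Raphson
-- approximations to the square root of n.
approximations :: Double -> [Double]
approximations n = iterate improve 1.0
  where improve x = (x + n / x) / 2

-- Consumer: walk the list until two successive approximations agree.
within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)

squareRoot :: Double -> Double
squareRoot = within 1e-10 . approximations   -- plug the two together
```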

So that’s all about why laziness is a good thing. It’s also very helpful at a very local level in your program. You tend to find Haskell programmers will write down a function definition with some local definitions. So they’ll say “f of x equals blah, blah, blah where…” and in the where clause they write down a bunch of definitions, and of these definitions, not all are needed in all cases. But you just write them down anyway. The ones that are needed will get evaluated; the ones that aren’t needed won’t. So you don’t have to think, “Oh, goodness, all of these subexpressions are going to be evaluated, but I can’t evaluate that one because it would crash with a divide by zero, so I have to move the definition into the right branch of the conditional.”

There’s none of that. You tend to just write down auxiliary definitions that might be needed and the ones that are needed will be evaluated. So that’s a kind of programming convenience thing. It’s a very, very convenient mechanism.
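
A small invented example of the convenience being described:

```haskell
-- Both auxiliary definitions are written down unconditionally, but
-- each is evaluated only if some branch actually demands it: on an
-- empty list neither 'first' nor 'avg' is touched, so neither crashes.
describe :: [Int] -> String
describe xs
  | null xs   = "empty"
  | otherwise = "starts with " ++ show first ++ ", average " ++ show avg
  where
    first = head xs                 -- head [] would fail, if it were demanded
    avg   = sum xs `div` length xs  -- divide by zero on [], if it were demanded
```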

But getting back to the big picture, if you have a lazy evaluator, it’s harder to predict exactly when an expression is going to be evaluated. So that means if you want to print something on the screen, every call-by-value language, where the order of evaluation is completely explicit, does that by having an impure “function”—I’m putting quotes around it because it now isn’t a function at all—with type something like string to unit. You call this function and as a side effect it puts something on the screen. That’s what happens in Lisp; it also happens in ML. It happens in essentially every call-by-value language.
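
A brief Haskell illustration of that unpredictability, using the standard Debug.Trace module (which exists precisely because ordinary side-effecting calls don’t fit a lazy language):

```haskell
import Debug.Trace (trace)

-- 'trace' prints its message (to stderr) when the value it wraps is
-- demanded, which under lazy evaluation may be long after the call
-- site, or never.
noisy :: Int -> Int
noisy x = trace "evaluated!" (x * 2)

main :: IO ()
main = do
  let a = noisy 3            -- nothing printed: 'a' is an unevaluated thunk
  let b = fst (a, noisy 4)   -- 'noisy 4' is never demanded, never prints
  print b                    -- "evaluated!" appears only now, then 6
```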