The Y Combinator in Arc and Java

I was recently reading The Little Schemer, a very interesting book about how to think in Scheme. The book uses a unique question-and-answer style to present simple concepts such as cons and recursion, as well as complex concepts such as using lambda functions to build arithmetic out of fundamental axioms, and deriving the Y Combinator.

As you're probably aware, the Y Combinator is a fixed-point combinator, which lets you build recursion out of an anonymous, non-recursive function. The tricky part of building recursion without naming is this: if you can't name a function, how can the function call itself? The Y Combinator solves exactly this problem.

In Arc, it's easy to write a recursive factorial function:

(def fact (n)
  (if (is n 0) 1
      (* n (fact (- n 1)))))
Note that def assigns the name fact to this function, allowing it to recursively call itself. But without def, how can the function call itself?

Fixed point as an infinite stack of function calls

Let's make a function fact-gen that, if passed the factorial function as input, returns the factorial function. (Note that fn is Arc's equivalent of lambda.)
(def fact-gen (fact-in)
 (fn (n)
  (if (is n 0) 1
      (* n (fact-in (- n 1))))))
This may seem rather useless, returning the factorial function only if you already have it. Since we don't have a factorial function to pass in, we'll pass in nil (technically bottom). This at least gets us started:
arc> ((fact-gen nil) 0)
1
arc> ((fact-gen nil) 1)
Error: "Function call on inappropriate object nil (0)"
We can compute the factorial of 0, but the factorial of 1 hits nil and dies. However, we can take our lame factorial function, pass it to fact-gen, and get a slightly better factorial function that can compute factorials up to 1:
arc> ((fact-gen (fact-gen nil)) 1)
1
We can repeat this to get an even more useful factorial function:
arc> ((fact-gen (fact-gen (fact-gen (fact-gen (fact-gen nil))))) 4)
24
If we could have an infinite stack of fact-gen calls, then we would actually have the factorial function. But we can't do that. Or can we?

The fixed point of a function f is a value x for which f(x) = x; for example, the fixed points of f(x) = x² are 0 and 1. We can apply the same idea to our infinite stack of fact-gen. Since applying fact-gen to the infinite stack one more time makes no difference, (fact-gen infinite-stack) = infinite-stack, so the infinite stack is a fixed point of fact-gen. Thus, the fixed point of fact-gen is the factorial function.

If taking the fixed point of an algorithmic function seems dubious to you, a full explanation is available in Chapter 5 of the 1300-page tome Design Concepts in Programming Languages; trust me, it's all rigorously defined.

The fixed-point combinator

So how do you find the fixed point without an infinite stack of functions? That's where the fixed-point combinator (also known as the Y combinator) comes in. The fixed-point combinator Y takes a function and returns the fixed point of the function. That is, applying the function once more makes no difference:
y f = f (y f)
You may wonder how the Y combinator can compute an infinite stack of functions. The intuition is that it builds a finite stack that is just big enough for the argument: expanding the definition gives y f = f (y f) = f (f (y f)) = ..., and the expansion stops as soon as the recursion hits its base case.

The fixed-point combinator in Haskell

In Haskell, you can use the above definition of the Y combinator directly:
y f = f (y f)
fact f n = if n == 0 then 1 else n * f (n - 1)
y fact 10
This is a bit of a "cheat", since the definition of the y combinator takes advantage of Haskell's pre-existing recursion, rather than providing recursion from scratch. Note that this only works because of lazy evaluation; otherwise the definition of y is an infinite loop. (Haskell includes the Y combinator under the name fix.)
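
For comparison, here is a minimal Java sketch of the same cheat (a hypothetical class FixDemo, using the integer-function types that appear in the Java section below). Like the Haskell version, it leans on the language's built-in named recursion; Haskell's laziness is replaced by an explicit delay, since the anonymous inner IntFunc postpones the recursive call to fix until an argument actually arrives:

class FixDemo {
  interface IntFunc { int apply(int n); }
  interface IntFuncToIntFunc { IntFunc apply(IntFunc f); }

  // fix f = f (fix f), with the recursive call delayed inside an
  // anonymous IntFunc so that strict (call-by-value) evaluation terminates.
  static IntFunc fix(final IntFuncToIntFunc f) {
    return new IntFunc() {
      public int apply(int n) { return f.apply(fix(f)).apply(n); }
    };
  }

  public static void main(String[] args) {
    IntFunc fact = fix(new IntFuncToIntFunc() {
      public IntFunc apply(final IntFunc f) {
        return new IntFunc() {
          public int apply(int n) { return n == 0 ? 1 : n * f.apply(n - 1); }
        };
      }
    });
    System.out.println(fact.apply(10)); // prints 3628800
  }
}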

The Y combinator in Arc

The Little Schemer derives the Y combinator in Scheme. The Arc version is very similar:
(def Y (r)
  ((fn (f) (f f))
   (fn (f)
     (r (fn (x) ((f f) x))))))
If the Y combinator is applied to the earlier fact-gen, it yields a recursive factorial function. Like magic:
arc> ((Y fact-gen) 10)
3628800

You may protest that this doesn't really implement anonymous recursion since both Y and fact-gen are explicitly named with def, so you could really just call fact-gen directly. That naming is just for clarity; the whole thing can be done as one big anonymous function application:

arc> (((fn (r)
  ((fn (f) (f f))
   (fn (f)
     (r (fn (x) ((f f) x))))))

(fn (fact)
 (fn (n)
  (if (is n 0) 1
      (* n (fact (- n 1)))))))

10)
3628800
Now you can see that the recursive factorial can be computed entirely with anonymous functions, not a def in sight. The first blob is the Y combinator; it is applied to the second blob, the factorial generator, and the resulting function (factorial) is applied to 10, yielding the answer.

Y Combinator in Java

The Y combinator in a Lisp-like language is not too tricky. But I got to wondering if it would be possible to implement it in Java. I'd done crazy continuation stuff in Java, so why not the Y combinator?

Several objections come to mind. Java doesn't have first-class functions. Java doesn't have closures. Everything in Java is an object. Java is statically typed. Is the idea of a Y combinator in Java crazy? Would it require total Greenspunning?

To implement the Y combinator in Java, I did several things. Since Java doesn't have first-class functions, I wrapped each function in an anonymous inner class with a single method apply(), which executes the function. That is, I used a function object or functor. Since "objects are a poor man's closures" (Norman Adams), I used this object creation in place of each closure. In order to define types, I restricted my Java Y combinator to integer functions on integers. Each type defines an interface, and each object implements the appropriate interface.
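
For example, here is a minimal sketch of the functor technique (a hypothetical class FunctorDemo; the IntFunc interface matches the one used in YFact below):

class FunctorDemo {
  // int -> int, as an object: a "functor"
  interface IntFunc { int apply(int n); }

  public static void main(String[] args) {
    // An anonymous inner class standing in for an anonymous function.
    IntFunc square = new IntFunc() {
      public int apply(int n) { return n * n; }
    };
    System.out.println(square.apply(5)); // prints 25
  }
}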

Using these techniques, I was able to fairly directly implement the Y combinator in Java. The first part defines a bunch of types: IntFunc is a simple function from integers to integers. IntFuncToIntFunc is the type of the factorial generator, taking an integer function and returning another integer function. FuncToIntFunc is the somewhat incomprehensible type of the Y combinator subexpressions that apply f to f yielding an integer function. Finally, the Y combinator itself is an IntFuncToIntFuncToIntFunc, taking an IntFuncToIntFunc (fact-gen) as argument and returning an IntFunc (the factorial function itself).

class YFact {
  // Integer function returning an integer
  // int -> int
  interface IntFunc { int apply(int n); }

  // Function on an int function, returning an int function
  // (int -> int) -> (int -> int)
  interface IntFuncToIntFunc { IntFunc apply(IntFunc f); }

  // Higher-order function returning an int function
  // F: F -> (int -> int)
  interface FuncToIntFunc { IntFunc apply(FuncToIntFunc x); }

  // Function from IntFuncToIntFunc to IntFunc
  // ((int -> int) -> (int -> int)) -> (int -> int)
  interface IntFuncToIntFuncToIntFunc { IntFunc apply(IntFuncToIntFunc r); }
Next comes the meat. We define the Y combinator, apply it to the factorial input function, and apply the result to the input argument. The result is the factorial.
  public static void main(String[] args) {
    System.out.println(
      // Y combinator
      (new IntFuncToIntFuncToIntFunc() {
        public IntFunc apply(final IntFuncToIntFunc r) {
          return (new FuncToIntFunc() {
            public IntFunc apply(final FuncToIntFunc f) {
              return f.apply(f); }})
          .apply(
            new FuncToIntFunc() {
              public IntFunc apply(final FuncToIntFunc f) {
                return r.apply(
                  new IntFunc() {
                    public int apply(int x) {
                      return f.apply(f).apply(x); }});}});}}

      ).apply(
        // Recursive function generator
        new IntFuncToIntFunc() {
          public IntFunc apply(final IntFunc f) {
            return new IntFunc() {
              public int apply(int n) {
                if (n == 0) return 1; else return n * f.apply(n-1); }};}}

      ).apply(
        // Argument
        Integer.parseInt(args[0])));
  }
}
The result is the factorial of the input argument: (source code)
$ javac YFact.java
$ java YFact 10
3628800
Surprisingly, this code really works, implementing the Y combinator. Note that there are no variables (apart from arguments), and no names are assigned to any of the anonymous functions. Yet, we have recursion.

The Java version is considerably more verbose than the Arc version, since each function becomes an object creation wrapping an anonymous function declaration, with a liberal sprinkling of type declarations, public and final. Even so, there is a direct mapping between the Arc code and the Java code. There's no Greenspunning in there, no Lisp simulation layer. Ironically, the Java code starts to look like Lisp code, except with a bunch of }}} instead of ))).

To convince you that the Java recursion works even in a more complex case, we can implement Fibonacci numbers by simply replacing the input function: (source code)

...
        // Recursive Fibonacci input function
        new IntFuncToIntFunc() { public IntFunc apply(final IntFunc f) {
          return new IntFunc() { public int apply(int n) {
            if (n == 0) return 0;
            else if (n == 1) return 1;
            else return f.apply(n-1) + f.apply(n-2); }};}}
...
The code recursively generates Fibonacci numbers:
$ java YFib 30
832040

Is this the "real" Y Combinator?

The typical form of the Y combinator is:
λf.(λx.f (x x)) (λx.f (x x))
and you may wonder why the Y combinator in Arc and Java is slightly different. Java, Scheme, and Arc are call-by-value languages, not call-by-name languages, so they require the applicative-order Y combinator. This combinator has the form:
λr.(λf.(f f)) λf.(r λx.((f f) x))
The call-by-name Y combinator will go into an infinite loop in a call-by-value language. I found this out the hard way, when I implemented the "wrong" Y combinator in Java and quickly got a stack overflow.
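
Here is a minimal sketch of what that "wrong" version looks like (a hypothetical class YBad, reusing the types from YFact): without the eta-expansion that wraps f.apply(f) in a new IntFunc, the application f.apply(f) is evaluated eagerly, so the program throws StackOverflowError before computing anything.

class YBad {
  interface IntFunc { int apply(int n); }
  interface IntFuncToIntFunc { IntFunc apply(IntFunc f); }
  interface FuncToIntFunc { IntFunc apply(FuncToIntFunc x); }
  interface IntFuncToIntFuncToIntFunc { IntFunc apply(IntFuncToIntFunc r); }

  public static void main(String[] args) {
    System.out.println(
      // Normal-order Y combinator, transcribed naively
      (new IntFuncToIntFuncToIntFunc() {
        public IntFunc apply(final IntFuncToIntFunc r) {
          return (new FuncToIntFunc() {
            public IntFunc apply(final FuncToIntFunc f) {
              return f.apply(f); }})
          .apply(new FuncToIntFunc() {
            public IntFunc apply(final FuncToIntFunc f) {
              // No eta-expansion: f.apply(f) is evaluated right here,
              // which re-enters this method, forever.
              return r.apply(f.apply(f)); }});}}
      ).apply(new IntFuncToIntFunc() {
        public IntFunc apply(final IntFunc f) {
          return new IntFunc() {
            public int apply(int n) {
              return (n == 0) ? 1 : n * f.apply(n - 1); }};}}
      // StackOverflowError is thrown while building the function,
      // before .apply(10) ever runs.
      ).apply(10));
  }
}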

For details on applicative-order, eta-reduction, why different combinators are required, and a derivation of the Y combinator, see Sketchy Lisp.

Java vs. Lambda Calculus

In the Java code, new takes the place of λ and apply explicitly shows application, which is implicit in lambda calculus. To make the connection between the Java code and the lambda expression clearer, compare the key parts of the Java Y combinator:
      // Y combinator
      (new IntFuncToIntFuncToIntFunc() {
        public IntFunc apply(final IntFuncToIntFunc r) {
          return (new FuncToIntFunc() {
            public IntFunc apply(final FuncToIntFunc f) {
              return f.apply(f); }})
          .apply(
            new FuncToIntFunc() {
              public IntFunc apply(final FuncToIntFunc f) {
                return r.apply(
                  new IntFunc() {
                    public int apply(int x) {
                      return f.apply(f).apply(x); }});}});}}
Note the exact correspondence of the new expressions and apply calls with the lambda calculus expression:
λr.(λf.(f f)) λf.(r λx.((f f) x))

Conclusion

It is possible to implement the Y combinator in Java, showing that Java has more power than many people realize. On the other hand, the Java code is ugly and bulky; a Lisp-like language is a much more natural fit. For more fun, try going through SICP in Java.

Postscript

I received some great feedback with interesting links; apparently a number of people enjoy implementing the Y combinator in a variety of languages.

The rise of scripting languages and the fall of Java

Java is very much in full retreat.
-- R. Loui
Professor Ronald Loui has an interesting article on the rise of scripting languages (In Praise of Scripting: Real Programming Pragmatism) in the July 2008 issue of IEEE Computer. It claims scripting languages such as Perl, Python, and Javascript have dramatically fulfilled their early promise, provide many benefits, and are poised to take over the lead from Java. However, the academic programming language community is stuck in theory and hasn't recognized the ascendancy of scripting languages.

I agree that scripting languages are on the rise. Most people would agree that they provide rapid development, higher levels of abstraction, and brevity that helps the programmer. The article also describes how scripting languages can be a performance win, since they allow experimentation and implementation of efficient algorithms that would be too painful in Java or C++. So even if C++ is faster on the micro-benchmark level, a programmer using a scripting language may end up with faster algorithms overall. I've argued somewhat controversially that Arc is too slow for my programming problems, so I remain unconvinced that basic performance can be ignored entirely.

As for the claim that Java is in full retreat, it strikes me as wishful thinking. (I'd believe "slow decline" though.) It will be interesting to check back on this claim in 5 years.

I personally believe that CS1 [freshman computer science] Java is the greatest single mistake in the history of computing curricula.
-- R. Loui
The article suggests good languages for teaching introductory computer science are gawk, Javascript, PHP, and ASP, but says Python is emerging as the consensus choice for the best freshman programming language. This is the hardest part of the article for me to swallow. The idea of writing real programs in Awk never occurred to me, and I remain skeptical even though the author claims it works well. For those who would suggest Scheme as an introductory programming language, it was displaced as a dominant freshman language by Java a decade ago, and is apparently no longer considered an option.

I can't argue with the author's claim that student learning is enhanced by experimenting, writing code, and getting hands-on experience, and that scripting languages make this faster and easier.

Python and Ruby have the enviable properties that almost no one dislikes them, and almost everyone respects them.
-- R. Loui
In Why your favorite language is unpopular I discussed how the Change Function model can explain the success of programming languages based on maximizing the crisis solved and minimizing the perceived pain of adoption. This model applies to scripting languages as well:

Magnitude of crisis solved by Tcl/Tk: High - How to add a scripting language to a C program. How to add a GUI to a C program without painful X11 and Motif code.
Total Perceived Pain of Adoption: Low - Link Tcl in with your C program and add a few hooks. Create the GUI with trivial scripts.

Magnitude of crisis solved by Perl: High - How to quickly write CGI scripts. How to solve problems too complex for shell scripts. How to process files. How to develop quickly and iteratively.
Total Perceived Pain of Adoption: Low - Apart from looking like line noise, Perl is easy to get started with, is well integrated with Unix, has the definitive regex implementation, and has libraries for almost everything.

My point is that these languages solved specific painful problems and had low pain of adoption. As a result, they were much more successful than beautiful, powerful languages that were less able to directly solve painful problems or were more painful to adopt.

The real reason why academics were blindsided by scripting is their lack of practicality.
-- R. Loui
A major thrust of the article is that academics are too concerned with theoretical issues of syntax and semantics, rather than pragmatic issues of what a language can achieve quickly, inexpensively, and practically. Academics are said to be too tied to theoretical concepts such as object-oriented programming and strong typing, and are missing the real-world benefits of scripting languages.

(Interestingly, Rob Pike made a similar argument against academics in the context of operating systems software (Systems Software Research is Irrelevant), stating that academic research is irrelevant and the real innovation is in industry. Since I have friends doing academic OS research, I should add a disclaimer here that I don't necessarily agree.)

One measure of pragmatics raised by the paper is how well a language works with other Unix tools. I think the importance of this is underappreciated. In particular, I view this as a significant barrier to adoption of Arc. Running Arc as a shell script instead of a REPL is nontrivial (as is the case with many Lisp and Scheme implementations). Running an external program from Arc is clunky, even though it is often necessary to actually get things done (Kens' law), and real pipes are missing from Arc entirely.

Java's integration with Unix also has painful gaps: where's getpid(), for instance? Why is JNI so difficult compared to calling native code from C#? I blame Sun's pure-Java platform independence ideology, and I'm surprised it hasn't hurt Java more.

On the other hand, Python and Perl provide a remarkable degree of integration, which I view as a key factor in their success. Likewise, Visual Basic is highly integrated with the Windows environment and highly successful there.

In conclusion, Loui's paper raises numerous interesting points about the success of scripting languages. I expect that the reasons for the rise of scripting languages will only get stronger, and languages that don't support the scripting model will have an increasingly harder time gaining adoption.

Note: quotes above are from the preprint and may not match the published article.

Continuations made difficult

For a challenge, I translated the "mondo-bizarro" Arc Continuation Puzzle into Java.

Why would I do that? Because I can :-) But seriously, using continuations in a language entirely unsuited to them is a good way to experience the tradeoffs of different styles of languages, as well as a way to learn more about how continuations work.

This code is entirely different from my Arc version, mainly because in the Arc version I decided to see if the throw/catch idiom could replace call/cc; the Java code is much closer to the original Scheme. Because Java doesn't have first-class continuations, I used continuation-passing style, explicitly passing continuations around. Then call/cc is simply replaced with use of the current continuation.

Because Java doesn't support first-class functions, every continuation function needs to be wrapped in a class that implements an interface, which I unimaginatively call Continuation. The "let" also turns into an object creation, resulting in another class. This results in a fair bit of boilerplate to handle all these classes compared to the Scheme code, but the Java code maps recognizably onto the Scheme code.
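
To illustrate the style before diving into the real code, here is a minimal sketch of continuation-passing style in Java (the names CpsDemo, IntCont, and add are hypothetical; the actual program below uses the Continuation interface). In CPS, a function never returns a value; it hands its result to the continuation it was given:

class CpsDemo {
  // A continuation taking an int: "the rest of the program".
  interface IntCont { void run(int result); }

  // add in continuation-passing style: instead of returning a + b,
  // pass the sum to the continuation k.
  static void add(int a, int b, IntCont k) { k.run(a + b); }

  public static void main(String[] args) {
    // Compute (1 + 2) + 3 by chaining continuations explicitly.
    add(1, 2, new IntCont() {
      public void run(final int sum) {
        add(sum, 3, new IntCont() {
          public void run(int total) { System.out.println(total); } // prints 6
        });
      }
    });
  }
}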

On the whole, mondo-bizarro worked better in Java than I expected; no Greenspunning required. It produces the expected 11213 output, proving it works. I think the Java code is actually easier to understand, since everything is explicit.

I have also found it entertaining to implement some of the complex SICP exercises in Java; maybe I'll post details later.

(The title of this article is, of course, a homage to Mathematics Made Difficult.)

Here's the code.

/**
 * Mondo-Bizarro ported to Java.
 * Based on mondo-bizarro by Eugene Kohlbecker
 * ACM SIGPLAN Lisp Pointers, Volume 1, Issue 2 (June-July 1987) pp 22-28
 */

/* Original Scheme code:
(define mondo-bizarro
  (let (
        (k (call/cc (lambda (c) c)))
        )
    (write 1)
    (call/cc (lambda (c) (k c)))
    (write 2)
    (call/cc (lambda (c) (k c)))
    (write 3)))
*/


interface Continuation {
  public void run(Continuation c);
}

public class Mondo implements Continuation {

  public static void main(String argv[]) {
    // Start the program, passing it itself as the initial continuation.
    Continuation c = new Mondo();
    c.run(c);
  }

  // Invoking this continuation re-executes the let, rebinding k.
  public void run(Continuation c) {
    Continuation let = new Let();
    let.run(c);
  }

  class Let implements Continuation {
    Continuation k;  // the continuation captured by (call/cc (lambda (c) c))

    public void run(Continuation c) {
      k = c;
      System.out.println("1");
      k.run(new C2());  // (call/cc (lambda (c) (k c)))
    }

    class C2 implements Continuation {
      public void run(Continuation c) {
        System.out.println("2");
        k.run(new C3());  // (call/cc (lambda (c) (k c)))
      }
    }

    class C3 implements Continuation {
      public void run(Continuation c) {
        System.out.println("3");
      }
    }
  }
}
