2011-01-19

People don't understand Tail Call Optimization

Chad Perrin says in his article about memoization that
"Tail call optimization ... optimizes a recursive function so that it will not perform the same operations over and over again."
He then goes on to describe memoization as an alternative for languages that don't support TCO.


This is not what TCO is at all.


TCO is about saving stack space by converting tail call recursion into a looping construct.
It has nothing to do with optimizing which operations are performed.
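
If that sounds abstract, here's a quick sketch of my own (sumTo and sumToLoop are just illustrative names, nothing from Perrin's article): a tail-recursive sum, and the loop the compiler effectively rewrites it into.

import scala.annotation.tailrec

// Tail-recursive: the call to sumTo is the very last thing the function does,
// so the compiler can reuse the current stack frame instead of pushing a new one.
@tailrec
def sumTo(n: Int, acc: Long = 0L): Long =
  if (n <= 0) acc else sumTo(n - 1, acc + n)

// Roughly what the compiler turns it into: a loop that reassigns the
// parameters rather than growing the stack.
def sumToLoop(n: Int): Long = {
  var i = n
  var acc = 0L
  while (i > 0) { acc += i; i -= 1 }
  acc
}

Same operations, same results; the only thing that changes is how much stack gets used.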


Please read http://en.wikipedia.org/wiki/Tail_call so that you actually understand what it does.


I've written about memoization before, in can-you-balance-space-and-time, and (as the title suggests) its primary purpose is to swap CPU time for memory space by caching results.


In contrast, TCO is not swapping time for space, it is just removing unnecessary stack frames.


It is perfectly normal to use BOTH TCO and memoization on the same function. Here's some Scala to demonstrate.


import scala.annotation.tailrec

def factorial(n: Int): Int = {
    // Accumulator-passing style: the recursive call is the last thing loop does,
    // so the compiler can replace it with a jump instead of a new stack frame.
    @tailrec
    def loop(n: Int, acc: Int): Int =
      if (n <= 1) acc else loop(n - 1, acc * n)
    loop(n, 1)
}
assert(1 == factorial(1) )
assert(120 == factorial(5) )
val mfact = Mem(factorial(_))
assert(1 == mfact(1) )
assert(120 == mfact(5) )


The first part shows a tail-recursive function, which is called like any other. The only difference is that at run time it will never blow the limited stack, so it can recurse to any depth and still produce a result.
Then we use the memoization function Mem, which takes a function and returns a memoized version of it.
Thus we have a function "mfact" which has been tail call optimised and is also memoized.
Call mfact a second time with the same argument and it returns almost instantly, because the result has been cached; and it still never hits the maximum stack (thanks to TCO).
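
I haven't shown a definition for Mem above. A minimal sketch of a wrapper that behaves the way I've described (specialised to Int => Int to keep it short; a real one could be generic) might look like:

import scala.collection.mutable

// Cache results by argument, so repeat calls with the same input
// become a single map lookup instead of recomputing.
def Mem(f: Int => Int): Int => Int = {
  val cache = mutable.Map.empty[Int, Int]
  (n: Int) => cache.getOrElseUpdate(n, f(n))
}

The cache map is exactly the memory being traded for time: the first call with a given argument pays full price, every later one is just a lookup.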
