This blog post shows a neat use of laziness: a framework for automatic differentiation in Haskell. What makes it particularly nice is that it lazily produces the entire infinite tower of higher derivatives.

Automatic differentiation is an alternative to both symbolic differentiation and numeric differentiation. The idea is that a numeric function, even one that can't easily be reduced to a single closed-form expression, can automatically produce its derivatives so long as it is built up from arithmetic primitives. In Haskell this takes liberal use of overloading, so that functions think they're working over ordinary numbers.
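As a sketch of the lazy-tower idea (the names `Tower`, `var`, and `derivs` are my own, not necessarily those in the linked post): pair a value with the lazy tower of all its higher derivatives at that point, and let the overloaded `Num` instance push the sum and product rules through every level.

```haskell
-- A lazy "tower": a value together with the (infinite) tower of its
-- derivatives at the same point.
data Tower = Tower Double Tower

-- A constant: derivative zero, and so on all the way up.
constT :: Double -> Tower
constT c = Tower c zero where zero = Tower 0 zero

-- The identity variable at point a: first derivative is the constant 1.
var :: Double -> Tower
var a = Tower a (constT 1)

instance Num Tower where
  Tower x x' + Tower y y' = Tower (x + y) (x' + y')
  -- Product rule, applied lazily at every level of the tower.
  t@(Tower x x') * u@(Tower y y') = Tower (x * y) (x' * u + t * y')
  negate (Tower x x')     = Tower (negate x) (negate x')
  fromInteger             = constT . fromInteger
  abs t@(Tower x x')      = Tower (abs x) (signum t * x')
  signum (Tower x _)      = constT (signum x)

-- Peel the tower apart into the infinite list of derivatives.
derivs :: Tower -> [Double]
derivs (Tower x x') = x : derivs x'

f :: Num a => a -> a
f x = x * x * x

main :: IO ()
main = print (take 5 (derivs (f (var 2))))
-- x^3 at 2: value 8, then 3x^2 = 12, 6x = 12, then 6, then 0
```

The function `f` is written against plain `Num`, so the same definition works on `Double` and on `Tower` alike; laziness means you only pay for as many derivatives as you actually demand.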


April 15, 2007 at 9:15 am

To make a derivative-taking operator fully general, it is important to be able to apply the derivative-taking function to functions that themselves internally take derivatives. This is difficult in Haskell; try using the code in the post referred to and you’ll see the problem. The issue is discussed in some detail in an IFL-2005 presentation, *Perturbation Confusion and Referential Transparency: Correct Functional Implementation of Forward-Mode AD*, by Jeff Siskind and Barak Pearlmutter (me). We also discussed using lazy towers of higher-order derivatives in the multivariate case in a POPL-2007 paper, *Lazy Multivariate Higher-Order Forward-Mode AD*, which includes working code.
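To make the pitfall concrete, here is a minimal dual-number implementation of the kind this comment warns about (the names `Dual`, `d`, and `constD` are illustrative, not taken from the paper). Because the nested call to `d` reuses the same perturbation as the outer one, the inner derivative comes out wrong and the error propagates outward:

```haskell
-- First-order dual numbers: a value plus a single perturbation.
data Dual = Dual Double Double

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
  negate (Dual a a')    = Dual (negate a) (negate a')
  fromInteger n         = Dual (fromInteger n) 0
  abs (Dual a a')       = Dual (abs a) (signum a * a')
  signum (Dual a _)     = Dual (signum a) 0

-- Naive derivative operator: seed the perturbation with 1.
d :: (Dual -> Dual) -> Double -> Double
d f x = let Dual _ x' = f (Dual x 1) in x'

-- Inject a plain Double as a constant.
constD :: Double -> Dual
constD c = Dual c 0

main :: IO ()
main = print (d (\x -> x * constD (d (\y -> x + y) 1)) 1)
-- Mathematically, d/dy (x + y) = 1, so the whole expression is just x
-- and the outer derivative should be 1.  But the naive operator cannot
-- tell the outer perturbation on x from the inner one on y: inside the
-- nested call, x + y carries tangent 1 + 1 = 2, and the program
-- prints 2.0 instead of 1.0.
```

Distinguishing the two perturbations (e.g. by tagging each invocation of `d`) is exactly what a correct implementation has to do.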

The above all concerns forward-mode automatic differentiation. The reverse-mode AD construct is the one that is useful for taking gradients in high dimensions, and we’ve also been working on incorporating that into functional languages; see our web page on this FP/AD/Stalingrad stuff.
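For intuition only, here is a naive closure-based sketch of reverse mode (a toy construction of my own, not the Stalingrad approach): each node carries a backpropagator that pushes an incoming adjoint down to the input variables, and one backward pass yields the whole gradient. A serious implementation would share work through a tape; this version may recompute shared subexpressions.

```haskell
import qualified Data.Map as Map

-- A reverse-mode value: the primal, plus a backpropagator mapping an
-- incoming adjoint to contributions for each input variable (by index).
data R = R { value :: Double, backprop :: Double -> Map.Map Int Double }

-- The i-th input variable.
var :: Int -> Double -> R
var i x = R x (\g -> Map.singleton i g)

instance Num R where
  R a fa + R b fb = R (a + b) (\g -> Map.unionWith (+) (fa g) (fb g))
  -- d(ab) = b·da + a·db: each factor receives the adjoint scaled by
  -- the other factor's value.
  R a fa * R b fb = R (a * b)
                      (\g -> Map.unionWith (+) (fa (g * b)) (fb (g * a)))
  negate (R a fa) = R (negate a) (fa . negate)
  fromInteger n   = R (fromInteger n) (const Map.empty)
  abs (R a fa)    = R (abs a) (fa . (* signum a))
  signum (R a _)  = R (signum a) (const Map.empty)

-- Gradient of an n-input function: one backward pass, seeded with 1.
grad :: ([R] -> R) -> [Double] -> [Double]
grad f xs =
  let m = backprop (f (zipWith var [0 ..] xs)) 1
  in  [ Map.findWithDefault 0 i m | i <- [0 .. length xs - 1] ]

g :: Num a => [a] -> a
g [x, y] = x * y + x * x
g _      = error "g expects exactly two inputs"

main :: IO ()
main = print (grad g [3, 2])
-- ∂/∂x (xy + x²) = y + 2x = 8;  ∂/∂y = x = 3
```

The point of the example is the asymmetry the comment describes: forward mode costs one sweep per input, whereas a single reverse sweep here delivers all partial derivatives at once, which is why reverse mode is the right tool for gradients in high dimensions.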