Optimizing Lisp May 24th, 2009
Patrick Stein

Paul Reiners recently posted some code to the TC Lispers mailing list. The code comes from Paul Graham’s ANSI Common Lisp book: Exercise 13b has you adding declarations so that the compiler can better take advantage of the types involved. Oddly, the code ran more slowly with the type declarations than it did without them. I had the same experience when I tried the code under SBCL 1.0.23 on Mac OS X.

Here is Paul’s original code, which he made available under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

I stripped out the declarations and started adding them back one by one, dealing with as many of the compiler notes as I could figure out along the way. As a baseline, with no declarations, the following took 22.5 seconds on my computer:

(time (ray-test 8))

After much tweaking, I have gotten it down to just under 11 seconds. Most of the improvement came from declaring the types of the slots in the (defstruct …)s and from these three declarations:

(declaim (ftype (function (long-float) long-float) sq))
(declaim (ftype (function (long-float
                           long-float
                           long-float) long-float) mag))
(declaim (ftype (function (long-float
                           long-float
                           long-float)
                          (values long-float
                                  long-float
                                  long-float)) unit-vector))

This declaration, however, shot me back up to 22 seconds.

(declaim (ftype (function (long-float
                           long-float
                           long-float)
                          long-float)
                minroot))

Of course, it really should have been returning (or long-float null). But, that didn’t help either. From the compiler notes, it seems that (min …) in SBCL doesn’t deal well with unboxed floats. I will have to look harder at that section and ask on the SBCL lists. I believe that I tried decorating the expressions there, too:

(the long-float (min (the long-float
                       (/ (the long-float (+ (the long-float (- b))
                                             discrt))
                          (the long-float (* 2L0 a))))
                     (the long-float
                       (/ (the long-float (- (the long-float (- b))
                                             discrt))
                          (the long-float (* 2L0 a))))))

Lisp Patterns Question W.R.T. Clifford Algebras May 20th, 2009
Patrick Stein

Since I moved my website to WordPress, I have been watching which search-engine queries land people on my page.

Sadly, two of the things that bring people to my page most often still have not been moved over to the new site from the old one. One of those things is a C++ template library for Clifford algebras. I am going to move that stuff over this week, I promise. But, I thought I might also make a Lisp library for Clifford algebras.

An example Clifford algebra might be C_{3,0}. A generic element in this algebra is of the form:

a_0 + a_1 e_1 + a_2 e_2 + a_3 e_3 + a_{1,2} e_{1,2} + a_{1,3} e_{1,3} + a_{2,3} e_{2,3} + a_{1,2,3} e_{1,2,3}

where the a_* are coefficients. For ease in dealing with these items, I will probably store that in a vector that looks something like this:
#(a_0 #(a_1 a_2 a_3) #(a_{1,2} a_{1,3} a_{2,3}) a_{1,2,3})

If you want a simple element, though (one where most of the coefficients are zero), you shouldn’t have to stare down all of that stuff or remember that a_{1,3} comes before a_{2,3}. I want you to be able to make 3 + 4 e_1 + 5 e_{1,2,3} more simply, with something like one of the following:

(defvar *a* (ca-element '(3 (4 1) (5 1 2 3)) :p 3))
(defvar *a* (ca-element '(3 (4 . :e1) (5 . :e1-2-3)) :p 3))
(defvar *a* (ca-element :s 3 :e1 4 :e1-2-3 5 :p 3))

Similarly, I will define something akin to (aref …) so that one can check or change a_{1,2} like one of the following:

(caref *a* 1 2)
(caref *a* :e1-2)
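To make the index-list idea concrete, here is a minimal sketch of how (caref …) might map numeric indices onto the nested-vector layout above. The body is hypothetical and hard-wires p = 3; it is not a settled API.

(defun caref (element &rest indices)
  ;; Assumes the layout #(a_0 #(a_1 a_2 a_3) #(a_{1,2} a_{1,3} a_{2,3}) a_{1,2,3})
  ;; and numeric indices only.
  (case (length indices)
    (0 (aref element 0))                              ; scalar part a_0
    (1 (aref (aref element 1) (1- (first indices))))  ; a_1, a_2, a_3
    (2 (destructuring-bind (i j) indices              ; a_{1,2}, a_{1,3}, a_{2,3}
         (aref (aref element 2)
               (cond ((and (= i 1) (= j 2)) 0)
                     ((and (= i 1) (= j 3)) 1)
                     (t 2)))))
    (3 (aref element 3))))                            ; pseudoscalar a_{1,2,3}

A general version would need to compute the position of an arbitrary index set within its grade, which is exactly the kind of bookkeeping I would rather hide from the caller.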

By analogy with the complex numbers, I should instead have:

(e1-2 *a*)

But, that just doesn’t scale very well.

I am leaning toward using a list of numbers to tell which a_* is being referenced. This would allow greater flexibility in other functions. But, it’s also less mathy.

Does anyone have any suggestions? Should I really be supporting both? Through the same function names or through (caref …) and (caref* …) or something?

Another Iteration on Iterated Functions May 19th, 2009
Patrick Stein

Earlier, I started exploring iterated functions trying to make some way-points between Mandelbrot set iterations. In that post, I left off with the following: if g(x) = x\sqrt{a} + \frac{b}{1 + \sqrt{a}}, then g^2(x) = ax + b. Today, I am going to tackle the more general case of finding g(x) so that g^n(x) = ax + b.

First, a caveat

You can see already in the above that we had some arbitrary choices. We could have chosen either the positive or negative square root of a. This is essentially a death knell for my dream of coming up with an equation to do an arbitrary number of partial iterations in a completely natural way.

Okay, so it’s not the end-all-and-be-all. It’s not unique. At least, it is more natural than pretending like things proceed directly from one iteration to the next in a straight line or along some fitted curve.

In the end, what I really want is some way to define f^{\frac{1}{n}}(x) that isn’t too ugly compared to f(x) itself. I want f^{\frac{1}{n}}(x) to be continuous in the sense that if you pick some positive \delta, then I can come up with some n so that | f^{\frac{a+1}{n}}(x) - f^{\frac{a}{n}}(x) | < \delta for all a \in \{ 0, 1, 2, \ldots, n-1 \} for every x in some useful region.

I believe there will be a large class of functions for which this is possible. With f(x) = x + b, defining f^{\frac{1}{n}}(x) = x + \frac{b}{n} is continuous in this sense.

Alright, back to work

We want to find g(x) so that g^n(x) = f(x) = ax + b. Again, we’re going to hope that it is of the form: \alpha{x} + \beta. (In light of the fact that things aren’t unique, maybe I should be saying we’re going to hope that something of this form works.) So, let’s assume we have such a g(x) = \alpha{x} + \beta.

Then, g^n(x) = \alpha\left( g^{n-1}(x) \right) + \beta = \alpha\left( \alpha\left( g^{n-2}(x) \right) + \beta\right) + \beta, and so on. We can show by induction on n that the general case is g^n(x) = \alpha^n x + \beta \sum_{k=0}^{n-1} \alpha^k. To get this to come out to ax + b, we then need \alpha = \sqrt[n]{a} and \beta = \frac{b}{\sum_{k=0}^{n-1} \sqrt[n]{a^k}}. Since that sum is geometric, \sum_{k=0}^{n-1} \sqrt[n]{a^k} = \frac{1 - a}{1 - \sqrt[n]{a}} (for a \neq 1), so this simplifies to \beta = \frac{b\left(1 - \sqrt[n]{a}\right)}{1 - a}.
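As a quick sanity check, here is some throwaway code of my own (valid for positive a with a \neq 1, so the root stays real and \beta is defined):

(defun check-partial-iterate (a b n x)
  ;; Iterate g(x) = alpha*x + beta n times; the first return value
  ;; should match the second, a*x + b.
  (let* ((alpha (expt a (/ 1.0d0 n)))
         (beta (/ (* b (- 1 alpha)) (- 1 a)))
         (y x))
    (dotimes (i n)
      (setf y (+ (* alpha y) beta)))
    (values y (+ (* a x) b))))

For example, (check-partial-iterate 2.0d0 3.0d0 4 1.0d0) returns two values that both come out to 5.0d0, up to rounding.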

What about non-linear iterations?

The iteration in the Mandelbrot set uses f(z) = z^2 + c. Can we find a candidate for f^{\frac{1}{n}}(z)?

Let’s start with just f(z) = z^2 for a moment without the c involved. It’s going to be trickier to get something that comes out squared. With constant functions, you can feed the result back in as often as you like and still be constant. With linear functions, you can feed them back upon themselves as often as you like and still be linear. With something squared, you end up with something to the fourth power when you feed it back upon itself. But, maybe we can do with the exponents what we did with the constants in the previous section.

If we let g(z) = z^\alpha, then g^n(z) = \left(\left(z^\alpha\right)^\alpha\ldots\right)^\alpha = z^{\alpha^n}. So, if we let \alpha = \sqrt[n]{2}, then we have g^n(z) = f(z) = z^2. I believe this is continuous in the way that I want it to be. I will have to check that at some point though.
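In the meantime, here is the same kind of throwaway check for this case, restricted to real z > 0 to stay away from branch cuts:

(defun check-square-iterate (n z)
  ;; Iterate g(z) = z^alpha n times with alpha = 2^(1/n); the first
  ;; return value should match the second, z^2.
  (let ((alpha (expt 2.0d0 (/ 1.0d0 n)))
        (y z))
    (dotimes (i n)
      (setf y (expt y alpha)))
    (values y (* z z))))

For instance, (check-square-iterate 5 1.5d0) returns roughly 2.25d0 for both values.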

What happens now though if we add back in a constant? Let’s try g(z) = z^\alpha + \beta. Here is g^3(z): \left( \left( z^\alpha + \beta \right)^\alpha + \beta \right)^\alpha + \beta. That is outright ugly. I’m not even sure where to start tackling that. So, I think I’ll call it a day here at the keyboard and start hitting the whiteboard.

What Do You Mean By Close? May 18th, 2009
Patrick Stein

One of the blogs that I recently added to my RSS reader is God Plays Dice. That blog has quite a number of mathematical looks at everyday questions. One question, in particular, caught my attention: Which two states are closest together?

Your first reaction is probably, “Lots of states have a neighbor that’s zero miles away!” But then you figure that’s not a very interesting question. He must have had a more interesting question than that in mind. What was it?

Within that article, he limits this to states which do not share any boundary points (intersection of the closures of their interiors is empty).

That’s not where I thought he was going to go though. I thought he was going to measure the distance from state A to state B differently. I thought he was going to say that the distance from state A to state B is the average distance from points in state A to their nearest points in state B. In other words, the distance from state A to state B is the expected minimum distance a crow in state A would need to fly to get to state B (assuming crows are equiprobable at all spots).
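In symbols, I would formalize that as something like d(A, B) = \frac{1}{\operatorname{area}(A)} \int_A \min_{q \in B} \lVert p - q \rVert \, dp: the average, over points p in A, of the distance from p to the nearest point of B. (Strictly, the min should be an infimum, but states are closed and bounded, so it is attained.)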

This is interesting because, in particular, it is not symmetric. The distance from state A to state B may be greater than the distance from state B to state A. Consider Louisiana and Texas, for example. In general, mathematical distance functions are required to be symmetric. I will have to explore what things break when they are not.

Flooded with Math May 18th, 2009
Patrick Stein

I am always on the lookout for Math-focused blogs to add to my RSS reader. This weekend, I went through the blogs already on my list and followed the sidebar Blogrolls and Links on them. I added another twenty-ish blogs.

Now, I am up to my neck in old math articles. I’ve gotten it down now from 877 to 623 articles. There is lots to share from what’s out there. At the moment, I will just leave you with a few:
