Lisp History July 12th, 2009
Patrick Stein

To make sure that I’m extra prepared for my upcoming talk on Lisp Macros, I’ve been scouring the net reading everything I can find that other folks have written about macros.

This set of lecture notes is a really nice explanation of basic Lisp macros and the whole backquote/comma paradigm. I think it’s off base with its Lisp history though.

It says:

In the late 1950’s, John McCarthy invented Lisp to cover many common weaknesses of languages such as FORTRAN, COBOL and ALGOL. Thus, Lisp was the first language to truly embrace concepts such as recursion, first-class treatment of functions (i.e. they are treated just like any other data), weak typing, and macros.

Maybe I am way off base, but my understanding was that McCarthy made Lisp because FORTRAN was useful for computation and nigh impossible for proving anything. Turing machines were useful for proving things and nigh impossible for computation. My understanding was that Lisp was created to be something easier to compute in than a Turing machine yet easier to prove things in than the other languages at the time. My understanding was that Lisp macros came around three or four years later.

I suppose it’s arguable that not being able to prove things in it is a common weakness in programming languages. Heck, you’d be hard pressed to prove anything about Common Lisp without restricting yourself to a ridiculously small subset, and Lisp is way better off than Ruby, Python, Java, C, C#, C++, etc. I suppose it’s also arguable that the lecture notes don’t claim that McCarthy invented Lisp with macros in situ, but rather that Lisp embraced them quickly.

Me, I think it was kind of an accident. The very same things that made it easy to prove things about Lisp programs also made it easy to graft in macros. The functional nature, played out in s-expressions, is suited for macros in a way that infix languages will never be.
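Just to illustrate what I mean (a toy example of my own, not anything from the lecture notes): a macro receives its arguments as raw s-expressions and simply returns the s-expression it wants the compiler to see, with backquote and comma acting as the template syntax.

(defmacro unless* (test &body body)
  ;; toy macro: build the replacement s-expression with backquote/comma
  `(if ,test
       nil
       (progn ,@body)))

;; (unless* (zerop n) (/ 1 n))
;; expands to (if (zerop n) nil (progn (/ 1 n)))

The macro is just list surgery. Doing the same thing in an infix language means parsing and regenerating program text rather than consing up a list.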

“Visualizing Quaternions” by Andrew J. Hanson July 10th, 2009
Patrick Stein

Sunday night, I finished reading Visualizing Quaternions by Andrew J. Hanson. Unfortunately, the events of the week have kept me from writing this summary sooner. I feel like it would have been longer had I written it Monday. Alas, let’s play the cards in front of me.

This is a wonderful book. In addition to having gobs and gobs of useful content, it is a fine specimen of book design. The layout and illustrations are tremendous.

It tends to go from well-written, conversational passages that explain things in great detail to terse passages that mean nothing if you’re not a tensor-analysis whiz. But, even if you skim the parts that assume you know things you don’t, there’s still lots of book here to read.

There is even a nice appendix about Clifford algebras, which folds in nicely with the complex number, quaternion, and Clifford algebra posts that I’ve made here recently. If you do three-dimensional simulations or you like a good mathematical read, you should give this book a look.

TC Lispers Meeting — July 14th, 2009 July 7th, 2009
Patrick Stein

The second TC Lispers Meeting will be July 14th from 6:00pm to 8:00pm. It will be at the Common Roots Cafe on 26th and Lyndale.

Please check out the agenda at tclispers.org, and fill out the Are You Coming? poll over there, too.

Clifford Algebras for Rotating, Scaling, and Translating Space July 6th, 2009
Patrick Stein

In (very much) earlier articles, I described:

Today, it is time to tackle rotating, translating, and scaling three-dimensional space using Clifford algebras.

Three dimensions now instead of two

Back when we used Clifford algebras to rotate, translate, and scale the plane, we were using the two-dimensional Clifford algebra. With the two-dimensional Clifford algebra, we represented two-dimensional coordinates (x,y) as xe_1 + ye_2. It shouldn’t surprise you then to find we’re going to represent three-dimensional coordinates (x,y,z) as xe_1 + ye_2 + ze_3.

As before, we will have e_1e_1 = 1 and e_2e_2 = 1. Similarly, we will have e_3e_3 = 1. In the two-dimensional case, we showed that e_1e_2 = -e_2e_1. By the same logic as the two-dimensional case, we also find that e_1e_3 = -e_3e_1 and e_2e_3 = -e_3e_2. We could potentially also end up multiplying e_1, e_2, and e_3 all together. This isn’t going to be equal to any combination of the other things we’ve seen, so we’ll just leave it written e_1e_2e_3.
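To see how those rules play out (this is just my own worked example, using nothing beyond the relations above), multiply two vectors and collect terms:

(a_1e_1 + a_2e_2 + a_3e_3)(b_1e_1 + b_2e_2 + b_3e_3)
    = (a_1b_1 + a_2b_2 + a_3b_3)
      + (a_1b_2 - a_2b_1)e_1e_2
      + (a_1b_3 - a_3b_1)e_1e_3
      + (a_2b_3 - a_3b_2)e_2e_3

The scalar part is the familiar dot product, and the coefficients on e_1e_2, e_1e_3, and e_2e_3 are (up to sign) the components of the cross product.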

Read the rest of this entry ⇒

Why Not Return A Function? July 2nd, 2009
Patrick Stein

This morning, I was catching up on the RSS feeds I follow. I noticed an interesting snippet of code in the Abstract Heresies journal for the Discrete-Time Fourier Transform. Here is his snippet:

(define (dtft samples)
  (lambda (omega)
    (sum 0 (vector-length samples)
        (lambda (n)
          (* (vector-ref samples n)
             (make-polar 1.0 (* omega n)))))))

My first thought was, “That’s way too short.” Then, I started reading through it. My next thought was, “Maybe I don’t understand Scheme at all.” Then, my next thought was, “I do understand this code; it just didn’t do things the way I expected.”

Here, I believe, is a decent translation of the above into Common Lisp:

(defun dtft (samples)
  #'(lambda (omega)
      (loop for nn from 0 below (length samples)
           summing (let ((angle (* omega nn)))
                     (* (aref samples nn)
                        (complex (cos angle) (sin angle)))))))

Now, what I find most interesting here is that most implementations you’ll find for the DTFT (Discrete-Time Fourier Transform) take an array of samples and a wavelength and return a result. This, instead, returns a function which you can call with a wavelength omega. It returns an evaluator. This is an interesting pattern that I will have to try to keep in mind. I have used it in some places before. But, I am sure there are other places where I should have used it and missed. This is one where I would have missed.

Usage for the above would go something like:

(let ((dd (dtft samples)))
  (funcall dd angle1)
  (funcall dd angle2)
  ...)
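One nice side effect of getting an evaluator back (this is my own usage sketch, not something from the original post): the closure drops right into the usual higher-order functions. For instance, to sample the transform at a handful of frequencies:

(let ((dd (dtft samples)))
  ;; samples is assumed to be a vector of real-valued samples
  (mapcar dd (list 0.0 (/ pi 4) (/ pi 2))))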

For those not into Lisp either (who are you?), here is a rough translation into C++.

#include <cmath>
#include <complex>

class DTFT {
  unsigned int len;
  double* samples;

public:
  DTFT( double* _samples, unsigned int _len ) {
    this->len = _len;
    this->samples = new double[ this->len ];
    for ( unsigned int ii=0; ii < this->len; ++ii ) {
      this->samples[ ii ] = _samples[ ii ];
    }
  }

  ~DTFT( void ) {
    delete[] this->samples;
  }

  std::complex< double > operator () ( double omega ) const {
    std::complex< double > sum( 0.0, 0.0 );
    for ( unsigned int ii=0; ii < this->len; ++ii ) {
      sum += this->samples[ ii ] * std::polar( 1.0, omega * ii );
    }
    return sum;
  }
};

With usage like:

DTFT dd( samples, 1024 );
dd( angle1 );
dd( angle2 );
...

So, six lines of Scheme or Lisp. Twenty-five lines of C++, including the explicit definition of a class to act as a pseudo-closure, explicit copying and management of the samples buffer, etc. I suppose a more direct translation would have used a std::vector to hold the samples or would have just kept a pointer to the buffer. That would have shaved off six or seven lines and the whole len and _len variables.
