“Visualizing Quaternions” by Andrew J. Hanson July 10th, 2009
Patrick Stein

Sunday night, I finished reading Visualizing Quaternions by Andrew J. Hanson. Unfortunately, the events of the week have kept me from writing this summary sooner. I feel like it would have been longer had I written it Monday. Alas, let’s play the cards in front of me.

This is a wonderful book. In addition to having gobs and gobs of useful content, it is a fine specimen of book design. The layout and illustrations are tremendous.

It tends to go from well-written, conversational passages that explain things in great detail to terse passages that mean nothing if you’re not a tensor-analysis whiz. But, even if you skim the parts that assume you know things you don’t, there’s still lots of book here to read.

There is even a nice appendix about Clifford algebras which folds in nicely with the complex number, quaternions, Clifford algebra posts that I’ve made here recently. If you do three-dimensional simulations or you like a good mathematical read, you should give this book a look.

Clifford Algebras for Rotating, Scaling, and Translating Space July 6th, 2009
Patrick Stein

In (very much) earlier articles, I described rotating, translating, and scaling the plane using complex numbers and Clifford algebras.

Today, it is time to tackle rotating, translating, and scaling three-dimensional space using Clifford algebras.

Three dimensions now instead of two

Back when we used Clifford algebras to rotate, translate, and scale the plane, we were using the two-dimensional Clifford algebra. With the two-dimensional Clifford algebra, we represented two-dimensional coordinates (x,y) as xe_1 + ye_2. It shouldn’t surprise you, then, to find that we’re going to represent three-dimensional coordinates (x,y,z) as xe_1 + ye_2 + ze_3.

As before, we will have e_1e_1 = 1 and e_2e_2 = 1. Similarly, we will have e_3e_3 = 1. In the two-dimensional case, we showed that e_1e_2 = -e_2e_1. By the same logic, we also find that e_1e_3 = -e_3e_1 and e_2e_3 = -e_3e_2. We could potentially also end up multiplying e_1, e_2, and e_3 all together. That product isn’t equal to any combination of the things we’ve already seen, so we’ll just leave it written as e_1e_2e_3.
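These sign rules can be captured mechanically. Here is a small sketch of my own (not from the original post) that represents each basis blade of the three-dimensional Clifford algebra as a bitmask (bit 0 for e_1, bit 1 for e_2, bit 2 for e_3) and multiplies two blades by counting the transpositions needed to sort the generators into canonical order:

```cpp
#include <utility>

// Count the set bits in x (portable popcount).
int popcount(unsigned x) {
    int c = 0;
    for (; x != 0; x >>= 1) c += x & 1;
    return c;
}

// Count how many generators of blade `a` must move past generators of
// blade `b` to reach canonical order; each swap flips the sign, since
// e_i e_j = -e_j e_i for i != j.
int reordering_sign(unsigned a, unsigned b) {
    int swaps = 0;
    for (a >>= 1; a != 0; a >>= 1) {
        swaps += popcount(a & b);
    }
    return (swaps % 2 == 0) ? 1 : -1;
}

// Product of two basis blades: since e_i e_i = +1, repeated generators
// cancel, so the resulting blade is just the XOR of the two bitmasks.
std::pair<int, unsigned> blade_mul(unsigned a, unsigned b) {
    return std::make_pair(reordering_sign(a, b), a ^ b);
}
```

With e_1 = 1, e_2 = 2, and e_3 = 4, blade_mul(1, 2) yields (+1, e_1e_2) while blade_mul(2, 1) yields (-1, e_1e_2), and blade_mul(1, 1) gives (+1, 1), matching the rules above.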

Read the rest of this entry ⇒

Why Not Return A Function? July 2nd, 2009
Patrick Stein

This morning, I was catching up on the RSS feeds I follow. I noticed an interesting snippet of code in the Abstract Heresies journal for the Discrete-Time Fourier Transform. Here is his snippet of code:

(define (dtft samples)
  (lambda (omega)
    (sum 0 (vector-length samples)
        (lambda (n)
          (* (vector-ref samples n)
             (make-polar 1.0 (* omega n)))))))

My first thought was, That’s way too short. Then, I started reading through it. My next thought was, Maybe I don’t understand Scheme at all. Then, my next thought was, I do understand this code; it just doesn’t do things the way I expected.

Here, I believe is a decent translation of the above into Common Lisp:

(defun dtft (samples)
  #'(lambda (omega)
      (loop for nn from 0 below (length samples)
           summing (let ((angle (* omega nn)))
                     (* (aref samples nn)
                        (complex (cos angle) (sin angle)))))))

Now, what I find most interesting here is that most implementations you’ll find for the DTFT (Discrete-Time Fourier Transform) take an array of samples and a wavelength, and return a result. This, instead, returns a function which you can call with a wavelength omega. It returns an evaluator. This is an interesting pattern that I will have to try to keep in mind. I have used it in some places before. But, I am sure there are other places where I should have used it and missed. This is one where I would have missed it.

Usage for the above would go something like:

(let ((dd (dtft samples)))
  (dd angle1)
  (dd angle2)
  ...)

For those not into Lisp either (who are you?), here is a rough translation into C++.

#include <cmath>
#include <complex>

class DTFT {
  unsigned int len;
  double* samples;

public:
  DTFT( double* _samples, unsigned int _len ) {
    this->len = _len;
    this->samples = new double[ this->len ];
    for ( unsigned int ii=0; ii < this->len; ++ii ) {
      this->samples[ ii ] = _samples[ ii ];
    }
  }

  ~DTFT( void ) {
    delete[] this->samples;
  }

  std::complex< double > operator () ( double omega ) const {
    std::complex< double > sum( 0.0, 0.0 );
    for ( unsigned int ii=0; ii < this->len; ++ii ) {
      sum += this->samples[ ii ] * std::polar( 1.0, omega * ii );
    }
    return sum;
  }
};

With usage like:

DTFT dd( samples, 1024 );
dd( angle1 );
dd( angle2 );
...

So, six lines of Scheme or Lisp. Twenty-five lines of C++, including the explicit definition of a class to act as a pseudo-closure, explicit copying and management of the samples buffer, etc. I suppose a more direct translation would have used a std::vector to hold the samples (or simply kept a pointer to the caller’s buffer). That would have shaved off six or seven lines and eliminated the len and _len variables entirely.
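For what it’s worth, with a C++11 (or later) compiler the closure translates almost directly: a lambda capturing a std::vector by value plays the role of the Scheme closure. This is my own sketch, not from the original post:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <functional>
#include <vector>

// Returns an evaluator, just like the Scheme version: call the result
// with a frequency omega to get the DTFT of the captured samples there.
std::function<std::complex<double>(double)>
dtft(std::vector<double> samples) {
    return [samples](double omega) {
        std::complex<double> sum(0.0, 0.0);
        for (std::size_t n = 0; n < samples.size(); ++n) {
            sum += samples[n] * std::polar(1.0, omega * n);
        }
        return sum;
    };
}
```

Usage looks just like the Lisp version: auto dd = dtft(samples); then dd(angle1), dd(angle2), and so on.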

Speedy Matrix Multiplication in Lisp, Again… June 30th, 2009
Patrick Stein

In response to my earlier post about hidden memory allocations, Jason Cornez pointed out how to rework my loop so that Allegro wouldn’t need to allocate memory. The new structure avoids using the summing keyword in (loop …) and doesn’t use the return-value from (loop …).

(declaim (ftype (function ((simple-array single-float (12))
                           (simple-array single-float (3))
                           (simple-array single-float (3)))
                          (simple-array single-float (3))) mvl*-na))
(defun mvl*-na (matrix vec ret)
  (declare (type (simple-array single-float (12)) matrix)
           (type (simple-array single-float (3)) vec)
           (type (simple-array single-float (3)) ret)
           (optimize (speed 3) (safety 0)))
  (loop for jj fixnum from 0 below 3
     do (let ((offset (* jj 4)))
          (declare (type fixnum offset))
          (let ((current (aref matrix (+ offset 3))))
            (declare (type single-float current))
            (loop for ii fixnum from 0 below 3
               for kk fixnum from offset below (+ offset 3)
               do (incf current
                        (* (aref vec ii)
                           (aref matrix kk))))
            (setf (aref ret jj) current))))
  ret)
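For readers more comfortable with C-style code, here is my own sketch of the same computation: the 12-element array is treated as three rows of four floats, with the fourth column of each row seeding that row’s sum (a translation term, in effect):

```cpp
// Same computation as mvl*-na: `matrix` is three rows of four floats;
// column 3 of each row seeds that row's running sum.
void mvl_na(const float matrix[12], const float vec[3], float ret[3]) {
    for (int jj = 0; jj < 3; ++jj) {
        const float *row = matrix + 4 * jj;  // offset = jj * 4
        float current = row[3];              // (aref matrix (+ offset 3))
        for (int ii = 0; ii < 3; ++ii) {
            current += vec[ii] * row[ii];    // (incf current ...)
        }
        ret[jj] = current;
    }
}
```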

Now, Allegro moved up in the rankings. All of the implementations (except 32-bit Clozure) ran slightly faster with this version. Allegro improved the most, though, as its allocation and garbage dropped to nothing.

                           wall      non-GC      alloced
SBCL 1.0.29                0.406s    0.406s      0
CMUCL 19f                  0.539s    0.539s      32
Allegro 8.1 (Personal)     0.720s    0.720s      0
Clozure-64bit 1.3-r11936   1.100s    1.100s ??   0
Clozure-32bit 1.3-r11936   6.884s    5.023s      1,680,000,000
LispWorks 5.1 (Personal)   12.587s   ??          3,120,000,516
ECL 9.6.2                  30.875s   ??          17,280,000,256
GNU CLISP 2.47             88.008s   74.455s     2,160,000,000

CMUCL compiled with 30 compiler notes. I should look into those some time. It still ran quite fast, though.

Allegro definitely liked this version better; it didn’t allocate anything at all.

For comparison, I am including the previous runtimes here as well:

                           wall      non-GC      alloced
SBCL 1.0.29                0.444s    0.444s      0
CMUCL 19f                  0.567s    0.567s      0
Clozure-64bit 1.3-r11936   1.272s    1.272s ??   0
Clozure-32bit 1.3-r11936   5.009s    4.418s      1,200,000,000
Allegro 8.1 (Personal)     6.131s    2.120s      1,440,000,000
LispWorks 5.1 (Personal)   14.054s   ??          3,360,000,480
ECL 9.6.2                  33.009s   ??          18,240,000,256
GNU CLISP 2.47             93.190s   77.356s     2,520,000,000

To CLOS or not to CLOS June 29th, 2009
Patrick Stein

I am working on some Lisp code. I am trying to mimic the basic structure of a large C++ project. I think the way the C++ project is structured is a good fit for the tasks involved.

Most of the C++ stuff is done with classes. Most of the methods of those classes are virtual methods. Many of the methods will be called multiple times every hundredth of a second. Very few of the virtual methods ever get overridden by subclasses.

So, I got to thinking. Does it make sense for me to use CLOS stuff at all for most of this? Would it be significantly faster to use (defstruct …) and (defun …) instead of (defclass …) and (defgeneric …)?
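In C++ terms, the question is roughly the difference between these two shapes (my own sketch, with hypothetical names): a virtual method dispatched at run time versus a plain function on a plain struct, resolved at compile time, analogous to defgeneric on a defclass versus defun on a defstruct.

```cpp
// Hypothetical sketch: the CLOS-like shape uses dynamic dispatch
// through a virtual method...
class BaseClass {
public:
    virtual ~BaseClass() {}
    virtual float combine() const { return slot_one + slot_two; }
    float slot_one = 0.25f;
    float slot_two = 0.5f;
};

// ...while the defstruct-like shape uses an ordinary function on an
// ordinary struct, bound at compile time with no dispatch overhead.
struct BaseStruct {
    float slot_one = 0.25f;
    float slot_two = 0.5f;
};

float combine(const BaseStruct &s) { return s.slot_one + s.slot_two; }
```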

My gut instinct was: Yes. My first set of test code didn’t bear that out. For both the classes and the structs, I used (MAKE-INSTANCE …) to allocate and (WITH-SLOTS …) to access. When doing so, the classes with generic functions took about 1.5 seconds for 10,000,000 iterations under SBCL while the structs with non-generic functions took about 2.2 seconds.

For the next iteration of testing, I decided to use accessors instead of slots. I had the following defined:

(defclass base-class ()
  ((slot-one :initform (random 1.0f0) :type single-float
             :accessor class-slot-one)
   (slot-two :initform (random 1.0f0) :type single-float
             :accessor class-slot-two)))

(defclass sub-class (base-class)
  ((slot-three :initform (random 1.0f0) :type single-float
               :accessor class-slot-three)
   (slot-four  :initform (random 1.0f0) :type single-float
               :accessor class-slot-four)))

(defstruct (base-struct)
  (slot-one (random 1.0f0) :type single-float)
  (slot-two (random 1.0f0) :type single-float))

(defstruct (sub-struct (:include base-struct))
  (slot-three (random 1.0f0) :type single-float)
  (slot-four (random 1.0f0) :type single-float))

I then switched from using calls like (slot-value instance ‘slot-three) in my functions to using calls like (class-slot-three instance) or (sub-struct-slot-three instance). Now, 10,000,000 iterations took 2.6 seconds for the classes and 0.3 seconds for the structs in SBCL. In Clozure (64-bit), 10,000,000 iterations took 11.0 seconds with classes, method dispatch, and accessors; 6.1 seconds with classes and accessors but without method dispatch; and 0.4 seconds with structs and accessors (and no method dispatch).

There is much more that I can explore to tune this code. But, for now, I think the answer is fairly easy. I am going to use structs, functions, and accessors for as many cases as I can. Here is the generic-function, class, struct test code with accessors. The first loop uses classes and generic functions. The second loop uses classes and regular functions. The third loop uses structs and regular functions.
