## The Anti-Cons

February 27th, 2012, by Patrick Stein

### Motivation

I no longer remember what led me to this page of synthetic division implemented in various languages. The author provides a Common Lisp implementation that takes a list representing the coefficients of a polynomial in one variable $x$ and a number $\alpha$, and returns the result of dividing the polynomial by $(x-\alpha)$.

The author states, "I’m very sure this isn’t considered Lispy and would surely seem like an awkward port from an extremely Algol-like mindset in the eyes of a seasoned Lisper." In the mood for the exercise, I reworked his code snippet into slightly more canonical Lisp while leaving the basic structure the same:

(defun synthetic-division (polynomial divisor)
  (let* ((result (first polynomial))
         (quotient (list result)))
    (dolist (coefficient (rest polynomial))
      (setf result (* result divisor))
      (incf result coefficient)
      (push result quotient))
    (let ((remainder (pop quotient)))
      (list :quotient (nreverse quotient) :remainder remainder))))

From there, I went on to implement it using tail recursion to get rid of the setf, incf, and push:

(defun synthetic-division-2 (polynomial divisor)
  (labels ((divide (coefficients remainder quotient)
             (if coefficients
                 (divide (rest coefficients)
                         (+ (* divisor remainder) (first coefficients))
                         (list* remainder quotient))
                 (list :quotient (reverse quotient) :remainder remainder))))
    (divide (rest polynomial) (first polynomial) nil)))

What I didn’t like about this was the complexity of calling the tail-recursive portion. If I just called it the way I wished to, (divide polynomial 0 nil), then I ended up with one extra coefficient in the answer. This wouldn’t do.
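To make the result format concrete, here is a quick check (repeating the definition from above so the snippet stands alone): dividing $x^3 - 2x^2 - 4$ by $(x - 3)$ gives quotient $x^2 + x + 3$ and remainder $5$.

```lisp
;; Repeating synthetic-division-2 so this snippet stands alone.
(defun synthetic-division-2 (polynomial divisor)
  (labels ((divide (coefficients remainder quotient)
             (if coefficients
                 (divide (rest coefficients)
                         (+ (* divisor remainder) (first coefficients))
                         (list* remainder quotient))
                 (list :quotient (reverse quotient) :remainder remainder))))
    (divide (rest polynomial) (first polynomial) nil)))

;; x^3 - 2x^2 + 0x - 4 divided by (x - 3):
(synthetic-division-2 '(1 -2 0 -4) 3)
;; => (:QUOTIENT (1 1 3) :REMAINDER 5)
```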

### The Joke

There’s an old joke about a physicist, a biologist, and a mathematician who were having lunch at an outdoor café. Just as their food was served, they noticed a couple walk into the house across the street. As they were finishing up their meal, they saw three people walk out of that very same house.

The physicist said, "We must have miscounted. Three must have entered before."

The biologist said, "They must have procreated."

The mathematician said, "If one more person enters that house, it will again be empty."

### The Anti-Cons

What I needed was a more-than-empty list. I needed a negative cons-cell. I needed something to put in place of the nil in (divide polynomial 0 nil) that would annihilate the first thing it was cons-ed to.

I haven’t come up with the right notation to make this clear. It is somewhat like a quasigroup except that there is only one inverse element for all other elements. Let’s denote this annihilator: $\omega$. Let’s denote list concatenation with $\oplus$.

Only having one inverse-ish element means we have to give up associativity. For lists $a$ and $b$, evaluating $(a \oplus \omega) \oplus b$ equals $b$, but $a \oplus (\omega \oplus b)$ equals $a$.

Associativity is a small price to pay, though, for a prettier call to my tail-recursive function, right?

### The Basic Operations

For basic operations, I’m going to need the anti-cons itself and a lazy list of an arbitrary number of anti-conses.

(defconstant anti-cons 'anti-cons)

(defclass anti-cons-list ()
  ((length :initarg :length :reader anti-cons-list-length))
  (:default-initargs :length 1))

(defmethod print-object ((obj anti-cons-list) stream)
  (format stream "#<~S>"
          (loop :for ii :from 1 :to (anti-cons-list-length obj)
                :collecting 'anti-cons)))

Then, I’m going to make some macros to define generic functions, each named by adding a minus sign to the end of a Common Lisp function name. The default method will simply call the Common Lisp function.

(defmacro defun- (name (&rest args) &body methods)
  (let ((name- (intern (concatenate 'string (symbol-name name) "-"))))
    `(defgeneric ,name- (,@args)
       (:method (,@args) (,name ,@args))
       ,@methods)))

I’m even going to go one step further for single-argument functions where I want to override the behavior for my lazy list of anti-conses with a single form:

(defmacro defun1- (name (arg) a-list-form &body body)
  `(defun- ,name (,arg)
     (:method ((,arg anti-cons-list)) ,a-list-form)
     ,@body))

I need #'cons- to set the stage. I need to be able to cons an anti-cons with a normal list. I need to be able to cons an anti-cons with a list of anti-conses. And, I need to be able to cons something other than an anti-cons with a list of anti-conses.

(defun- cons (a b)
  (:method ((a (eql 'anti-cons)) (b list))
    (if (null b)
        (make-instance 'anti-cons-list)
        (cdr b)))

  (:method ((a (eql 'anti-cons)) (b anti-cons-list))
    (make-instance 'anti-cons-list :length (1+ (anti-cons-list-length b))))

  (:method (a (b anti-cons-list))
    (let ((b-len (anti-cons-list-length b)))
      (when (> b-len 1)
        (make-instance 'anti-cons-list :length (1- b-len))))))

Now, I can go on to define some simple functions that can take either anti-cons lists or regular Common Lisp lists.

(defun1- length (a) (- (anti-cons-list-length a)))

(defun1- car (a) :anti-cons)

(defun1- cdr (a)
  (let ((a-len (anti-cons-list-length a)))
    (when (> a-len 1)
      (make-instance 'anti-cons-list :length (1- a-len)))))

(defun1- cadr (a) (when (> (anti-cons-list-length a) 1) :anti-cons))
(defun1- caddr (a) (when (> (anti-cons-list-length a) 2) :anti-cons))

To give a feel for how this all fits together, here’s a little interactive session:

ANTI-CONS> (cons- anti-cons nil)
#<(ANTI-CONS)>

ANTI-CONS> (cons- anti-cons *)
#<(ANTI-CONS ANTI-CONS)>

ANTI-CONS> (cons- :a *)
#<(ANTI-CONS)>

ANTI-CONS> (length- *)
-1

### Denouement

Only forty or fifty lines of code to go from:

(divide (rest polynomial) (first polynomial) nil)

To this:

(divide polynomial 0 (cons- anti-cons nil))

Definitely worth it.

## Complex Numbers for Rotating, Translating, and Scaling the Plane

June 7th, 2009, by Patrick Stein

A good friend of mine recently discovered some of the fun things you can do with complex numbers if you’re using them to represent points in the plane. Yesterday, I re-read a passage by Tony Smith about why one should be interested in Clifford algebras. Tony Smith’s passage included all of the fun one can have with the complex plane and extended it to three, four, five, and more dimensions. I thought, "I should segue from the complex numbers in the plane, to Clifford algebras, to quaternions in 3-space, to Clifford algebras again, in a series of posts here."

### What Are Complex Numbers?

Say you’re playing around with polynomials. You start playing with the equation $z^2 - 1 = 0$. With a little fiddling, you find this is equivalent to $z^2 = 1$. Then, you take the square root of both sides to find that $z = \pm \sqrt{1} = \pm 1$. We started with a polynomial equation in one variable in which the highest exponent was two, and we found two answers.

Pounding your chest and sounding your barbaric yawp, you move on to $z^2 + 1 = 0$. This should be easy, right? With the same fiddling, we find $z^2 = -1$ and then $z = \pm \sqrt{-1}$.

Uh-oh. What do we do now? We can’t think of any number that, when multiplied by itself, gives us a negative number. If we start with zero, we end with zero. If we multiply a positive number by itself, we get a positive number. And if we multiply a negative number by itself, we also get a positive number!
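As an aside (mine, not the post's): Common Lisp has exactly the number this puzzle calls for built in. The literal #C(0 1) is the imaginary unit $i$, and squaring it gives $-1$:

```lisp
;; #C(0 1) is the imaginary unit i; squaring it yields -1.
;; (A complex number with rational parts and zero imaginary part is
;; canonicalized back to a rational, so the result prints as -1.)
(* #C(0 1) #C(0 1))  ; => -1
```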

## Trying to Short-Stop Iterated Functions

May 15th, 2009, by Patrick Stein

If a function is pretty fun, why not do it again?

My post yesterday about the Mandelbrot set got me thinking again about iterated functions. With the Mandelbrot set, you start with some complex number $c$ and with $z_0 = 0$. Then, you generate $z_{i+1}$ by doing $z_{i+1} = z_i^2 + c$.

Here’s another way to write this iteration. Let $f_c(z) = z^2 + c$. Let $z_0 = 0$ and $z_1 = f_c(z_0)$ and $z_2 = f_c(z_1) = f_c(f_c(z_0)) = f_c^2(z_0)$. In general, we write $f_c^n(z)$ to mean $f_c(f_c(\ldots f_c(z)))$ when there are $n$ copies of $f_c$. It is easy to verify then that $f_c^{a}(f_c^{b}(z)) = f_c^{a+b}(z)$.
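That composition rule is easy to check numerically. Here is a small sketch (the helper name iterate-fc is mine, not from the post):

```lisp
;; Apply f_c(z) = z^2 + c to z a total of n times.
(defun iterate-fc (c n z)
  (loop :repeat n :do (setf z (+ (* z z) c)))
  z)

;; f_c^5(0) and f_c^2(f_c^3(0)) perform the identical sequence of
;; operations, so they agree exactly:
(let ((c #C(0.25 0.1)))
  (= (iterate-fc c 5 0)
     (iterate-fc c 2 (iterate-fc c 3 0))))  ; => T
```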

### The Jam

For that post, I generated the image linked at the right here. Each level of that image represents one iteration of $f_c^{i+1}(0) = f_c(f_c^i(0))$ with each strand made up of four different $f_c$ functions with neighboring $c$'s.

One of the annoying things about that image is that I went directly from one iteration to the next. I had pondered using conventional splines or other polynomials to interpolate between iterations in an effort to smooth out the transitions. I didn’t go to the trouble for it though because any straightforward interpolation would be every bit as fake as the linear version. There is no provision in the iteration for $f_c^{\alpha}$ unless $\alpha$ is a non-negative integer.

Now the question is, can we fake one? Can we make some other function $g_c(z)$ so that $g_c^2(z) = f_c(z)$?

### Faking easy functions

Let’s start with an easier function. In fact, let’s start with the easiest function: $f(x) = 0$. It is obvious that $f^n(x) = f(x)$ for all positive integers $n$. As such, if we let $g(x) = 0$, it is trivially true that $g^2(x) = f(x)$. But, that was too easy.

Maybe $f(x) = x$ will be harder? No. Again, $f^2(x) = f(f(x)) = f(x)$.

### Faking translations

Let’s move up to something a little trickier. Let’s say that $f(x) = x + c$ for some constant $c$. Now, we see that $f^2(x) = f(f(x)) = f(x + c) = (x + c) + c = x + 2c$. What are we going to try for $g(x)$ then? Let’s try the obvious: $g(x) = x + \frac{c}{2}$. Hooray! $g^2(x) = f(x)$. So, it makes sense then to think of $f^{1/2}(x)$ as $x + \frac{c}{2}$. Similarly, we can think of $f^{1/n}(x)$ as $x + \frac{c}{n}$ for all positive integers $n$. In this way, we can make values $f^q(x)$ for any positive rational number $q$.
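In code, with exact rational arithmetic, applying the $n$-th fractional iterate $n$ times recovers $f$ exactly (the function name below is mine):

```lisp
;; For f(x) = x + c, treat x + c/n as f^(1/n).
(defun fractional-step (c n x)
  (+ x (/ c n)))

;; Applying f^(1/4) four times to x = 3 with c = 7 lands on 3 + 7:
(let ((x 3))
  (loop :repeat 4 :do (setf x (fractional-step 7 4 x)))
  x)  ; => 10
```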

### Faking translations with scaling

Now, we’re cooking. Let’s up the ante. Let’s try $f(x) = ax + b$. What are we going to try for $g(x)$ then? Let’s guess we can still use roughly the same form: $g(x) = \alpha x + \beta$. Then, $g^2(x) = \alpha^2 x + \alpha \beta + \beta$. If we want $g^2(x) = f(x)$, then we need $\alpha^2 = a$ and $(\alpha + 1)\beta = b$. This means that $\alpha = \sqrt{a}$ and $\beta = \frac{b}{1 + \sqrt{a}}$. In particular, notice that if $a$ and $b$ are both real numbers with $a$ less than zero, then $\alpha$ is pure imaginary and $\beta$ is complex.
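Here is a quick numeric sketch of that solution (the constructor name is mine). Note that with $a = -4$ the half-iterate's coefficients are complex even though $f$ itself is real:

```lisp
;; Build g(x) = (sqrt a)x + b/(1 + sqrt a), the half-iterate of
;; f(x) = ax + b.  For a < 0, (sqrt a) is a complex float.
(defun make-affine-half (a b)
  (let ((alpha (sqrt a)))
    (lambda (x) (+ (* alpha x) (/ b (+ 1 alpha))))))

;; g(g(x)) agrees with f(x) = -4x + 6 up to floating-point rounding:
(let ((g (make-affine-half -4 6)))
  (funcall g (funcall g 1.5)))  ; within rounding of (+ (* -4 1.5) 6)
```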

### TTFN

I have much more to say about iterated functions, but I will save some for the next iteration. Next time, I will start with that last case and calculate $f^q(x)$ for rational $q$.

## Finding Better Polynomials

May 12th, 2009, by Patrick Stein

Some time ago, I wrote a small domain-specific language for finding polynomials given their value or the value of their derivatives at particular points.

It occurred to me shortly after writing that code that I could easily extend it to include the value of its integral over a certain range. I didn’t get to tackling that right away, and that was a Good Thing. In the intervening time, it occurred to me that I could extend it to moments as well.

So, are you looking for a polynomial that is zero at both zero and one, has a derivative of zero at one, has a second derivative of negative one at one, whose integral from zero to one is one, and whose mean (first moment centered at zero) on the interval zero to one is one fourth?

(polynomial-to-string
(calculate-polynomial-subject-to
(value :at 0 :equals 0)
(value :at 1 :equals 0)
(derivative :at 1 :equals 0)
(nth-derivative 2 :at 1 :equals -1)
(integral :from 0 :to 1 :equals 1)
(mean :from 0 :to 1 :equals 1/4)))

Well, that would be $f(x) = \frac{149}{4}x - 163x^2 + 265x^3 - 190x^4 + \frac{203}{4}x^5$. (For some reason though, gnuplot thinks $f(x) = -1$, so no graph for you…)

Here is the source file.

Edit: I realized that gnuplot was off by one because it was truncating the fractions. So, I just changed the 4’s in the denominators to 4.0’s and Bob’s your uncle.
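The DSL itself lives in the linked source file, but the answer above is easy to verify independently with exact rational arithmetic (the helpers below are mine, not part of the DSL):

```lisp
;; Coefficients of f(x) = 149/4 x - 163 x^2 + 265 x^3 - 190 x^4 + 203/4 x^5,
;; listed from the x^0 term up.
(defparameter *coeffs* '(0 149/4 -163 265 -190 203/4))

;; Evaluate a coefficient list at x by summing c_k * x^k.
(defun poly-eval (coeffs x)
  (loop :for c :in coeffs
        :for p := 1 :then (* p x)
        :sum (* c p)))

;; Differentiate a coefficient list: c_k * x^k -> k * c_k * x^(k-1).
(defun poly-deriv (coeffs)
  (loop :for c :in (rest coeffs) :for k :from 1 :collect (* k c)))

;; All six constraints hold exactly:
(list (poly-eval *coeffs* 0)                            ; => 0
      (poly-eval *coeffs* 1)                            ; => 0
      (poly-eval (poly-deriv *coeffs*) 1)               ; => 0
      (poly-eval (poly-deriv (poly-deriv *coeffs*)) 1)  ; => -1
      (loop :for c :in *coeffs* :for k :from 0          ; integral on [0,1]
            :sum (/ c (+ k 1)))                         ; => 1
      (loop :for c :in *coeffs* :for k :from 0          ; first moment
            :sum (/ c (+ k 2))))                        ; => 1/4
```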

## Phony Physics (a.k.a. Fun with Interpolation)

April 3rd, 2009, by Patrick Stein

In a previous post, I mentioned looking for a polynomial for an application. That application involves clicking or dragging tiles around. Once you release a tile, I want it to snap to where it’s supposed to be.

The easiest way to do this is to just warp it to where it’s supposed to go. This isn’t very visually pleasing. The next best thing is to set up a time-interval on which it moves into place. Then, linearly interpolate from where it is to where it’s going (here with $t$ normalized to range between zero and one):

$(1-t)x_0 + tx_1$

That’s much better than just warping, but it doesn’t have any sort of fade-in or fade-out. It instantaneously has its maximum velocity and then instantaneously stops at the end.

Typically, one then uses a cubic spline to scale the $t$ value before interpolating, to get a speed-up in the beginning and a slow-down at the end.
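One standard choice of cubic (an assumption on my part; the excerpt doesn't name it) is the Hermite cubic $3t^2 - 2t^3$, whose derivative is zero at both $t = 0$ and $t = 1$:

```lisp
;; Ease-in/ease-out: remap normalized time t through 3t^2 - 2t^3
;; before the ordinary linear interpolation.  (t names a constant in
;; Common Lisp, so the parameter is called tt.)
(defun smoothstep (tt)
  (* tt tt (- 3 (* 2 tt))))

(defun ease-lerp (x0 x1 tt)
  (let ((s (smoothstep tt)))
    (+ (* (- 1 s) x0) (* s x1))))

;; The endpoints and midpoint are unchanged, but the velocity now
;; starts and ends at zero:
(ease-lerp 0.0 10.0 0.0)  ; => 0.0
(ease-lerp 0.0 10.0 0.5)  ; => 5.0
(ease-lerp 0.0 10.0 1.0)  ; => 10.0
```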