Genetic Programming With clojure.spec

Published on:
Tags: All, Clojure

Clojure.spec is a new library for Clojure that enables you to write specifications for your programs. In an earlier post, I showed off some of its power to generate test data from your specifications. It’s a pretty cool feature: given some clojure.spec code, it can generate sample data for you based on the specifications. But what if you could write a program that would generate your clojure.spec program based on data, so that you could generate more test data?
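
To ground that first claim, here is what generating sample data from a spec looks like. This is a minimal sketch: it assumes clojure.spec is aliased as s and that test.check is on the classpath, as in the setup used later in this post.

(s/exercise (s/cat :n int? :s string?) 3)
;; returns 3 pairs of [generated-sequence conformed-map],
;; e.g. [(0 "") {:n 0, :s ""}] -- the exact values vary from run to run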

Genetic programming

Here is where we embark for fun. We are going to use genetic programming to generate clojure.spec creatures that contain a program. Through successive generations, those creatures will breed, mutate, and evolve to fit the data that we are going to give them. Going with our creature theme, we can say that a creature eats a sequence of data like this

["hi" true 5 10 "boo"]

Each creature will be represented by a map that has two key pieces of information: its program and its fitness score. Each program is going to start with a clojure.spec/cat (which is the spec that describes a sequence). From here on out, I’m going to refer to the clojure.spec namespace as s/. So, a simple creature would look like this.

{:program (s/cat :0 int? :1 string?)
 :score 0}

How do we figure out a score from the creature’s spec? We run the spec and see how much of the data it can successfully consume.

Scoring a creature

To score a creature, we’re going to use the clojure.spec explain-data function. It enables us to run a spec against some data and get back the problems in a data format that we can inspect. If there are no problems and the spec passes, the result is nil.

(s/explain-data (s/cat :0 int? :1 string?)  [1 "hi"])
;=> nil

However, if there is a problem, we can get information about what went wrong. In particular, we can see where it went wrong in the sequence.

(s/explain-data (s/cat :0 int? :1 string?)  [1 true])
;=> #:clojure.spec{:problems [{:path [:1], :pred string?, :val true, :via [], :in [1]}]}

In the above example, the :in key tells us that it fails at index 1. This gives us all the information we need to write a score function for our creature.

(defn score [creature test-data]
  (try
   (let [problems (:clojure.spec/problems (s/explain-data (eval (:program creature)) test-data))]
     (if problems
       (assoc creature :score (get-in problems [0 :in 0]))
       (assoc creature :score 100)))
   (catch Throwable e (assoc creature :score 0))))

This function tries to run the spec against the data. If there are no problems, the creature gets a score of 100. Otherwise, it records the farthest point in the sequence that it reached. Creatures with a higher score are considered more fit.

(score {:program '(s/cat :0 int? :1 string?)} [1 true])
;=> {:program (s/cat :0 int? :1 string?), :score 1}

Now that we have a fitness function to evaluate our creatures, we need a way to generate a random clojure.spec creature.

Create a random creature

This is where I really love Clojure. Code is data, so we can create the programs as lists and manipulate them like any other data. To run the programs, we just need to call eval on them. We are going to constrain the creatures somewhat. They are all going to start out with s/cat and have a certain number of items in the sequence. Also, we are only going to allow the parts of the spec to be created from certain predicates (defined below).
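
As a quick aside, here is that code-is-data idea in action (a tiny illustration of my own; prog and the sample values are made up):

;; a program is just a quoted list...
(def prog '(s/cat :0 int? :1 string?))
;; ...and eval turns that data back into a live spec we can use
(s/valid? (eval prog) [42 "fish"]) ;=> true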

(def preds ['integer? 'string? 'boolean? '(s/and integer? even?) '(s/and integer? odd?)])

We also allow composition with ands, ors, and other sequences.

(def seqs ['s/+ 's/*])
(def and-ors ['s/and 's/or])

We are also going to have some probability knobs to control how the random creature is constructed.

(def seq-prob 0.3)
(def nest-prob 0.00)
(def max-depth 4)
(def and-or-prob 0.85)

The seq-prob is the probability that a new spec sub-sequence will be constructed. The nest-prob is set to zero right now to keep things simple, but if turned up it will increase the chance that a nested spec sequence occurs. We are going to be writing a recursive function for generation, so we’ll keep things to a limited depth with max-depth. Finally, and-or-prob is the chance that, when constructing a spec sub-sequence, it will be an and/or. Putting it all together, here is the code to construct a random arg.

;; make-random-arg and make-random-seq call each other, so declare
;; make-random-seq (defined below) before using it here
(declare make-random-seq)

(defn make-random-arg [n]
  (if (and (pos? n) (< (rand) seq-prob))
    `~(make-random-seq n)
    `~(rand-nth preds)))

And here is the function that creates a random sub-sequence.

(defn make-random-seq [n]
  (cond

    (< (rand) nest-prob)
    `(s/spec (~(rand-nth seqs) ~(make-random-arg (dec n))))

    (< (rand) and-or-prob)
    `(~(rand-nth and-ors) ~(make-random-arg (dec n)) ~(make-random-arg (dec n)))

    :else
    `(~(rand-nth seqs) ~(make-random-arg (dec n)))))

Finally, we can create a random s/cat spec with

(defn make-random-cat [len]
  (let [args (reduce (fn [r i]
                       (conj r (keyword (str i))
                             (make-random-arg max-depth)))
                     []
                     (range len))]
    `(s/cat ~@args)))

Let’s see it in action.

(make-random-cat 3)
;=> (clojure.spec/cat :0 (s/and integer? odd?) :1 integer? :2 boolean?)

We can make a batch of new creatures for our initial population using this function.

(defn initial-population [popsize max-cat-length]
  (for [i (range popsize)]
    {:program (make-random-cat (inc (rand-int max-cat-length)))}))

Great! Now we have a way to make new random spec creatures. But, we need a way to alter them and let them evolve. The first way to do this is with mutation.

Mutating a creature

Mutation, in our case, means changing part of the code tree of the creature’s program. To keep the program runnable, we don’t want to mutate every node, only specific ones. We’re going to control this by defining a mutable? function that will only allow changes to nodes that start with one of our sequences or that are one of our predicates.

(defn mutable? [node]
  (or (when (seq? node)
        (contains? (set/union (set seqs) #{'clojure.spec/spec}) (first node)))
      (contains? (set preds) node)))

Then, we can use postwalk to walk the code tree and alter a node according to a mutation probability factor.

(def mutate-prob 0.1)

(defn mutate [creature]
  (let [program (:program creature)
        mutated-program (walk/postwalk
                         (fn [x] (if (and (mutable? x) (< (rand) mutate-prob))
                                  (make-random-arg max-depth)
                                  x)) program)]
    (assoc creature :program mutated-program)))

Trying it on one of our creatures.

(mutate {:program '(clojure.spec/cat :0 (s/and integer? odd?) :1 integer?)})
;=> {:program (clojure.spec/cat :0 (s/or (s/and integer? even?)) :1 integer?)}

We can change our creatures via mutation, but what about breeding them with other creatures?

Crossovers with creatures

Crossover is another way to modify programs. It takes two creatures and swaps a node from one creature into the other. To accomplish this, we’re going to use the walk function to select, at a random probability, the crossover node from the first creature’s program, then insert it into the second creature’s program at another random spot.

(def crossover-prob 0.7)

(defn crossover [creature1 creature2]
  (let [program1 (:program creature1)
        program2 (:program creature2)
        chosen-node (first (walk/walk
                            #(when
                                 (and (< (rand) crossover-prob)
                                      (mutable? %))
                               %)
                            #(remove nil? %) program1))
        crossed-over? (atom false)
        crossover-program (if chosen-node
                             (walk/postwalk
                              (fn [x]
                                (if (and (mutable? x)
                                         (< (rand) crossover-prob)
                                         (not @crossed-over?))
                                  (do (reset! crossed-over? true) chosen-node)
                                  x))
                              program2)
                             program2)]
    {:program crossover-program}))

Taking two creatures and putting them together.

(crossover {:program '(clojure.spec/cat :0 (s/and integer? odd?) :1 integer?)}
           {:program '(clojure.spec/cat :0 string? :1 boolean?)})
;=> {:program (clojure.spec/cat :0 (s/and integer? odd?) :1 boolean?)}

We have our ways to change our creatures to let them evolve, and we have a way to rank them. What we need now is to put it all together in a way that will let them evolve toward a solution.

Evolving creatures

The process, in general terms, will be:

  • Create initial population
  • Rank them
  • Take the top two best ones and carry them over (this is known as elitism)
  • Create the next generation by selecting creatures for crossover and mutation
  • Repeat!

So how do we select the best creatures for our next population? This is an interesting question; there are many approaches. The one that we’re going to use is called tournament selection. It involves picking n creatures from the whole population and then, among those, picking the best-scoring one. This allows the diversity in our population that is needed for proper evolution.

(defn select-best [creatures tournament-size]
  (let [selected (repeatedly tournament-size #(rand-nth creatures))]
    (-> (sort-by :score selected) reverse first)))

We’re now ready to write our evolve function. In it, we pass in the population size, how many generations we want, the tournament size, and of course, our test data that our creatures are going to feed on. The loop ends when it reaches a perfect fitting solution, (a creature with a score of 100), or the max generations.

Note that we have a chance for a completely random creature to appear in the generations, to further encourage diversity.
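
One housekeeping note: the evolve function below uses a new-node-prob knob that isn’t defined earlier in the post. Presumably it is just another probability def alongside the others; the value here is my placeholder, not taken from the original code.

;; assumed definition: the chance that a brand-new random creature
;; is injected into a generation (the value is a guess)
(def new-node-prob 0.05)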

(defn perfect-fit [creatures]
  (first (filter #(= 100 (:score %)) creatures)))

(defn evolve [pop-size max-gen tournament-size test-data]
  (loop [n max-gen
         creatures (initial-population pop-size (count test-data))]
    (println "generation " (- max-gen n))
    (let [scored-creatures (map (fn [creature] (score creature test-data)) creatures)]
     (if (or (zero? n) (perfect-fit scored-creatures))
       scored-creatures
       (let [elites (take 2 (reverse (sort-by :score scored-creatures)))
             new-creatures (for [i (range (- (count creatures) 2))]
                             ;; add a random node to improve diversity
                             (if (< (rand) new-node-prob)
                               {:program (make-random-cat (count test-data))}
                               (let [creature1 (select-best scored-creatures tournament-size)
                                     creature2 (select-best scored-creatures tournament-size)]
                                 (mutate (crossover creature1 creature2)))))]
         (println "best-scores" (map :score elites))
         (recur (dec n) (into new-creatures elites)))))))

Trying it out. We get a perfect clojure.spec creature!

(def creature-specs (evolve 100 100 7 ["hi" true 5 10 "boo"]))
(perfect-fit creature-specs)
;=> {:program (clojure.spec/cat :0 string? :1 boolean? :2 (s/and integer? odd?) :3 integer? :4 string?)
;    :score 100}

Of course, our clojure.spec creature can generate data on its own with the exercise function. Let’s have it generate 5 more examples of data that conform to its spec.

  (s/exercise (eval (:program (perfect-fit creature-specs))) 5)
;; ([("" true -1 -1 "") {:0 "", :1 true, :2 -1, :3 -1, :4 ""}]
;;  [("D" false -1 -1 "G") {:0 "D", :1 false, :2 -1, :3 -1, :4 "G"}]
;;  [("12" false -1 0 "l0") {:0 "12", :1 false, :2 -1, :3 0, :4 "l0"}]
;;  [("" false -1 -2 "") {:0 "", :1 false, :2 -1, :3 -2, :4 ""}]
;;  [("2" false 1 0 "Jro") {:0 "2", :1 false, :2 1, :3 0, :4 "Jro"}])

If we wanted to, we could adjust our evolve function and let it continue to evolve creatures, giving us lots of different solutions to choose from. We could even take the generated data from the exercise function and use it to generate more creatures, who generate more data…

The mind boggles.

We’ll leave off with a quick summary of Genetic Programming.

  • Start with a way to generate random creatures
  • Have a way to evaluate their fitness
  • Create a way to change them for the next generations using
    • Mutation
    • Crossover
  • Have an evolution process
    • Create an initial population
    • Rank them
    • Create the next generation using selection techniques and mutation/crossover
    • Don’t forget about diversity

Most importantly, have fun!

If you want to play with the code, it’s on GitHub here: https://github.com/gigasquid/genetic-programming-spec

If you want to learn more about clojure.spec this video is a great place to start. The guide is also a great reference with examples.

If you want to learn more about genetic programming, there are a couple of books I would recommend: Collective Intelligence and Genetic Algorithms + Data Structures = Evolution Programs

Hello World for the Next Generation

Published on:
Tags: All

I sit next to my daughter, showing her programming for the first time.

(+ 1 1)

“Now press enter.”

2

“Pretty cool, huh?”

She looks unimpressed. I fear I’m losing her. How can I explain that this is just a small tip of something so much bigger?

You can make the code sing to you.

You can take these numbers, turn them into notes, and line them up with the beat of your heart. Bring in the melody and chorus and build them up to a crescendo. Let it crash in waves and then

You can make the code dance for you.

You can create delicate swirls and patterns with mathematical expressions. Have them pulse to the music in a never ending prism of fractals, flexing your control with confidence because

You can make the code lift you up.

It doesn’t matter if you don’t look like them. It doesn’t matter if they think you don’t belong. They can’t hold you back. You’re smart and strong and

You can make the code create your life.

You can solve problems for people. Make things work better and faster. Keep the data flowing. Make a company for yourself. Watch that company and your power and influence in the world grow until nothing feels out of reach and then, if you’re not careful

You can make the code hard and cruel.

You can automate hate. Use the latest AI to keep them in control. Watch them with never sleeping eyes. Steal their money and point guns at them with armed robots. Then, late at night, you can think how

You can let the code control you.

You can forget the important things in life. Turn away from family and friends. Lose yourself in some self created digital representation of yourself that never feels smart enough and leaves you grasping for more. Until that day, when you walk the streets with a deadened heart and you see the sad faces all around and you remember that

You can let the code make them smile.

You can use your skills to brighten dark days. Use your programs to make them laugh. When you have their attention, inspire them to dream with you of a better world and next

You can make the code save lives.

You can turn those algorithms to heal. Dive in and join the battle against death and disease. Make sense of all the data. Then lift your head to the sky and

You can make the code reach the stars.

You can see the surface of Mars. Pick up a rock from a planet that was unimaginable generations before. Look out at what is beyond our solar system and peer into the mysteries of the beginning of time.

You can.

All these things are yours now. The terrible and beautiful power of it.

I reach down to type the code that distills my hopes and fears for the next generation.

(println "Hello World")

Then I slide the keyboard over to her, a tear sliding down my cheek, and lean over to whisper the only advice that I can form into words,

“Don’t forget the closing parens.”

Book Writing for the Busy Programmer

Published on:
Tags: All

So you want to write a book? Awesome. I’ve been working on one too for the last year.

No, it’s not really a programming book, but it does have code in it. It’s a sci-fi/fantasy book written for my ten-year-old daughter, but this post isn’t about that. It’s about sharing the tools and setup that I’ve found work best for me.

Tools for Writing

If the first thing you think of when you want to write a book is creating some really cool tools to help you, I can totally relate. It’s a programmer thing.

Hold on though, there’s another way.

Starting out with only my book idea, I spent some time looking at the best authoring tools out there. I knew that I wanted to be able to write in an editor that I was comfortable in and in a terse format like Markdown. I also wanted to be able to use git for revision management. After searching, I settled on Leanpub.

Leanpub is a free service for authoring that has Git integration in Markdown format. With it, I was able to write in my favorite text editor, (Emacs of course), commit and push my changes to my git repo, and then generate PDF and e-book formats. The multiple formats were important to me because it allowed me to share my chapters and get feedback.

Tools for Feedback

Since I was writing a book with my daughter in mind, the most important feedback was from her. After every chapter was done, I would either print her out a copy or download it to her Kindle for review. She actually really enjoyed reading it on her Kindle because it made it feel more real to her. My son also got interested in the story and before long, I had them both getting in heated debates about which direction the story should go.

After my kids reviewed the chapters, I also sought some professional writing advice from a freelance editor. I highly recommend getting this sort of feedback from an editor, writing group, or trusted friend to help you grow and improve. The one catch is that most of the writing world works with Microsoft Word, so I needed to convert my chapters to that format.

From my experience, all PDF to Word converters are full of fail. The formatting goes all over the place and your writing ends up looking like some horrible abstract text art experiment gone wrong. So far, the best converter I’ve found is pandoc. It allows you to take your Markdown files and turn them into quite presentable Word documents.

If you have a Mac, it’s as simple as brew install pandoc. Then, you can create a simple script to convert all your chapters (or a selection) into a properly formatted Word doc.

#!/bin/bash
rm ./all.md
for i in `cat ./Book.txt`; do  cat $i >> all.md; echo "  " >> all.md ; done
pandoc -o all.docx -f markdown -t docx ./all.md

Once you write your manuscript, (what the publishing world calls your book text), revise it, copy edit it, and walk backwards in a circle three times, you’re ready to publish.

Tools for Publishing

I don’t have any real firm advice in this area yet since I’m still in the midst of it, but I’ll share the two options that I’m looking at – traditional publishing and self publishing.

Self publishing is the more easily understood of the two. You can put your book up for sale at any time through Leanpub or Amazon. For better or worse, you have complete control of the content, display, marketing, and revenue of your book.

Traditional publishing involves finding a literary agent and/or publisher to work with. This route involves pitching your manuscript to someone to represent it through a query. The advantage of this is that, if you find a good match, you will have a team of people helping you make your book the best it can be, along with the possibility of getting it on the shelf in a bookstore. One of the downsides is that the traditional publishing world takes a lot longer than pushing the self-publish button.

With any luck, I’ll have a clearer picture of this all in a bit and be able to share my experiences. In the meantime, I encourage you to grab your keyboard and bring your book ideas to life.

No matter the outcome, it’s a rewarding journey.

One Fish Spec Fish

Published on:
Tags: All, Clojure

Clojure.spec is an exciting new core library for Clojure. It enables pragmatic specifications for functions and brings a new level of robustness to building software in Clojure, along with unexpected side benefits, one of which is the ability to write specifications that generate Dr. Seuss-inspired rhymes.

In this blog post, we’ll take a tour of writing specifications for a Clojure function, as well as the power of data generation. First, some inspirational words:

One fish
Two fish
Red fish
Blue fish

The mere shape of these words brings a function to mind. One that would take in a vector:

[1 2 "Red" "Blue"]

and give us back a string of transformed items with the word fish added, of course.

But, let us turn our attention to the parameters of this function and see how we can further specify them. Before we get started, make sure you use the latest version of Clojure, currently [org.clojure/clojure "1.9.0-alpha3"], along with test.check [org.clojure/test.check "0.9.0"], and add clojure.spec to your namespace.

(ns one-fish.core
  (:require [clojure.spec :as s]))

Specifying the values of the parameters

Back to the parameters. The first two are integers. That’s pretty easy, but we want to say more about them. For example, we don’t want them to be very big. Having a child’s poem with One Hundred Thousand and Thirty-Three fish really won’t do. In fact, what we really want to say is that there is a finite notion of fish-numbers, and it’s a map of integers to their string representations.

(def fish-numbers {0 "Zero"
                   1 "One"
                   2 "Two"})

Then, we can use s/def to register the spec we are going to define for global reuse. We’ll use a namespaced keyword ::fish-number to express that our specification for a valid number is the set of keys of the fish-numbers map.

(s/def ::fish-number (set (keys fish-numbers)))

Now that we have the specification, we can ask it if it’s valid for a given value.

(s/valid? ::fish-number 1) ;=> true
(s/valid? ::fish-number 5) ;=> false

So 5 is not a valid number for us. We can ask it to explain why not.

(s/explain ::fish-number 5)
;; val: 5 fails predicate: (set (keys fish-numbers))

Which, of course, totally makes sense because 5 is not in our fish-numbers map. Now that we’ve covered the numbers, let’s look at the colors. We’ll use a finite set of colors for our specification. In addition to the classic red and blue, we’ll also add the color dun.

(s/def ::color #{"Red" "Blue" "Dun"})

You may be asking yourself, “Is dun really a color?”. The author can assure you that it is in fact a real color, like a dun colored horse. Furthermore, the word has the very important characteristic of rhyming with number one, which the author spent way too much time trying to think of.

Specifying the sequences of the values

We’re at the point where we can start specifying things about the sequence of values in the parameter vector. We’ll have two numbers followed by two colors. Using s/cat, which is a concatenation of predicates/patterns, we can specify it as the ::first-line.

(s/def ::first-line (s/cat :n1 ::fish-number :n2 ::fish-number :c1 ::color :c2 ::color))

What the spec is doing here is associating each part with a tag, to identify what was matched or not, and its predicate/pattern. So, if we try to explain a failing spec, it will tell us where it went wrong.

(s/explain ::first-line  [1 2 "Red" "Black"])
;; In: [3] val: "Black" fails spec: :one-fish.core/color
;;   at: [:c2] predicate: #{"Blue" "Dun" "Red"}

That’s great, but there’s more we can express about the sequence of values. For example, the second number should be one bigger than the first number. The input to the function is going to be the map of the destructured tag keys from the ::first-line.

(defn one-bigger? [{:keys [n1 n2]}]
  (= n2 (inc n1)))

Also, the colors should not be the same value. We can add these additional specifications with s/and.

(s/def ::first-line (s/and (s/cat :n1 ::fish-number :n2 ::fish-number :c1 ::color :c2 ::color)
                           one-bigger?
                           #(not= (:c1 %) (:c2 %))))

We can test if our data is valid.

(s/valid? ::first-line [1 2 "Red" "Blue"]) ;=> true

If we want to get the destructured, conformed values, we can use s/conform. It will return the tags along with the values.

(s/conform ::first-line [1 2 "Red" "Blue"])
;=> {:n1 1, :n2 2, :c1 "Red", :c2 "Blue"}

Failing values for the specification can be easily identified.

(s/valid? ::first-line [2 1 "Red" "Blue"]) ;=> false
(s/explain ::first-line [2 1 "Red" "Blue"])
;; val: {:n1 2, :n2 1, :c1 "Red", :c2 "Blue"}
;;   fails predicate: one-bigger?

With our specifications for both the values and the sequences of values in hand, we can now use the power of data generation to actually create data.

Generating test data – and poetry with specification

The s/exercise function will generate data for your specifications. It does 10 items by default, but we can tell it to do only 5. Let’s see what it comes up with.

(s/exercise ::first-line 5)
;; ([(0 1 "Dun" "Red") {:n1 0, :n2 1, :c1 "Dun", :c2 "Red"}]
;;  [(0 1 "Blue" "Red") {:n1 0, :n2 1, :c1 "Blue", :c2 "Red"}]
;;  [(0 1 "Blue" "Dun") {:n1 0, :n2 1, :c1 "Blue", :c2 "Dun"}]
;;  [(1 2 "Blue" "Dun") {:n1 1, :n2 2, :c1 "Blue", :c2 "Dun"}]
;;  [(1 2 "Dun" "Red") {:n1 1, :n2 2, :c1 "Dun", :c2 "Red"}])

Hmmm… something’s not quite right. Looking at the first result [0 1 "Dun" "Red"], it would result in:

Zero fish
One fish
Dun fish
Red fish

Although it meets our criteria, it’s missing one essential ingredient – rhyming!

Let’s fix this by adding an extra predicate, fish-number-rhymes-with-color?.

(defn fish-number-rhymes-with-color? [{n :n2 c :c2}]
  (or
   (= [n c] [2 "Blue"])
   (= [n c] [1 "Dun"])))

We’ll add this to our definition of ::first-line, stating that the second number parameter should rhyme with the second color parameter.

(s/def ::first-line (s/and (s/cat :n1 ::fish-number :n2 ::fish-number :c1 ::color :c2 ::color)
                           one-bigger?
                           #(not= (:c1 %) (:c2 %))
                           fish-number-rhymes-with-color?))

(s/valid? ::first-line [1 2 "Red" "Blue"]) ;=> true
(s/explain ::first-line [1 2 "Red" "Dun"])
;; val: {:n1 1, :n2 2, :c1 "Red", :c2 "Dun"}
;;   fails predicate: fish-number-rhymes-with-color?

Now, let’s try the data generation again.

(s/exercise ::first-line)
;; ([(1 2 "Red" "Blue") {:n1 1, :n2 2, :c1 "Red", :c2 "Blue"}]
;;  [(1 2 "Red" "Blue") {:n1 1, :n2 2, :c1 "Red", :c2 "Blue"}]
;;  [(0 1 "Blue" "Dun") {:n1 0, :n2 1, :c1 "Blue", :c2 "Dun"}]
;;  [(1 2 "Dun" "Blue") {:n1 1, :n2 2, :c1 "Dun", :c2 "Blue"}]
;;  [(1 2 "Dun" "Blue") {:n1 1, :n2 2, :c1 "Dun", :c2 "Blue"}]
;;  [(0 1 "Blue" "Dun") {:n1 0, :n2 1, :c1 "Blue", :c2 "Dun"}]
;;  [(1 2 "Red" "Blue") {:n1 1, :n2 2, :c1 "Red", :c2 "Blue"}]
;;  [(0 1 "Red" "Dun") {:n1 0, :n2 1, :c1 "Red", :c2 "Dun"}]
;;  [(0 1 "Red" "Dun") {:n1 0, :n2 1, :c1 "Red", :c2 "Dun"}]
;;  [(0 1 "Blue" "Dun") {:n1 0, :n2 1, :c1 "Blue", :c2 "Dun"}])

Much better. To finish things off, let’s create a function that builds a string for our mini-poem from our data. While we’re at it, we can use our spec with s/fdef to validate that the parameters are indeed in the form of ::first-line.

Using spec with functions

Here’s our function fish-line that takes in our values as parameters.

(defn fish-line [n1 n2 c1 c2]
  (clojure.string/join " "
    (map #(str % " fish.")
      [(get fish-numbers n1)
       (get fish-numbers n2)
       c1
       c2])))

We can specify that the args for this function be validated with ::first-line and the return value is a string.

(s/fdef fish-line
        :args ::first-line
        :ret  string?)

Now, we turn on the instrumentation of the validation for functions and see what happens.

(s/instrument #'fish-line)

(fish-line 1 2 "Red" "Blue")
;-> "One fish. Two fish. Red fish. Blue fish."

But what about with bad data?

(fish-line 2 1 "Red" "Blue")

 ;; Call to #'one-fish.core/fish-line did not conform to spec: val:
 ;;   {:n1 2, :n2 1, :c1 "Red", :c2 "Blue"} fails at: [:args] predicate:
 ;;   one-bigger? :clojure.spec/args (2 1 "Red" "Blue")

 ;;   {:clojure.spec/problems
 ;;    {[:args]
 ;;     {:pred one-bigger?,
 ;;      :val {:n1 2, :n2 1, :c1 "Red", :c2 "Blue"},
 ;;      :via [],
 ;;      :in []}},
 ;;    :clojure.spec/args (2 1 "Red" "Blue")}

Ah, yes – the first number must be one smaller than the second number.

Wrap up

I hope you’ve enjoyed this brief tour of clojure.spec. If you’re interested in learning more, you should check out the spec guide. It really is an exciting new addition to Clojure.

In the meantime, I’ll leave you with one of our generated lines, sure to be a big hit with future generations.

Zero fish
One fish
Red fish
Dun fish

Kolmogorov-Uspensky Machine

Published on:
Tags: All, Clojure

It happened again. I was sitting down reading a paper and I came across the phrase Kolmogorov-Uspensky machine and I had no idea what it was. My initial reaction was just to move on. It probably wasn’t important, I told myself, just a detail that I could skim over. I took a sip of my tea and continued on. In the next paragraph, it appeared again. It was just sticking up like a thread waiting to be pulled. Still, I resisted. After all, I wasn’t even near my computer. I would have to get up and walk into the other room. After considering it for a moment, inertia won out and I continued my reading. There it was once more. This time right in the same paragraph, silently mocking me. I knew I had to do something, so I strode to my computer and pulled the thread.

What is a Kolmogorov-Uspensky machine?

The first thing I found is that the Kolmogorov-Uspensky machine (also referred to as KUM) is very similar to the Turing machine. In fact, it shares the same computational class: it is Turing-complete.

The Turing machine operates on a tape divided into cells. The head can move along the tape and read and write to it. The tape is the storage for the machine. Its movements are controlled by a collection of instructions which will be executed if certain prerequisites are met. The difference between the Turing machine and a Kolmogorov-Uspensky machine is that the KUM has a tape that can change topology. It’s a graph.

The graph of a KUM is not just any graph. It’s a particular kind of graph: it must have the property that if you start at one vertex, all the other vertices are uniquely addressable. The graph also has an active node which, in turn, has an active neighborhood of other nodes associated with it. This is not unlike the Turing machine’s head that points to the current cell. To figure out what to do, the KUM looks at the graph to decide which instruction to execute. The instructions can consist of adding a node or edge to the active neighborhood, removing a node or edge from the active neighborhood, or halting.

After spending some time reading and researching, I felt like I had some idea of what a Kolmogorov-Uspensky machine was, but it was still a bit fuzzy. I wanted to really dig in and experience it by trying to implement one. I found an esoteric programming language called Eodermdrome that fit the bill and set to work building it out in Clojure.

Eodermdrome

An Eodermdrome program creates graphs from a string of letters, where each pair of adjacent letters forms an edge. For example, the graph of abcdae would produce the edges a-b, b-c, c-d, d-a, and a-e.

The program itself consists of a series of commands or rules. A command will be executed if the following prerequisites are met:

  • The match graph in the command is a subgraph of the system graph.
  • If an input set is part of the command, the next character read from the system input must match it.

A command is of the form:

  • match-graph graph-replacement
    • This will execute if the match-graph is a subgraph and then transform the match to the replacement.
    • Example: a abc
  • (input-set) match-graph graph-replacement.
    • This will execute if the match is a subgraph and if the next character of the system input matches. On executing, it will read one char from the input and then transform the match graph with the replacement.
    • Example: (1) a abc
  • match-graph (output) graph-replacement.
    • This will execute if the match-graph is a subgraph. On executing, it will print the output to the system and transform the match with the replacement.
    • Example: a (1) abc
  • (input-set) match-graph (output) graph-replacement.
    • This will execute if the match is a subgraph and if the next character of the system input matches. On executing, it will read one char from the input, print the output to the system, and then transform the match graph with the replacement.
    • Example: (0) a (1) abc

Comments are also allowed as text in between commas. In this implementation, they must be contained on a single line. Example: ,this is a comment,(1) a abc

The initial state of the graph for a program is denoted by the graph string thequickbrownfoxjumpsoverthelazydog.
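
To make this concrete, here is one possible way to model these pieces in Clojure. This is a sketch of my own (graph-str->graph and the command map shape are names I made up); the actual eodermdrome repo may represent things differently.

;; turn a graph string into nodes plus edges
;; (each adjacent pair of letters becomes an edge)
(defn graph-str->graph [s]
  {:nodes (set s)
   :edges (set (map set (partition 2 1 s)))})

(graph-str->graph "abcdae")
;; => the nodes #{\a \b \c \d \e} with the edges a-b, b-c, c-d, d-a, a-e

;; a command like "(1) a ab" could then be plain data as well
{:input #{\1}
 :match (graph-str->graph "a")
 :replacement (graph-str->graph "ab")}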

We now have all we need to walk through an example program in the Kolmogorov-Uspensky machine.

An add program

Let’s take a program that adds two strings of ones together, separated by zeros.

,takes input of ones separated by zeros and adds the ones, thequickbrownfoxjumpsoverthelazydog a
(1) a ab
(0) a a
ab (1) a

Given a system input of “101”, it will print out “11”. Let’s walk through what happens in the program.

Step 1 – The program starts with our graph in the initial state of our beloved thequickbrownfoxjumpsoverthelazydog configuration.

Step 2 – The first instruction matches ,takes input of ones separated by zeros and adds the ones, thequickbrownfoxjumpsoverthelazydog a with the active subgraph being the whole graph. It is replaced by the single graph node a.

Step 3 – The next instruction, (1) a ab, matches a subgraph, takes a 1 off the input, and transforms the graph to ab.

Step 4 – The instruction (0) a a also matches (since a is a subgraph of ab); it takes a zero off the input and transforms the a back to a, so the graph is still ab.

Step 5 – The instruction ab (1) a now matches: a one prints out and the ab graph changes to a.

Step 6 – Now the (1) a ab instruction matches again; it takes another 1 off the input (our last one) and transforms the graph to ab.

Step 7 – Finally, ab (1) a matches; it prints out a 1 and rewrites the graph back to a.

There are no more instructions whose subgraphs match without requiring input, so the program ends.

Parting thoughts and threads

The Kolmogorov-Uspensky machine is quite interesting. Not only is the idea of graph storage and rewriting appealing, it is also pretty powerful compared to Turing machines. In fact, Dima Grigoriev proved that Turing machines cannot simulate Kolmogorov machines in real time.

It’s been quite a fun and enriching jaunt. The next time you see an unfamiliar term or concept, I encourage you to pull the thread. You never know where it will take you. It’s a great way to enlarge your world.

If you are interested in the code and hacking for yourself, here is the repo https://github.com/gigasquid/eodermdrome.

Other good resources on KUM:

Fairy Tale Word Vectors

Published on:
Tags: All, Clojure

This post continues our exploration from the last blog post Why Hyperdimensional Socks Never Match. We are still working our way through Kanerva’s paper. This time, with the basics of hypervectors under our belts, we’re ready to explore how words can be expressed as context vectors. Once in a high dimensional form, you can compare two words to see how similar they are and even perform reasoning.

To kick off our word vector adventure, we need some words. Preferring whimsy over the Google news, our text will be taken from ten freely available fairy tale books on http://www.gutenberg.org/.

Gather ye nouns

Our goal is to assemble a frequency matrix with all the different nouns as the rows; the columns record whether or not a word appears in a given document. Our matrix will be binary, with just 1s and 0s. A document will be a sentence or fragment of words. A small visualization is below.

Noun    Doc1  Doc2
flower  1     0
king    0     1
fairy   1     0
gold    1     1

The size of the matrix will be big enough to support hypervector behavior, but not so big as to make computation too annoyingly slow. It will be nouns x 10,000.

The first task is to get a set of nouns to fill out the rows. Although there are numerous online sources for linguistic nouns, they unfortunately do not cover the same language spectrum as old fairy tale books, so we are going to collect our own. Using Stanford CoreNLP, we can collect a set of nouns using Grimm’s book as a guide. There are about 2500 nouns there to give us a nice sample to play with. This makes our total matrix size ~2500 x 10,000.

Now that we have our nouns, let’s get down to business. We want to create a noun-to-row index (noun-idx) and then create a sparse matrix for our word frequency matrix.

(ns hyperdimensional-playground.context-vectors
  (:require [clojure.core.matrix :as m]
            [clojure.core.matrix.linear :as ml]
            [clojure.string :as string]
            [hyperdimensional-playground.core :refer [rand-hv cosine-sim mean-add inverse xor-mul]]
            [hyperdimensional-playground.fairytale-nouns :refer [fairy-tales-nouns book-list]]))

(m/set-current-implementation :vectorz)
;; size of the hypervectors and freq matrix columns
(def sz 10000)
;; The nouns come from a sampling of Grimm's fairy tale nouns these will
;; make up the rows in the frequency matrix
(def noun-idx (zipmap fairy-tales-nouns (range)))
(def freq-matrix (m/new-sparse-array [(count fairy-tales-nouns) sz]))

The next thing we need to do is to have some functions to take a book, read it in, split it into documents and then update the frequency matrix.

Random indexing for the win

The interesting thing about the update method is that we can use random indexing. We don’t need to worry about having a column for each document. Because of the nature of hyperdimensions, we can randomly assign 10 columns for each document.

(defn update-doc!
  "Given a document - update the frequency matrix using random indexing"
  [doc]
  (let [known-nouns (clojure.set/intersection fairy-tales-nouns (set doc))]
    ; use positive random indexing
    (doall (repeatedly 10 #(doseq [noun known-nouns]
                       (m/mset! freq-matrix (get noun-idx noun) (rand-int sz) 1))))))

The whole book is processed by slurping in the contents and using a regex to split it up into docs to update the matrix.

(defn process-book
  "Load a book and break it into sentence like documents and update the frequency matrix"
  [book-str]
  (let [book-text (slurp book-str)
        docs (partition 25 (map string/lower-case
                                (string/split book-text #"\s|\.|\,|\;|\!|\?")))
        doc-count (count docs)]
    (println "processing:" book-str "(with" doc-count "docs)")
    (doall (map-indexed (fn [idx doc]
                          (when (zero? (mod idx 1000)) (println "doc:" idx))
                          (update-doc! doc))
                        docs))
    (println "DONE with " book-str)))

We can now run the whole processing with:

(doseq [book book-list]
    (process-book book))

On my system, it only takes about 3 seconds.

Great! Now we have hypervectors associated with word frequencies. They are now context word vectors. What can we do with them?

How close is a king to a goat?

One of the things that we can do with them is find out how closely related the contexts of two words are by measuring their cosine similarity. First, we need a handy function to turn a string word into a word vector by getting it out of our frequency matrix.

(defn wv
  "Get a hypervector for the word from the frequency matrix"
  [word]
  (let [i (get noun-idx word)]
    (assert (not (nil? i)) (str word " not found"))
    (m/slice freq-matrix i)))

Then we can make another nice function to compare two words and give an informational map back.

(defn compare-wvs
  "Compare two words and give the cosine distance info map"
  [word1 word2]
  (let [wv1 (wv word1)
        wv2 (wv word2)]
    (when (not= word1 word2)
      {:word1 word1
       :word2 word2
       :cosine (cosine-sim wv1 wv2)})))

Let’s take a look at the similarities of some words to king.

(sort-by :cosine [(compare-wvs "king" "queen")
                 (compare-wvs "king" "prince")
                 (compare-wvs "king" "princess")
                 (compare-wvs "king" "guard")
                 (compare-wvs "king" "goat")])
  ;; ({:word1 "king", :word2 "goat", :cosine 0.1509151478896664}
  ;;  {:word1 "king", :word2 "guard", :cosine 0.16098893367403827}
  ;;  {:word1 "king", :word2 "queen", :cosine 0.49470535530616655}
  ;;  {:word1 "king", :word2 "prince", :cosine 0.5832521795716931}
  ;;  {:word1 "king", :word2 "princess", :cosine 0.5836922474743367})

As expected, the royal family is closer to the king than a guard or a goat is.

One of the interesting things is that now we can do addition and subtraction with these word vectors and see how it affects the relation with other words.

Boy + Gold = King, Boy + Giant = Jack

We can take a look at how close boy and king are together by themselves.

(cosine-sim (wv "boy") (wv "king")) ;=> 0.42996397142253145

Now we can add some gold to the boy and that new word vector will be closer to king than boy was alone.

(cosine-sim (mean-add (wv "boy") (wv "gold"))
              (wv "king")) ;=> 0.5876251031366048

Doing the same for boy and jack, we find that adding a giant moves the context closer.

(cosine-sim (wv "boy") (wv "jack")) ;=> 0.33102858702785953
;; boy + giant = jack
(cosine-sim (mean-add (wv "giant") (wv "boy"))
              (wv "jack")) ;=>0.4491473187787431

Amusingly, a frog and a princess make a prince.

;;; frog + princess = prince
  (cosine-sim (wv-add "frog" "princess") (wv "prince")) ;=> 0.5231641991974249

We can take this even further by subtracting words and adding others. For example, a similarity to the word queen can be obtained by subtracting man from king and adding woman.

;;; queen= (king-man) + woman
  (cosine-sim (wv "queen")
              (mean-add (wv "woman") (wv-subtract "king" "man"))) ;=>0.5659832204544486

Similarly, a contextual closeness to father can be obtained by subtracting woman from mother and adding man.

(cosine-sim (wv "father")
              (mean-add (wv "man") (wv-subtract "mother" "woman"))) ;=>0.5959841177719538

But wait, that’s not all. We can also express facts with these word vectors and reason about them.

Reasoning with word vectors – the database as a hyperdimensional value

The curious nature of hypervectors allows the storage of multiple entity-attribute pairs in a single vector and allows the retrieval of their likenesses later with simple linear math – using only XOR multiplication and addition. This gives us the database as a value, in the form of a high dimensional vector.

For example, say we want to express the fact that Hansel is a brother of Gretel. We can do this by adding the XOR product of brother with hansel and the XOR product of brother with gretel.

;; hansel is the brother of gretel
;; B*H + B*G
(def hansel-brother-of-gretel
   (mean-add
    (xor-mul (wv "brother") (wv "hansel"))
    (xor-mul (wv "brother") (wv "gretel"))))

Also we can express that Jack is a brother of Hansel.

(def jack-brother-of-hansel
  (mean-add
     (xor-mul (wv "brother") (wv "jack"))
     (xor-mul (wv "brother") (wv "hansel"))))

We can add these two facts together to make a new hypervector value.

(def facts (mean-add hansel-brother-of-gretel
                       jack-brother-of-hansel))

Now we can actually reason about them and ask questions. Is Jack a brother of Hansel? With a high cosine similarity, we can assume the answer is likely.

 ;; is jack the brother of hansel?
  (cosine-sim
   (wv "jack")
   (xor-mul (mean-add (wv "brother") (wv "gretel"))
            facts)) ;=>0.8095270629815969

What about someone unrelated? Is Cinderella the brother of Gretel? – No

 ;; is cinderella the brother of gretel ?
  (cosine-sim
   (wv "cinderella")
   (xor-mul (mean-add (wv "brother") (wv "gretel"))
            facts)) ;=>0.1451799916656951

Is Jack the brother of Gretel? – Yes

 ;; is jack the brother of gretel ?
  (cosine-sim
   (wv "jack")
   (xor-mul (mean-add (wv "brother") (wv "gretel"))
            facts)) ;=> 0.8095270629815969

We can take this further by adding more facts and inventing a relation of our own.

Siblings in Hyperspace

Let’s invent a new word vector that is not in our nouns – siblings. We are going to create a new random hypervector to represent it.

(def siblings (rand-hv))

We will define it in terms of word vectors that we already have. That is, siblings will be the sum of brother and sister, each XOR multiplied by the siblings hypervector to associate them with it.

(def siblings-brother-sister
    (mean-add (xor-mul siblings (wv "brother")) (xor-mul siblings (wv "sister"))))

Now we can add some more facts. Gretel is a sister of Hansel.

 ;; gretel is the sister of hansel
  ;; S*G + S*H
  (def gretel-sister-of-hansel
    (mean-add
     (xor-mul (wv "sister") (wv "gretel"))
     (xor-mul (wv "sister") (wv "hansel"))))

Gretel is also a sister of Jack.

  ;; gretel is the sister of jack
  ; S*G + S*H
  (def gretel-sister-of-jack
    (mean-add
     (xor-mul (wv "sister") (wv "gretel"))
     (xor-mul (wv "sister") (wv "jack"))))

Collecting all of our facts into one hypervector (as a database).

 (def facts (mean-add hansel-brother-of-gretel
                       jack-brother-of-hansel
                       gretel-sister-of-jack
                       gretel-sister-of-hansel
                       siblings-brother-sister))

Now we can ask some questions.

Are Hansel and Gretel siblings? – Yes

;; are hansel and gretel siblings?
  (cosine-sim
   (mean-add (wv "hansel") (wv "gretel"))
   (xor-mul siblings facts)) ;=>0.627015379034067

Are John and Roland siblings? – No

;; are john and roland siblings?
  (cosine-sim
   (mean-add (wv "roland") (wv "john"))
   (xor-mul siblings facts)) ;=> 0.1984017637065277

Are Jack and Hansel siblings? – Yes

  (cosine-sim
    (mean-add (wv "jack") (wv "hansel"))
    (xor-mul siblings facts)) ;=>0.48003572523507465

It is interesting to think that nothing is stopping us at this point from retracting facts, by simply subtracting the fact-encoded word vectors from our “database” value and making a new value from it.

Conclusions

In this fun but casual exploration of word vectors, we have seen the potential for reasoning about language in a way that uses nothing more complicated than addition and multiplication. The ability to store dense information in hypervectors, extract it with simple methods, and flexibly collect it randomly shows their versatility and power. Hyperdimensional vectors might hold the key to unlocking a deeper understanding of cognitive computing or perhaps even true artificial intelligence.

It is interesting to note that this technique is not limited to words. Other applications can be done the same way. For example, video recommendation using hypervectors of movie titles, or perhaps even anomaly detection using sensor readings over a regular weekly time period.

Looking over our journey with word vectors: at the beginning, it seemed that word vectors were magical. Now, after an understanding of the basics, it still seems like magic.

If you are interested in exploring further, feel free to use my github hyperdimensional-playground as a starting point.

Why Hyperdimensional Socks Never Match

Published on:
Tags: All, Clojure

The nature of computing in hyperdimensions is a strange and wonderful place. I have only started to scratch the surface by reading a paper by Kanerva. Not only is it interesting from a computer science standpoint, it’s also interesting from a cognitive science point of view. In fact, it could hold the key to better models of AI and general reasoning. This blog post is a casual stroll through some of the main points of Kanerva’s paper, along with examples in Clojure to make it tangible. First things first, what is a hyperdimension?

What is a Hyperdimension and Where Are My Socks?

When we are talking about hyperdimensions, we are really talking about lots of dimensions. A vector has dimensions. A regular vector could have three dimensions [0 1 1], but a hyperdimensional vector has tons more, like 10,000 or 100,000. We call these big vectors hypervectors for short, which makes them sound really cool. Although the vectors could be made up of anything, we are going to use vectors made up of zeros and ones. To handle big computing with big vectors in a reasonable amount of time, we are also going to use sparse vectors. What makes them sparse is that most of the space is empty (zeros). In fact, Clojure has a nice library to handle these sparse vectors: the core.matrix project from Mike Anderson is what we will use in our examples. Let’s go ahead and make some random hypervectors.

First we import the core.matrix libraries and set the implementation to vectorz which provides fast double precision vector math.

(ns hyperdimensional-playground.core
  (:require [clojure.core.matrix :as m]
            [clojure.core.matrix.linear :as ml]))

(m/set-current-implementation :vectorz)

Next we set the sz of our hypervectors to be 100,000. We also create a function to generate a random sparse hypervector by choosing to put ones in about 10% of the space.

(def sz 100000)

(defn rand-hv []
  (let [hv (m/new-sparse-array [sz])
        n (* 0.1 sz)]
    (dotimes [i n]
      (m/mset! hv (rand-int sz) 1))
    hv))

Now we can generate some.

(def a (rand-hv))
(def b (rand-hv))
(def c (rand-hv))
(def d (rand-hv))

a ;=> #vectorz/vector Large vector with shape: [100000]

You can think of each of these hypervectors as a random hyperdimensional sock, or hypersock, because that sounds cooler. These hypersocks have curious properties, one of which is that they will ~never match.

Hypersocks never match

Because we are dealing with a huge number of dimensions, a mathematically peculiar probability distribution occurs. We can take a random hypervector to represent something, then take another one, and they will differ from each other by about 100 standard deviations. We can take another one and it, too, will be about 100 standard deviations from the other ones. For practical purposes, we will run out of time before we run out of vectors that are unrelated. Because of this, any two hypersocks will never match each other.

How can we tell how similar two hypersocks are? We use the cosine to measure the similarity between two vectors, which is determined by the dot product. We can construct a cosine similarity function to give us a value from -1 to 1 to measure how alike they are, with 1 being the same and -1 being the complete opposite.

(defn cosine-sim [v1 v2]
  (/ (m/dot v1 v2)
     (* (ml/norm v1) (ml/norm v2))))

If we look at the similarity of a hypervector with itself, the result is ~1. With the other random hypervectors, it is ~0.

(cosine-sim a a) ;=>  1.0
(cosine-sim d d) ;=> 1.0

(cosine-sim a b) ;=>  0.0859992468320239
(cosine-sim b c) ;=> 0.09329186588790261
(cosine-sim a c) ;=> 0.08782018973001954

There are other cool things we can do with hypervectors, like doing math with them.

The Hamming Distance of Two Hypersocks

We can add hypervectors together with a mean sum vector. We add the vectors of 1s and 0s together, then we divide the resulting vector by the total number of vectors. Finally, to get back to our 1s and 0s, we round the result.

(defn mean-add [& hvs]
  (m/emap #(Math/round (double %))
   (m/div (apply m/add hvs) (count hvs))))

The interesting thing about addition is that the result is similar to all the vectors in it. For example, if we add a and b together to make x, x = a + b, then x will be similar to a and similar to b.

;; x = a + b
(def x (mean-add a b))
(cosine-sim x a) ;=> 0.7234734658023224
(cosine-sim x b) ;=> 0.7252586504505658

You can also do a very simple form of multiplication on vectors of 1s and 0s using XOR. We can do this by adding the two vectors together and then mapping mod 2 over each of the elements.

(defn xor-mul [v1 v2]
  (->> (m/add v1 v2)
      (m/emap #(mod % 2))))

We can actually use this xor-mul to calculate the Hamming distance, which is an important measure in error detection. The Hamming distance is simply the sum of all of the XOR-multiplied elements.

(defn hamming-dist [v1 v2]
  (m/esum (xor-mul v1 v2)))

(hamming-dist [1 0 1] [1 0 1]) ;=> 0
(hamming-dist [1 0 1 1 1 0 1] [1 0 0 1 0 0 1]) ;=> 2
(hamming-dist a a) ;=> 0

This illustrates the point that XOR multiplication randomizes the hypervector but preserves the distance. In the following example, we XOR multiply two random hypervectors by another, and the Hamming distance stays the same.

; xa = x * a
; ya = y * a
; hamming distance of xa is the same as ya


;; multiplication randomizes but preserves the distance
(def x (rand-hv))
(def y (rand-hv))
(def xa (xor-mul x a))
(def ya (xor-mul y a))
(hamming-dist xa ya) ;=> 1740.0
(hamming-dist x y) ;=> 1740.0

So you can xor multiply your two hypersocks and move them to a different point in hyperspace, but they will still be the same distance apart.

Another great party trick in hyperspace is the ability to bind and unbind hypervectors for use as map-like pairs.

Using Hypervectors to Represent Maps

A map of pairs is a very important data structure. It gives the ability to bind symbols to values and then retrieve those values. We can do this with hypervectors too. Consider the following structure:

{:name "Gigasquid"
 :cute-animal "duck"
 :favorite-sock "red plaid"}

We can now create hypervectors to represent each of these values. Then we can XOR each hypervector symbol with its hypervector value and sum them up.

;; data records with bound pairs
(def x (rand-hv)) ;; favorite-sock
(def y (rand-hv)) ;; cute-animal
(def z (rand-hv)) ;; name
(def a (rand-hv)) ;; red-plaid
(def b (rand-hv)) ;; duck
(def c (rand-hv)) ;; gigasquid

;H = X * A + Y * B + Z * C
(def h (mean-add (xor-mul x a) (xor-mul y b) (xor-mul z c)))

Now, we have a sum of all these things and we want to find the value of the favorite sock. We can unbind it from the sum by XOR multiplying by the favorite-sock hypervector x, because XOR multiplication both distributes and cancels itself out.

(hamming-dist (xor-mul x (xor-mul x a)) a) ;=> 0

We can compare the result with the known values and find the closest match.

(hamming-dist a (xor-mul x h)) ;=> 1462.0  ;; closest to "red-plaid"
(hamming-dist b (xor-mul x h)) ;=> 1721.0
(hamming-dist c (xor-mul x h)) ;=> 1736.0

(cosine-sim a (xor-mul x h)) ;=> 0.3195059768353112 ;; closest to "red-plaid"
(cosine-sim b (xor-mul x h)) ;=> 0.1989075567830733
(cosine-sim c (xor-mul x h)) ;=> 0.18705233578983288

Conclusion

We have seen that the nature of higher dimensional representation leads to some very interesting properties with both representing data and computing with it. These properties and others form the foundation of exciting advancements in Cognitive Computing like word vectors. Future posts will delve further into these interesting areas. In the meantime, I encourage you to read Kanerva’s paper on your own and to take comfort in the fact that when you can’t find one of your socks, it’s not your fault. It most likely has something to do with the curious nature of hyperspace.

Thanks to Ross Gayler for bringing the paper to my attention and to Joe Smith for the great conversations on SDM.

Book of Software Miracles

Published on:
Tags: All

"Book of Software Miracles - Cover"

This 21st century illustrated manuscript was recently uncovered. It depicts miraculous phenomena in software engineering and represents one of the most spectacular new discoveries in the field of Renaissance Computer Science.

Some examples of the glorious illustrations and text translations are presented here.

"Book of Software Miracles Fol. 26"

On the day in 1958 that John McCarthy invented LISP, three suns appeared in the East in the morning sky which moved towards each other so that they merged into one.

"Book of Software Miracles Fol. 72"

In the year 1952, countless software bugs appeared in the land after the term was coined by Grace Hopper during the invention of the COBOL language.

"Book of Software Miracles Fol. 83"

In the year of 2007, a great and wonderful comet appeared in the sky. This was followed by the invention of the Clojure language by Rich Hickey.

"Book of Software Miracles Fol. 91"

A month after the NPM Node repository burst its banks in the year of 2015, a wondrous creature appeared from the JavaScript ecosystem. Found dead after the raging subsided, it was in this shape and form and is painted here.

For more information about this remarkable book, visit the Book of Software Miracles.

Can a Programming Language Make You Smarter?

Published on:
Tags: All

All programming languages are not created equal. Some clearly excel at solving different problems. There are languages that are great for scalability and others that are great for proving correctness. But what about one that will make you smarter? To look for the answer, I am going to turn to a different kind of language, our human language, and to research from the fields of linguistics and cognitive development.

A new language is born

Up until the late 1970s, there was no school for the deaf in Nicaragua. Being deaf before then meant only developing crude signs with your family and friends. Beyond that, there was nothing. Finally, the wife of the dictator set up a special school for them. This was the first time that the children were brought together in one place rather than being scattered about. For most of them, it was the first time that they had ever met another deaf person. There were about fifty kids in that first class, and they came together with fifty different ways of communicating with crude gestures. But after a while, something amazing happened. They started to converge on a common system: a language was born. For scientists, this was an incredible opportunity to watch a new language develop. One of the fascinating insights had to do with how this sign language evolved over the generations of children entering the school. One study in particular looked at how words in a language can actually increase cognitive capacity and make people smarter.

The study involved a test that was performed on different generations of students from the school, including the very first students, who were now grown, and younger children. A comic strip was shown to the participants. In it there are two brothers. The big brother is playing with a train and the little brother wants to play with it. The big brother puts it under the bed and goes out to the kitchen to get a sandwich. While the big brother is gone, the little brother takes out the train and hides it in the toy box. The question for the test subjects is, “When the big brother comes back, where is he going to look for the train?”

Thinking about thinking

For most kids over the age of five, the answer will be that the big brother will look under the bed because he doesn’t know that the train has been moved to the toy box. The interesting thing was that the first generation of kids who went through the school, now thirty-five years old, failed this simple test. The younger ones all passed. What was going on?

To find out, the scientists looked at differences in the language across the generations and found that the older signers lacked words for the concept of thinking. The earlier generation had just one word for think, while the later generations had evolved ten or twelve, words that expressed concepts like understand and believe. It seems that having the words gave them access to a concept that otherwise would have been really hard to think about. It increased their cognitive capacity. It enabled them to think about thinking.

Thinking about programming programs

Programming languages allow us to translate our human thoughts and instructions to machines. We have words to express simple concepts, like adding two numbers together with + or finding how many elements are in a list with count, but what about harder concepts that are more difficult to access? Is there an analog of having words to think about thinking, something that some programming languages have and others don’t? I believe there is. It’s having a way to think about a program programming itself, with self-modifying code or macros.

In Clojure, macros are a way to do meta-programming. This involves the ability to treat the program as data, allowing a program to modify another program, even its own. It gives Clojure the capability to implement language features, make code more concise, and encapsulate patterns and repetitive code. It is a powerful feature and, more to the point of our current line of inquiry, a pretty friggin difficult concept to grok. In fact, before I was exposed to a language that had macros in it, I doubt that I would have been able to think about it. For the sake of argument, let’s go back to the comic strip story with the two brothers. But this time, the train is a function and the little brother uses a macro on it while the big brother gets a snack in the kitchen. What does the big brother expect the result of the program to be when he gets back in the room? Prior to my exposure to Clojure and macros, I am sure I would have failed the test.
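
To make the idea of code-as-data a little more concrete, here is a minimal macro sketch; it is my own toy example rather than anything from the story above. The unless macro receives its arguments as unevaluated data and rewrites them into an if form before the program runs, which you can see with macroexpand-1.

;; a toy macro: it builds an if form out of its (unevaluated) arguments
(defmacro unless [test then else]
  (list 'if test else then))

(unless false :ran :skipped) ;=> :ran
(macroexpand-1 '(unless false :ran :skipped)) ;=> (if false :skipped :ran)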

Conclusion

So yes, I do think that a programming language can increase your cognitive capacity and make you smarter. I certainly feel that my exposure to Clojure has expanded my world and given me access to programming concepts that I did not have with my other programming experience. I also feel certain that there are other languages out there, both existing and yet to be invented, that have words to express concepts that I am not even aware of. Languages that are just waiting for me to expand my horizons.

And that certainty makes me incredibly happy and grateful to be a programmer.

Listen to the whole fascinating story of Ann Senghas studying the sign language born in Nicaragua on Radiolab.

The Key to Moving Fast Is Feedback

Published on:
Tags: All

The last time I went to the dentist to get a cavity filled, I got a shot of Novocain as a local anesthetic to numb the area. After I emerged from the dental chair, I still had no feeling in my lip and mouth area. Trying to talk and smile was comical. Trying to drink tea resulted in dribbles down my face. Luckily, the loss of feeling only lasted a couple of hours. Soon, I was back to normal. Even in this small and common scenario, we can see the close link between motion and sensory perception. What would happen if you couldn’t feel your whole body?

This is exactly what happened in the tragic and amazing story of Ian Watermann. At age 19, he had what appeared to be the flu, but it was actually a rare disease of the nervous system that left him unable to feel any sensation of touch below the neck. Unable to process this sensory information, he was totally unable to walk or move his body. This was despite the fact that his motor system was left almost untouched by the disease. He had the capacity to move, but without the sensory feedback he was paralyzed.

Ian was told that he would spend the rest of his life in a wheelchair, but in an inspiring show of determination he recovered mobility. With the help of physiotherapists, he retrained his brain to substitute visual feedback for touch feedback. He could move his hands and legs as long as he could see them. Finally, he could walk again using his sight and concentration. If the visual feedback is taken away (lights turned off), the ability to move vanishes.

Feedback is the essential ingredient for human motion. But what about businesses, startups, and software projects? One of the challenges that all of these face is the ability to move fast. This ability to move is not unlike that of our own bodies. Movement and action are the result of a complex, interdependent system. In the case of businesses, that system is made of people, technology, and processes. Like the human body, a business is critically dependent on feedback. It can have the capacity for action, but without the ability to process feedback it is essentially immobilized.

If you want to move fast, the single most important thing you can do is to improve your feedback at every level.

If you want to add new technology, ask how it improves your feedback.

  • Feedback from the market
  • Feedback from your users
  • Feedback from your operational systems
  • Feedback from your code
  • Feedback from your employees
  • Feedback from yourself

The best feedback systems are distributed and multi-layered. A single market feedback signal to the CEO is good, but much better is one that gets shared among the many facets of a company. A distributed and interconnected network of feedback will help companies and projects be responsive, fast, nimble, and resilient.

The next time you hear, “How can we go faster?”, ask the question, “How can we increase feedback?” You’ll be on the right track to fly.