World Domination With Hexapods and Clojure

Published on:

Once you have your hexapod assembled and running using the handheld controller, of course, your thoughts naturally turn to world domination.

The most powerful tool in the world is the Clojure REPL

World domination requires the most powerful tools available. That, of course, calls for Clojure and the Clojure REPL. I recommend Emacs as the editor of choice for such an endeavor. However, if you are content with city, state, or single-country domination, other editors that support Clojure are also fine.

Connect the XBee to your computer

First, we need to get the computer to talk to the hexapod wirelessly. We can do this with a USB to Serial adapter that uses the paired XBee from the handheld commander.

Take the XBee from the handheld commander

and move it to the USB to serial adapter

Now plug the USB adapter into your computer.

Get your Clojure ready

In your Clojure project, the only magic you need is the serial-port library. Require the library and list your serial ports, then open the one that shows up for you.

(ns clj-hexapod.core
  (:require [serial-port :as serial]))

;; Use this command to see what port your serial port
;; is assigned to
(serial/list-ports)

;; replace the USB0 with whatever it shows
(def port (serial/open "/dev/ttyUSB0" 38400))

Since we are going to be talking to the hexapod, we need to send commands in the same format that it is expecting: basically, a packet containing the positions of the joysticks, as well as which buttons are pushed.

(defn checksum [v]
  (mod (- 255 (reduce + v)) 256))

(defn vec->bytes [v]
  (byte-array (map #(-> % (Integer.) (.byteValue) (byte)) v)))

(defn build-packet [r-vert r-horz l-vert l-horz buttons]
  [255 ;header
   r-vert
   r-horz
   l-vert
   l-horz
   buttons
   0
   (checksum [r-vert r-horz l-vert l-horz buttons])])

(defn send [packet]
  (serial/write port (vec->bytes packet)))
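As a sanity check, here is what a neutral packet looks like. I am assuming 128 as the joystick center value here; the real constant lives in the clj-hexapod project.

```clojure
;; (checksum and build-packet repeated from above, so this
;; snippet is runnable on its own)
(defn checksum [v]
  (mod (- 255 (reduce + v)) 256))

(defn build-packet [r-vert r-horz l-vert l-horz buttons]
  [255 r-vert r-horz l-vert l-horz buttons 0
   (checksum [r-vert r-horz l-vert l-horz buttons])])

;; Both sticks centered (assuming 128 is the center value),
;; no buttons pressed:
(build-packet 128 128 128 128 0)
;;=> [255 128 128 128 128 0 0 255]
```

The checksum works out to `(mod (- 255 512) 256)`, which is 255 for the all-centered packet.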

From here, we can simply make functions for the joystick controls to go up and down.

;;values between 129-254
(defn up
  "joystick up for speed between 1-100"
  [speed]
  (if (good-range? speed)
    (int (+ 129 (* 125 (/ speed 100.0))))
    CENTER))

;;values between 0 and 125
(defn down
  "joystick down for speed between 1-100"
  [speed]
  (if (good-range? speed)
    (int (- 125 (* 125 (/ speed 100.0))))
    CENTER))
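The snippets here lean on a few helpers the post doesn't show: CENTER, good-range?, and the gaits map used later. Here is a minimal sketch of what they might look like – the names come from the post, but these particular values are my assumptions, not necessarily what clj-hexapod uses.

```clojure
;; Assumed joystick center value (between the 0-125 "down"
;; range and the 129-254 "up" range).
(def CENTER 128)

;; Guard against speeds outside the 1-100 range.
(defn good-range? [speed]
  (and (number? speed) (<= 1 speed 100)))

;; Hypothetical mapping of gait keywords to the button codes
;; the commander firmware expects -- the real codes will differ.
(def gaits {:ripple-smooth 1
            :amble-smooth  2
            :tripod-normal 3
            :ripple        4
            :amble         5})
```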

Then we can do things like walk, turn, and change the gait.

(defn walk-forward
  "walk forward at speed between 1-100"
  [speed]
  (send (build-packet CENTER CENTER (up speed) CENTER 0)))

(defn walk-backwards
  "walk backwards at speed between 1-100"
  [speed]
  (send (build-packet CENTER CENTER (down speed) CENTER 0)))

(defn walk-right
  "walk right at speed between 1-100"
  [speed]
  (send (build-packet CENTER CENTER CENTER (up speed) 0)))

(defn walk-left
  "walk left at speed between 1-100"
  [speed]
  (send (build-packet CENTER CENTER CENTER (down speed) 0)))

(defn turn-right
  "turn right at speed between 1-100"
  [speed]
  (send (build-packet CENTER (up speed) CENTER CENTER 0)))

(defn turn-left
  "turn left at speed between 1-100"
  [speed]
  (send (build-packet CENTER (down speed) CENTER CENTER 0)))

(defn change-gait [gait-key]
  (let [gait-num (gait-key gaits)]
    (send (build-packet CENTER CENTER CENTER CENTER gait-num))))

(defn stop
  "stop hexapod"
  []
  (send (build-packet CENTER CENTER CENTER CENTER 0)))

You can control it from the REPL with some simple commands:

(walk-forward 20)
(walk-backwards 10)
(walk-right 10)
(walk-left 10)
(turn-right 10)
(turn-left 10)
(change-gait :ripple-smooth)
(change-gait :tripod-normal)
(change-gait :ripple)
(change-gait :amble)
(stop)

If you want to see the code, it is out on GitHub as clj-hexapod. Please keep in mind that it is early days still, and I am still just exploring.

Phoenix Code Firmware

It is worth noting that the above code was meant to run with the default hexapod firmware – the “Nuke” firmware. There is another firmware, the Phoenix code, that gives the hexapod more lifelike moves and allows it to twist and shift in rather creepy ways.

I just loaded it on the hexapod yesterday. The commander software changed too, so I will of course need to revisit the code to add in the new moves. But here is a sneak preview of what it can do:

That is my daughter singing in the background

That’s all for now

I hope I have given you pointers for getting started on your own world domination with Clojure and Hexapods. Remember to practice your laugh …. Muhahaha :)

Walking With Hexapods

Published on:
Tags: All, Robots

Here we see the PhantomX Hexapod thriving in the natural habitat of a cozy, climate controlled, modern house. But there was a time before the hexapod. In particular, there was a time of many hexapod parts and a high level software developer that somehow, despite her natural lack of mechanical skills, managed to bring it to life. This blog post endeavors to chronicle the high and low points of this journey. And perhaps, will make it easier for any other brave souls that would like to bring the Age of Hexapods into their homes.

Oh My! So Many Parts

I wasn’t really mentally prepared for the vast amounts of parts in the kit. Here is a sampling:

  • 18 AX-12A Servos
  • Top/Bottom Body Plate
  • 20 Brackets
  • Arbotix Board
  • 2 XBees
  • LiPo Battery & Charger
  • Arbotix Programmer
  • 19 Cables
  • 50,000 nuts and screws (Really only about 850 – but you get my point)

First Things First

The very first thing to do is to make sure that you have all the parts. Once I went through the checklist and double-counted all my screws, I was relieved to move on to the next tasks: programming the Arbotix, assigning ids to the servos, and centering them. These steps consisted of:

  • Getting the Arduino IDE going
  • Loading the Drivers and Libraries
  • Loading the ROS (Robot Operating System) on the Arbotix Board, so that it could be used to program the servos.

Each of the servos has to be assigned a number. This lets the program know which part of the leg is which, so that it will eventually – hopefully – be able to walk. Once the id is given, a sticker is placed on the servo for future reference. Centering the servos is a VERY important step not to overlook. If you do not center the servos, you will get into the unfortunate circumstance of having to disassemble the robot, cry, recenter the servos, and then reassemble the robot. Please avoid.

Putting It Together

The assembly starts with the feet and legs first. I was so pleased when I got the feet assembled, that I considered knitting little baby hexapod booties.

Next a servo and the tibia is added

Another servo, and the tibia and femur are assembled

Finally, another servo and the whole leg is assembled

Newbie Advice

I would like to pause for a minute to share some advice from my trial and errors in assembly thus far:

  • Don’t overtighten screws – More is not better. It makes things like plexiglass frames crack and break.
  • Seating nuts in servos is hard – This isn’t really advice. Just moral support in your struggle. There are 18 servos and up to 20 nuts to seat in each servo.

Assembling the body

The body is where the board, battery, and cables go.

At long last, the legs can be attached to the body – with 120 screws of course.

Round two of Newbie Advice

  • For those who have never stripped wires and attached them to power supplies (like me): make sure the wires are twisted so the edges don’t fray out and short everything, requiring you to re-assign all the servos that lost their ids – plus much unscrewing, crying, and reassembling.
  • When programming the Arbotix board, you must remove the XBee, or it will not work.
  • Also, did I mention not over-tightening screws? The order in which you tighten the screws matters too. Tighten them all loosely first, in order, so you don’t stress the fiberglass parts and have something like this happen.

It is Alive!

Finally, the moment of truth. The hexapod is assembled and it is time to upload a test check on the board to make sure that everything is working alright.

It lives - the first test check of the PhantomX Hexapod

Walking with Hexapods

The kit comes with a commander that you assemble of course. You can use it to control the hexapod with hand-held joysticks.

The moment of truth, when it finally took its very first steps, and the Age of Hexapods began.

First cautious steps of running the PhantomX Hexapod with wireless controller

Stay tuned for the next post on how to control the hexapod with your Clojure code, and on loading the Phoenix firmware that gives it life-like moves.

Remembering Jim

Published on:
Tags:

You don’t really understand how important someone is in your life until they are suddenly gone. I have had the honor and privilege of working, playing, and laughing alongside Jim Weirich for the last few years. He was an amazing man. I miss him dearly.

Think

Jim taught us how to think about computer programming. I once had a Physics professor tell me not to worry so much about the formulas and math. The most important thing was how to think. Everything after that would naturally fall into place. Jim embodied that philosophy for programming. The languages and algorithms poured almost effortlessly from his masterful fingers. He knew how to think about a problem, observing it from a multitude of angles. Finally, bringing his experience, creativity, and humility to bear on it, he would shape it into a beautiful piece of code.

Make

Jim showed us how to make. He was a master craftsman and a maker. The care and joy that infused his work was inspiring. He loved the process of Test Driven Development. Green tests were always a celebration. The surprise of beautiful code emerging from a refactoring was treated as a gift. He is best known for his Rake build tool, but his testing library rspec-given is one that reminds me most of him and the way that he loved to craft code.

Care

Jim showed us how to care. Jim cared deeply about each and every person. While flying his drone in the office hallway, he would wave down a passing building maintenance worker and ask if they wanted to fly it. Over the course of the next few minutes, Jim would put them completely at ease and chat happily with them. He was like that with everyone. In the few days after his passing, many building workers, and people from other offices whom I had only ever nodded at in passing, stopped by to give their sincere condolences for his loss. He is, without a doubt, the kindest person I have ever known. He took great joy in his faith and in his family. He would talk about his family all the time and how much they enjoyed each other’s company. He is, without a doubt, one of the personally richest men I have ever known.

Share

Jim taught us how to share. Jim wanted to share his knowledge. He was a great teacher and presenter. He gave engaging presentations that took people on a journey with him, not only imparting knowledge, but making friends in the process. He was a pillar of the local Cincinnati technical community. He is the reason why I and countless others were drawn to Ruby and the Ruby community.

Dream

Jim dreamed with us. He was a creative. He was also a singer, songwriter, musician, and artist. He brought that creative spirit, curiosity, and love of learning to the technical world. I will cherish our lunches spent together flying our AR Drones, sometimes crashing them into walls and each other, while trying to find creative ways of controlling them with code. He had just lately been exploring micro-quadcopters like the Proto-X. We had plans to make all our Spheros, Roombas, big drones, and little drones dance to live coded music. We were both auditing Autonomous Mobile Robots to see what we could learn to help us with our robot dreams.

I miss him dearly. I will cherish my memories of him and I am so grateful for all the ways he has enriched my life. I will remember that when I dream in code, he is still there with me.

Until that day when we will fly our friendly robots together again.

Hitchhiker’s Guide to Clojure - Part 3

Published on:

Amy and Frank fled down the stairs from her office and met an unexpected obstacle to their exit, a locked door. As they peered out the window, they saw yesterday’s Amy pull up in the parking space, get out, retrieve her laptop, and start to head in the front door.

“Oh good, we can take your car”, said Frank.

Amy took a second to recover from the shock of seeing what her hair really looked like from behind and then asked, “But, how can we get to it? The door is locked, and we can’t go back up to the office… I would meet myself.”

Frank smiled, pulled out the Hitchhiker’s Guide to Clojure and pulled up a page with the heading Locked Doors and Other Small Bothers.

One of the reasons for the surprising success of The Hitchhiker’s Guide to Clojure is its helpful advice on an assortment of practical matters.

Locked doors are a common nuisance in modern times. Fortunately, Clojure provides a very handy function for such occasions, fnil. This commonly overlooked function takes an existing function and returns a new function that lets you specify a default for a nil argument. For example, take this locked door:

(defn locked-door [key]
        (if key "open" "nope - staying shut"))

(locked-door :key) ;=> "open"
(locked-door nil) ;=> "nope - staying shut"

In this case, the simple application of fnil will help remove this pesky obstacle.

(def this-door (fnil locked-door :another-key-that-works))

(this-door :key) ;=> "open"
(this-door nil) ;=> "open"

Please be advised that some doors are locked for a good reason. It is left to the user’s discretion, but in Norway’s moose regions it is highly recommended to think twice.
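One more fnil detail worth knowing (my addition, not the Guide’s): fnil can supply defaults for up to three leading arguments, which is handy when a door has several nil-able locks.

```clojure
;; fnil accepts up to three defaults, one per leading argument.
(def safe-add (fnil + 0 0))

(safe-add nil nil) ;=> 0
(safe-add nil 5)   ;=> 5
(safe-add 2 3)     ;=> 5
```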

They unlocked the door and headed for Amy’s car. She couldn’t decide whether she was surprised or not to find her keys in her pocket, so she gave up and just got in instead. After a short drive, they arrived at the zoo and navigated their way through various school groups to the Aquarium.

Amy at this point, having prided herself on her adaptable nature, was still having trouble processing the latest events. She had discovered that Frank was a Datomic time traveller, the world was made of Clojure, and it was also about to be destroyed in a short future that she just came from. Her rational brain, (which was currently working way too hard), was quite relieved to be distracted by the sight of two really adorable otters. They were floating contentedly around the pool, occasionally stopping to crack an Abalone shell on their fuzzy tummies.

Her rational brain, after having a nice breather, finally re-asserted itself and made Amy ask Frank:

“Why are we here?”

“Otters are the front-line Chrono-guards, of course.”

He went on to explain that otters are tasked with the important job of keeping a close watch on human civilization and making critical, minor adjustments to keep things on an even track. All those nature videos of otters cracking shells with rocks? They are really evaluating Clojure expressions crucial to our way of life. Most of the time, they prefer to do their work remotely. They find floating on their backs in the peaceful waters the most productive work environment. However, sometimes they will construct zoos or aquariums when their work requires them to keep a closer watch on some areas. In times of great need, they might even take a human form for a short while. Recently, one of their agents was inadvertently exposed and required a few extra Abalone shells to straighten out.

Frank opened his pack and handed his evaluator to Amy to hold while he fished out four mini-marshmallows. He gave two to Amy and then proceeded to put one in his ear and the other in his mouth. More remarkably still, he appeared to be speaking with the otters.

Mini-marshmallows are the best way to create portable Clojure core.async channels that won’t melt in your hands.

To construct a channel simply use chan

(def talk-to-otters-chan (chan))

Channels by default are unbuffered, which keeps them at the mini-marshmallow size. It requires a rendezvous of a channel producer and consumer to communicate. In the case of otters, someone to talk to the otters and the otters, themselves, to listen. Be advised that with a regular blocking put >!!, the main thread will be blocked. That is, if you try to speak to the otter, you will be stuck there until it gets around to listening. This isn’t the best situation for the talker if the otter is busy, so one approach would be to use a future to talk to the otter with a blocking put >!!.

(future (>!! talk-to-otters-chan "Hello otters.")) ;=>#<Future@3c371c41: :pending>
(<!! talk-to-otters-chan) ;=> "Hello otters."

One could also use a buffered channel, but that increases the size of the marshmallow.

(def talk-to-otters-chan (chan 10)) ;;create channel with buffer size 10
(>!! talk-to-otters-chan "Hello otters.") ;=> nil
(>!! talk-to-otters-chan "Do you know anything about the world ending?") ;=> nil

(<!! talk-to-otters-chan) ;=> "Hello otters."
(<!! talk-to-otters-chan) ;=> "Do you know anything about the world ending?"

The best way to conserve space and time is to use asynchronous communication with go blocks that won’t block the threads. Inside these go blocks one can use the non-blocking (parking) puts >! and gets <!.

(def talk-to-otters-chan (chan))
(go (while true
      (println (<! talk-to-otters-chan))))
(>!! talk-to-otters-chan "Hello otters")
(>!! talk-to-otters-chan "Do you know anything about the world ending?")
(>!! talk-to-otters-chan "Also, you are really fuzzy and cute.")

;; (This prints out in the REPL as you talk to the otters)
Hello otters
Do you know anything about the world ending?
Also, you are really fuzzy and cute.

This compact, lightweight, and asynchronous method of communication is well suited to conversations and messaging of all sorts, including conversing with humans, animals, and other Clojure-based life forms.

(def talk-chan (chan))
(def listen-chan (chan))
(go (while true
      (println (<! listen-chan))))
(go (while true
      (>! listen-chan
          (str "You said: " (<! talk-chan)
               " " "Do you have any Abalone?"))))
(>!! talk-chan "Hello otters")
(>!! talk-chan "Do you know anything about the world ending?")
(>!! talk-chan "Also, you are really fuzzy and cute.")

;; (This prints out in the REPL as you talk to the otters)
You said: Hello otters Do you have any Abalone?
You said: Do you know anything about the world ending? Do you have any Abalone?
You said: Also, you are really fuzzy and cute. Do you have any Abalone?

Amy put one of the mini-marshmallows in her ear. She immediately began to hear the conversation that Frank was having with the otters.

“But who would want to destroy the entire world? That is really kinda over-board.”

“I don’t really know, but there was someone on Galactic Hacker News the other day that was quite tiffed over the idea that Clojure was considered a Lisp.”

Amy reached to put the other marshmallow in her mouth to ask a very important question. But unfortunately, as she moved her hand, she accidentally pushed the big red Source button on the evaluator. Suddenly, she and Frank were swept up in a vortex that spun them around and sucked them down into the ground.

Hitchhiker’s Guide to Clojure - Part 2

Published on:

Amy and Frank were hurtled quite rapidly through time and space after attaching themselves to a transaction headed through the Datomic Transactor. From there things slowed down a bit, then took a sharp left and ricocheted off again with incredible speed until they landed in another Datomic Peer, and finally appeared in the same room. Amy was quite startled by the anticlimactic nature of the whole dematerializing and rematerializing in the same exact spot, and didn’t really know what to do next. She surveyed her office and found it exactly the same, except for two distinct details. For one, the pistachio shells had disappeared, and for another, the date on the computer showed yesterday at 8:00 am. She tried to connect these facts rationally with the pistachios in her pocket and finally said,

“I am about to come into work.”

Frank, who was busily hunting through his blue zippered pack around his waist, looked up briefly.

“Well, we better get out of here then, I only have a blue fanny pack.”

The Hitchhiker’s Guide to Clojure explains that the “fanny pack”, or “bum bag”, is the symbol of a licensed Chrono-agent. The rank of the Chrono-agent can be clearly determined by its color on the ROYGBIV scale.

The origins of this licensing method can be traced to an embarrassing incident in human history known as “The Great Flood”. A junior Chrono-agent, trying to increase the yield of a tomato crop during a dry spell, attempted the following recursive function in his evaluator:

(defn rain [days]
  (when (pos? days)
    (println (str "Rain: " days))
    (rain (dec days))))

(rain 5)
;;Output
;;  Rain: 5
;;  Rain: 4
;;  Rain: 3
;;  Rain: 2
;;  Rain: 1

Unfortunately, he made the rookie mistake of forgetting to decrement the days before passing it to the recursive function.

(dec 5) ;=> 4

The result of which was severely overwatered tomatoes.

(defn rain [days]
  (when (pos? days)
    (println (str "Rain: " days))
    (rain days)))

(rain 5)
;;  Rain: 5
;;  Rain: 5
;;  Rain: 5
;;  Rain: 5
;;  Rain: 5
;;  ...(you get the idea)

It is interesting to note that he could have written the same function with a recur instead.

(defn rain [days]
  (when (pos? days)
    (println (str "Rain: " days))
    (recur days)))

(rain 5)
;;Output
;;  Rain: 5
;;  Rain: 5
;;  Rain: 5
;;  Rain: 5
;;  Rain: 5

That would have had the nice effect of not consuming the stack (which is fabulous for constructing those lovely Fibonacci sea shells for beach vacations), but without decrementing the parameter in the recursive call, it wouldn’t have really helped.
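Putting the two fixes together – recur to avoid consuming the stack, and dec so the recursion actually terminates – the function the junior agent presumably should have written is:

```clojure
(defn rain [days]
  (when (pos? days)
    (println (str "Rain: " days))
    (recur (dec days))))  ;; decrement AND stay off the stack

(rain 3)
;;  Rain: 3
;;  Rain: 2
;;  Rain: 1
```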

A senior Chrono-agent was dispatched to sort out the mess. By the time he got there and stopped the rain, there was not much left to work with. Thankfully, he was quite versed in lazy and infinite aspects of Clojure. For instance, take this vector of 2 chickens:

[:hen :rooster]

It can be transformed into an infinite lazy list of chickens by using cycle.

(cycle [:hen :rooster])

What really set the senior Chrono-agent apart from his junior colleague, was that he did not put the infinite sequence in the evaluator. If he had, there would have been another embarrassing incident in human history, this time involving an over-abundance of poultry. Instead, he used take to get the first n infinite chickens.

(take 5 (cycle [:hen :rooster]))
;;=> (:hen :rooster :hen :rooster :hen)
(take 10 (cycle [:hen :rooster]))
;;=> (:hen :rooster :hen :rooster :hen :rooster :hen :rooster :hen :rooster)

After that, the council of Chrono-agents decided to license evaluator use. Low-level recursion requires the second-highest, indigo-level rank. The highest, violet rank belongs, of course, only to the Macro Masters. All lower levels are required to stick to the safer, higher-level abstractions like for, map, or reduce.
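For the record, the license-free, lower-rank style looks something like this (my sketch, not the council’s official examples):

```clojure
;; Bounded, higher-level forms -- no raw recursion required.
(map inc [1 2 3])                   ;=> (2 3 4)
(for [d [3 2 1]] (str "Rain: " d))  ;=> ("Rain: 3" "Rain: 2" "Rain: 1")
(reduce + [1 2 3 4 5])              ;=> 15
```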

Amy was still watching Frank busily rummaging through his pack in the office. Finally he found what he was looking for, his hand emerging triumphantly with a fistful of mini-marshmallows.

“Got it. Come on, let’s go! Someone is trying to destroy the world and we need to see the otters.”

Hitchhiker’s Guide to Clojure

Published on:

The following is a cautionary example of the unpredictable combination of Clojure, a marathon viewing of the BBC’s series “The Hitchhiker’s Guide to the Galaxy”, and a questionable amount of cheese.

There have been many tourism guides to the Clojure programming language. Some that easily come to mind for their intellectual erudition and prose are “The Joy of Touring Clojure”, “Touring Clojure”, “Clojure Touring”, and the newest edition of “Touring Clojure Touring”. However, none has surpassed the wild popularity of “The Hitchhiker’s Guide to Clojure”. It has sold over 500 million copies and has been on the “BigInt’s Board of Programming Language Tourism” for the past 15 years. While, arguably, it lacked the in-depth coverage of the other guides, it made up for it in useful practical tips, such as what to do if you find a nil in your pistachio. Most of all, the cover had the following words printed in very large letters: Don’t Worry About the Parens.

To tell the story of the book, it is best to tell the story of two people whose lives were affected by it: Amy Denn, one of the last remaining Pascal developers in Cincinnati, and Frank Pecan, a time traveler, guidebook researcher, and friend of Amy.

Amy, at this moment, was completely unaware of the chronological advantages of her friend, being preoccupied with the stark fact that she was about to be fired. She had been given a direct order from her CEO to deploy the code at 3:05pm. It was now 3:00pm, and she had realized that if she did so, all the data painstakingly collected about the effects of Throat Singing on the growth rate of tomatoes would be erased. Unfortunately, the CEO did not really understand or trust anything having to do with technology or programming. In truth, the only two things that he seemed to care about were tomatoes and checklists of unreasonable things. The fact that no course of action available to her in the next 5 minutes would help her employment situation agitated Amy so much that she was violently shelling and eating pistachio nuts.

The “Hitchhiker’s Guide to Clojure” says that pistachios are Nature’s most perfect s-expression. An s-expression is recursively composed of s-expressions or an atom. In the case of the humble pistachio, the atom is the nut inside. The atom simply evaluates to itself. This is best seen in an example, where the following expressions are evaluated in the Clojure REPL:

"hi" ;;=> "hi"
1 ;;=> 1
true ;;=> true
nil ;;=> nil


Which leads to the very practical tip of what to do if you find a nil in your pistachio. The answer, of course, is to be thankful that you have a value that represents the absence of a value – and to get another pistachio.

In Clojure, an s-expression is written with parens. The first element within the parens is an operator or function, and the rest of the elements are treated as data, some of which can be s-expressions themselves.

(+ 1 2) ;;=> 3
(+ 1 (+ 2 2)) ;;=> 5

Considering the pistachio again, we can think of the nut in the shell as an s-expression, (providing we also imagine an operator or function right in front of the nut).

Here we define a function that will turn the nut red, by appending the string “red” to the nut-name.

(defn red [nut]
  (str "red " nut))

(red "nut1") ;;=> "red nut1"

Notice that if we put a quote in front of the expression, it will no longer be evaluated.

'(red "nut1") ;;=> (red "nut1")

Quoting the expression turns it into a list, which we can then manipulate with other s-expressions (code as data).

(first '(red "nut1")) ;;=> red

(last '(red "nut1")) ;;=> "nut1"

If we try to evaluate the s-expression with just the nut name in the parens, we get an error because there is no function in the first slot.

("nut1")
;;=> ClassCastException java.lang.String cannot be cast to clojure.lang.IFn

The whole business of having to have a function in front of the nut in the pistachio has invited much heated debate on the suitability of pistachios being held up as the paragon of an s-expression. But critics have failed to explain the corroborating evidence of red pistachio nuts, or to find a more suitable nut.

Amy’s time traveling friend, Frank, is due to appear on the scene momentarily to reveal that the whole world is really made of Clojure Datomic datoms. Furthermore, a transaction is going to be evaluated soon, which will retract all the facts on EVERYTHING. The practical effect of this will be that nothing will have any attributes. A world without any attributes at all would be quite boring and, for all purposes, be non-existent. Luckily for Amy, Frank is a Datomic Time Traveller and has a hand-held “evaluator” which will save them. Also luckily, the readers will be spared dialog, since the author can never figure out where to put the punctuation and is really rubbish at it. Only one phrase will be illustrated. This is the rather important one, having been uttered by Amy after it was explained to her that she, and the entire world around her, was entirely composed of Clojure:

“Isn’t that the language with a lot of parens?”

To which, Frank handed her the “Hitchhiker’s Guide to Clojure” and pointed to the words on the front cover, “Don’t Worry About the Parens.”, and turned to the first page.

“There is absolutely no need to worry about the parens. It is known today that the first really important discovery of humankind was not fire, but Paredit. Paredit mode magically acts to insert and balance the right parens to the point where they actually can no longer be seen. This is evident by just looking around you. The world is made of Clojure and there are millions, billions, and trillions of parens all around you and your tea cup right now. Yet, you don’t see them. Paredit mode.”

At the urging of Frank, Amy quickly stuffed the remaining pistachios in her pockets while he readied his evaluator. The display showed some large integer value, that decreased as he pushed the buttons on the console. Finally, he pushed the large red button and two parens started glowing on either side of them … and they disappeared.

Lean Customer Interview Tips for the Introverted Developer

Published on:
Tags: All, Lean

After attending a local Lean Startup Circle meetup, I decided to write about some of my experiences with the Lean Startup Methodology from a software developer’s point of view.

Why should you care about the Lean Startup Methodology?

As a software developer, I put my passion, honed expertise, and time into crafting a digital product or service. One of the worst things that can happen is that when it is released, no one uses it or wants it. You can build an absolutely beautiful software product that scales to the nines. But if you build the wrong thing, it is a failure.

The Lean Startup Methodology is basically a scientific approach to developing businesses and products. You analyze your assumptions and then devise experiments to test your hypotheses. One of the ways that you can test them is by talking to people and doing customer interviews.

Talking to Random People is Terrifying

The prospect of talking to random people on the street is terrifying for me as a semi-introverted software developer. But it is also incredibly useful to get out of the office and actually get feedback. These tips come from a Lean Startup Weekend in Columbus at the Neo office, where I successfully got out of my comfort zone and engaged in customer interviews.

Background: Our team was designing experiments around creating an app for Food Trucks. The fundamental assumption that we wanted to validate was – “People will pay money for a phone app that will tell them where all the Food Trucks are.” So we headed downtown to the local food market, a food hall filled with local artisan vendors. It seemed like an ideal place to find people interested in good food and Food Trucks.

Tip #1 – You will suck at first, but it gets better

The first few people I tried to talk to were complete failures. I felt like an idiot. Do not get discouraged. It helps if you go with someone else for moral support, although you should interview people by yourself, so they don’t feel intimidated.

Tip #2 – Have your questions written down

Come prepared with the questions that you want to ask people, so you don’t freeze up from nervousness. However, I found that people talked to me more if I didn’t carry the pad of paper with me. Basically, anything you can do to look less like a marketer helps.

Tip #3 – Tell them what you are trying to build first

DO NOT START OUT LIKE THIS: “Can I ask you a few questions?” This never worked. Again, this is what a marketer would say. I got my best responses by telling people that I was a software developer looking to build an app for Food Trucks. In most cases, they were happy to give advice on whether they would use the app and how much they would pay for it.

Tip #4 – Write down your results right away

Memory is a fleeting thing. Try to record the results of your conversation right away. Take the notepad from your pocket, go to a corner or table, and note everything down before you forget it all. Also try to write down what the person actually said, not just your interpretation. If someone is helping you interview, one person can be the scribe while the other talks.

Tip #5 – Give Gifts

If you have any funding available for this endeavor, you can get a stack of $10 Amazon gift cards to thank people for their time. This was advice given to us. I didn’t actually try it on this particular outing, but I have heard that others have used it very successfully.

Getting out of your Comfort Zone is Scary but Rewarding

I certainly got out of my comfort zone as a developer that weekend, but the end result was worth it. We ended up disproving our hypothesis that people would pay for our app. Almost all the people we interviewed said that they would download the app, but no one was willing to pay for it. We invalidated a core assumption, and that made it a success.

We could move on to testing and validating another idea that could be a viable business product.

Build things that matter. Build well. Build the right things.

Thanks to Scott Burwinkel for helping review this post for me – you rock

Guide to Leaving Your Mac Laptop

Published on:
Tags: all

I felt like I was in a controlling relationship headed downhill. After sending back two custom laptops for defective hardware, I wanted to leave. But leaving didn’t seem so easy after living in the walled garden of Apple all those years.

This blog post is about how to leave your Mac and return to OSS.

Make a New Plan, Stan

There are quite a few nice alternatives to the Mac Air out there. I decided to go with the new Sputnik 3. Some of my reasons:

  • Powerful – New Haswell processor
  • 13.3 inch touch display with 1920 x 1080 resolution
  • Ships with Ubuntu 12.04 (64 bit)
  • Nice design (yes looks are important)

It arrived a couple of days before Christmas. The packaging itself was quite nice. Here is a picture next to my 13 inch Mac Air.

The best part was that everything just “worked” out of the box. I had no problems configuring Ubuntu and getting the wireless network hooked up. I could close the lid and reopen it and have “instant on” just like the Mac Air. The keyboard is enjoyable to use and nicely backlit. The sleek design and light weight of the laptop are very comparable to the Mac Air.

Hop on the Bus, Gus

It took me about a day to set up all the programs that I use on a daily basis. Here is an overview:

Application Dock/ Organization – Dash

Ubuntu has a dock on the left-hand side of the screen that is very similar to the Mac one. You can right-click and pin applications to the dock to keep them there. Clicking the Dash option, you can browse the applications that are installed.

Getting New Apps – Ubuntu Software Center or apt-get

You can install new applications easily by using the Ubuntu Software Center. Browsing the applications and installing them is point-and-click easy. If you don’t see the one you need, or need a more recent version, you can always install via the command line with:

sudo apt-get install package-name

Browser – Firefox or Chromium

Ubuntu comes with Firefox and Chromium installed. You can also go with Chrome of course.

Mail – Thunderbird

Ubuntu comes with Thunderbird mail ready to go. I was pleasantly surprised by how easy it was to set up Thunderbird Mail. You simply put in your email and password. Ubuntu keeps a configuration list of commonly used email providers, so it automagically figured out the correct domains and ports to use. On the downside, it doesn’t do anything magic with your contacts, so you are on your own there. I also just found out about Geary, which looks pretty sweet.

Password Management – 1 Password Anywhere + Dropbox / LastPass

There is no Linux client for 1Password, but I can still use it through 1PasswordAnywhere. I just have a bookmark to the 1PasswordAnywhere link and I haz my logins. I am switching over to LastPass though, so that I can edit and add new passwords. There is also an import utility to move stuff over from 1Password.

Emacs

Emacs just works :) It might be just me, but I think it is happier back on Ubuntu. I did an apt-get to get version 24.

Git Client

I went with gitg for a graphical Git client. It seems to have all the things you need.

Terminal – Byobu

Byobu Terminal comes already installed in Ubuntu. I have been taking it for a test drive and really like some of its features: easily adding new tabs, splitting screens, and re-attaching to sessions.

Evernote/ Everpad

With Everpad, I can still use all my Evernote stuff too.

Presentations – LibreOffice / Reveal.js

I have used Keynote heavily on the Mac. For existing presentations, I can convert them to ppt format and then modify or run in LibreOffice Impress. Most likely with all my new presentations, I will just use a JavaScript framework like reveal.js

Communication – Hipchat / Skype/ Google Hangouts / Campfire

We use Hipchat for messaging at work. Hipchat has a Linux client that works just the same. Skype also has a Linux client. Of course, Google Hangouts is just fine on the web. I also use Campfire sometimes. There are a couple of Linux clients out there, but I haven’t tried them yet. The web version works fine for me right now.

iPhone

On my Mac, I used to plug in my phone and sync to my Dropbox. I tried plugging in my phone, but unfortunately, iOS 7 added a security feature that prevents the phone from connecting properly. The solution for me is to just use the Dropbox phone app to sync the pictures automatically to my Dropbox.

Get Yourself Free

I don’t expect the road to be free of bumps. I have only been using my new laptop for a week. But so far, it has been an enjoyable switch. The hardware is really impressive, and it feels good getting back to OSS.

Best of all, I set myself free.

Neural Networks in Clojure With core.matrix

Published on:

After having spent some time recently looking at top-down AI, I thought I would spend some time looking at bottom-up AI: machine learning and neural networks.

I was pleasantly introduced to @mikea’s core.matrix at Clojure Conj this year and wanted to try making my own neural network using the library. The purpose of this blog is to share my learnings along the way.

What is a neural network?

A neural network is an approach to machine learning that involves simulating, in an idealized way, how our brains work on a biological level. There are three layers to a neural network: the input layer, the hidden layers, and the output layer. Each layer consists of neurons that have a value. In each layer, each neuron is connected to the neurons in the next layer by a connection strength. To get data into the neural network, you assign values to the input layer (values between 0 and 1). These values are then “fed forward” to the hidden layer neurons through an algorithm that relies on the input values and the connection strengths. The values are finally “fed forward” in a similar fashion to the output layer. The “learning” portion of the neural network comes from “training” the network with data. The training data consists of a collection of associated input values and target values. At a high level, the training process looks like this:

  • Feed forward input values to get the output values
  • How far off are the output values from the target values?
  • Calculate the error values and adjust the strengths of the network
  • Repeat until you think it has “learned” enough – that is, until feeding in the input values produces output values close enough to the target you are looking for

The beauty of this system is that the neural network, (given the right configuration and the right training), can approximate any function – just by exposing it to data.

Start Small

I wanted to start with a very small network so that I could understand the algorithms and actually do the maths for the tests along the way. The network configuration I chose is one with 1 hidden layer. The input layer has 2 neurons, the hidden layer has 3 neurons and the output layer has 2 neurons.

;;Neurons
;;  Input Hidden  Output
;;  A     1       C
;;  B     2       D
;;        3


;; Connection Strengths
;; Input to Hidden => [[A1 A2 A3] [B1 B2 B3]]
;; Hidden to Output => [[1C 1D] [2C 2D] [3C 3D]]

In this example we have:

  • Input Neurons: neuronA neuronB
  • Hidden Neurons: neuron1 neuron2 neuron3
  • Output Neurons: neuronC neuronD
  • Connections between the Input and Hidden Layers
    • neuronA-neuron1
    • neuronA-neuron2
    • neuronA-neuron3
    • neuronB-neuron1
    • neuronB-neuron2
    • neuronB-neuron3
  • Connections between the Hidden and Output Layers
    • neuron1-neuronC
    • neuron1-neuronD
    • neuron2-neuronC
    • neuron2-neuronD
    • neuron3-neuronC
    • neuron3-neuronD

To give us a concrete example to work with, let’s actually assign all our neurons and connection strengths to some real values.

(def input-neurons [1 0])
(def input-hidden-strengths [ [0.12 0.2 0.13]
                              [0.01 0.02 0.03]])
(def hidden-neurons [0 0 0])
(def hidden-output-strengths [[0.15 0.16]
                              [0.02 0.03]
                              [0.01 0.02]])

Feed Forward

Alright, we have values in the input neuron layer, so let’s feed them forward through the network. The new value of a neuron in the hidden layer is the sum of all the inputs of its connections, each multiplied by its connection strength. The neuron can also have its own threshold (meaning you would subtract the threshold from the sum of inputs), but to keep things as simple as possible in this example, the threshold is 0 – so we will ignore it. The sum is then fed into an activation function, which has an output in the range of -1 to 1. Our activation function is the tanh function. We will also need the derivative of the tanh function a little later when we are calculating errors, so we will define both here.

;; assumes core.matrix is referred in, e.g. with
;; (:require [clojure.core.matrix :refer :all]
;;           [clojure.core.matrix.operators :refer :all])

(def activation-fn (fn [x] (Math/tanh x)))
;; derivative of tanh, expressed in terms of the tanh output y
(def dactivation-fn (fn [y] (- 1.0 (* y y))))

(defn layer-activation
  "forward propagate the input of a layer"
  [inputs strengths]
  (mapv activation-fn
        (mapv #(reduce + %)
              (* inputs (transpose strengths)))))

Note how nicely core.matrix handles multiplying the vectors <3.
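One subtlety worth calling out: dactivation-fn takes the output of tanh, not its input. A quick finite-difference check (at an arbitrarily chosen point, x = 0.5) confirms that 1 - y² really is the slope of tanh there:

```clojure
;; compare (1 - y^2), where y = tanh(x), against a numerical
;; estimate of the slope of tanh at x
(let [x 0.5
      y (Math/tanh x)
      numeric (/ (- (Math/tanh (+ x 1e-6))
                    (Math/tanh (- x 1e-6)))
                 2e-6)]
  (- (- 1.0 (* y y)) numeric))
;; the difference is vanishingly small
```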

So now if we calculate the hidden neuron values from the input [1 0], we get:

(layer-activation input-neurons input-hidden-strengths)
;=>  [0.11942729853438588 0.197375320224904 0.12927258360605834]
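We can sanity-check the first hidden neuron by hand: neuronA (value 1) connects to neuron1 with strength 0.12, and neuronB (value 0) connects with strength 0.01, so the weighted sum is just 0.12:

```clojure
;; weighted sum of the inputs to hidden neuron 1,
;; fed through the tanh activation function
(Math/tanh (+ (* 1 0.12) (* 0 0.01)))
;=> 0.11942729853438588
```

which matches the first element of the result above.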

Let’s just remember those hidden neuron values for our next step.

(def new-hidden-neurons
  (layer-activation input-neurons input-hidden-strengths))

Now we do the same thing to calculate the output values:

(layer-activation new-hidden-neurons hidden-output-strengths)
;=>  [0.02315019005321053 0.027608061500083565]

(def new-output-neurons
  (layer-activation new-hidden-neurons hidden-output-strengths))

Alright! We have our answer: [0.02315019005321053 0.027608061500083565]. Notice that the two output values are pretty much the same. This is because we haven’t trained our network to do anything yet.

Backwards Propagation

To train our network, we have to let it know what the answer (or target) should be, so we can calculate the errors and finally update our connection strengths. For this simple example, let’s just invert the data – given an input of [1 0], the output should be [0 1].

(def targets [0 1])


Calculate the errors of the output layer

The first errors that we need to calculate are the ones for the output layer. They are found by subtracting the actual value from the target value and then multiplying by the gradient/derivative of the activation function.

(defn output-deltas
  "measures the delta errors for the output layer
  (desired value - actual value) multiplied by the
  gradient of the activation function"
  [targets outputs]
  (* (mapv dactivation-fn outputs)
     (- targets outputs)))

(output-deltas targets new-output-neurons)
;=> [-0.023137783141771645 0.9716507764442904]
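Checking the first element by hand: output neuron C has value 0.02315019005321053 and a target of 0, so its delta is (1 - out²) * (0 - out):

```clojure
;; gradient of the activation at the output, times the error
(let [out 0.02315019005321053]
  (* (- 1.0 (* out out)) (- 0 out)))
;=> -0.023137783141771645
```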


Great, let’s remember these output deltas for later.

(def odeltas (output-deltas targets new-output-neurons))

Calculate the errors of the hidden layer

The errors of the hidden layer are based on the deltas that we just found for the output layer. For each hidden neuron, the error delta is the gradient of the activation function multiplied by the weighted sum of the output deltas of the connected output neurons, where each weight is the connection strength. This should remind you of the forward propagation of the inputs – but this time we are doing it backwards with the error deltas.

(defn hlayer-deltas
  "measures the delta errors for the hidden layer"
  [odeltas neurons strengths]
  (* (mapv dactivation-fn neurons)
     (mapv #(reduce + %)
           (* odeltas strengths))))

(hlayer-deltas
    odeltas
    new-hidden-neurons
    hidden-output-strengths)
;=>  [0.14982559238071416 0.027569216735265096 0.018880751432503236]

Great, let’s remember the hidden layer error deltas for later.

(def hdeltas (hlayer-deltas
              odeltas
              new-hidden-neurons
              hidden-output-strengths))

Updating the connection strengths

Great! We have all the error deltas, now we are ready to go ahead and update the connection strengths. In general this is the same process for both the hidden-output connections and the input-hidden connections.

  • weight-change = error-delta * neuron-value
  • new-weight = weight + learning rate * weight-change

The learning rate controls how fast the weights and errors should be adjusted. If the learning rate is too high, then there is the danger that it will converge too fast and not find the best solution. If the learning rate is too low, it may never actually converge to the right solution given the training data that it is using. For this example, let’s use a learning rate of 0.2.

(def learning-rate 0.2)

(defn update-strengths [deltas neurons strengths lrate]
  (+ strengths (* lrate
                  (mapv #(* deltas %) neurons))))

Update the hidden-output strengths

Updating this layer, we use:

  • weight-change = odelta * hidden value
  • new-weight = weight + (learning rate * weight-change)
(update-strengths
       odeltas
       new-hidden-neurons
       hidden-output-strengths
       learning-rate)
;=> [[0.14944734341306073 0.18320832546991603]
    [0.019086634528619688 0.06835597662949369]
    [0.009401783798869296 0.04512156124675721]]
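As a hand check on the first entry: the 1→C weight starts at 0.15, the delta for output neuron C is -0.023137783141771645, and hidden neuron 1 has value 0.11942729853438588 (all values from above, with a learning rate of 0.2):

```clojure
;; new-weight = weight + learning-rate * (delta * neuron-value)
(+ 0.15 (* 0.2 (* -0.023137783141771645 0.11942729853438588)))
;=> 0.14944734341306073
```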

Of course, let’s remember these values too.

(def new-hidden-output-strengths
  (update-strengths
       odeltas
       new-hidden-neurons
       hidden-output-strengths
       learning-rate))

Update the input-hidden strengths

We are going to do the same thing with the input-hidden strengths too.

  • weight-change = hdelta * input value
  • new-weight = weight + (learning rate * weight-change)
 (update-strengths
           hdeltas
           input-neurons
           input-hidden-strengths
           learning-rate)
;=>  [[0.14996511847614283 0.20551384334705303 0.13377615028650064]
           [0.01 0.02 0.03]]

These are our new strengths:

(def new-input-hidden-strengths
  (update-strengths
       hdeltas
       input-neurons
       input-hidden-strengths
       learning-rate))

Putting the pieces together

We have done it! In our simple example we have:

  • Forward propagated the input to get the output
  • Calculated the errors from the target through backpropagation
  • Updated the connection strengths/weights

We just need to put all the pieces together. We’ll do this with the values that we got earlier to make sure it is all working.

Construct a network representation

It would be nice if we could represent an entire neural network as a data structure. That way, the whole transformation of feeding forward and training the network could give us a new network back. So let’s define the data structure as [input-neurons input-hidden-strengths hidden-neurons hidden-output-strengths output-neurons].

We will start off with all the values of the neurons being zero.

(def nn [[0 0] input-hidden-strengths hidden-neurons
         hidden-output-strengths [0 0]])

Generalized feed forward

Now we can make a feed forward function that takes this network and constructs a new network based on input values and the layer-activation function that we defined earlier.

(defn feed-forward [input network]
  (let [[in i-h-strengths h h-o-strengths out] network
        new-h (layer-activation input i-h-strengths)
        new-o (layer-activation new-h h-o-strengths)]
    [input i-h-strengths new-h h-o-strengths new-o]))

This should match up with the values that we got earlier when we were just working on the individual pieces.

(testing "feed forward"
  (is (== [input-neurons input-hidden-strengths new-hidden-neurons hidden-output-strengths new-output-neurons]
          (feed-forward [1 0] nn))))


Generalized update weights / connection strengths

We can make a similar update-weights function that calculates the errors and returns a new network with the updated weights.

(defn update-weights [network target learning-rate]
  (let [[ in i-h-strengths h h-o-strengths out] network
        o-deltas (output-deltas target out)
        h-deltas (hlayer-deltas o-deltas h h-o-strengths)
        n-h-o-strengths (update-strengths
                         o-deltas
                         h
                         h-o-strengths
                         learning-rate)
        n-i-h-strengths (update-strengths
                         h-deltas
                         in
                         i-h-strengths
                         learning-rate)]
    [in n-i-h-strengths h n-h-o-strengths out]))

This too should match up with the pieces from the earlier examples.

(testing "update-weights"
  (is ( == [input-neurons
            new-input-hidden-strengths
            new-hidden-neurons
            new-hidden-output-strengths
            new-output-neurons]
           (update-weights (feed-forward [1 0] nn) [0 1] 0.2))))

Generalized train network

Now we can make a function that takes input and a target and feeds the input forward and then updates the weights.

(defn train-network [network input target learning-rate]
  (update-weights (feed-forward input network) target learning-rate))

(testing "train-network"
  (is (== [input-neurons
            new-input-hidden-strengths
            new-hidden-neurons
            new-hidden-output-strengths
           new-output-neurons]
          (train-network nn [1 0] [0 1] 0.2))))

Try it out!

We are ready to try it out! Let’s train our network on a few examples of inverting the data.

(def n1 (-> nn
     (train-network [1 0] [0 1] 0.5)
     (train-network [0.5 0] [0 0.5] 0.5)
     (train-network [0.25 0] [0 0.25] 0.5)))

We’ll also make a helper function that just returns the output neurons for the feed-forward function.

(defn ff [input network]
  (last (feed-forward input network)))

Let’s look at the results of the untrained and the trained networks:

;;untrained
(ff [1 0] nn) ;=> [0.02315019005321053 0.027608061500083565]
;;trained
(ff [1 0] n1) ;=> [0.03765676393050254 0.10552175312900794]

Whoa! The trained example isn’t perfect, but we can see that it is getting closer to the right answer. It is learning!

MOR Training Data

Well, this is really cool, and it is working. But it would be nicer to be able to present a whole set of training data for it to learn on. For example, it would be nice to have the training data structure look like:

[ [input target] [input target] ... ]

Let’s go ahead and define that.

(defn train-data [network data learning-rate]
  (if-let [[input target] (first data)]
    (recur
     (train-network network input target learning-rate)
     (rest data)
     learning-rate)
    network))

Let’s try that out on the example from earlier:

(def n2 (train-data nn [
                        [[1 0] [0 1]]
                        [[0.5 0] [0 0.5]]
                        [[0.25 0] [0 0.25] ]
                        ]
                    0.5))

(ff [1 0] n2) ;=> [0.03765676393050254 0.10552175312900794]

Cool, we can now train on data sets. That means we can construct data sets out of infinite lazy sequences too. Let’s make a generator that produces a random input and its inverse as a training pair.

(defn inverse-data []
  (let [n (rand 1)]
    [[n 0] [0 n]]))

Let’s see how well our network is doing after we train it with some more data

(def n3 (train-data nn (repeatedly 400 inverse-data) 0.5))

(ff [1 0] n3) ;=> [-4.958278484025221E-4 0.8211647699205362]
(ff [0.5 0] n3) ;=> [2.1645760787874696E-4 0.5579396715416916]
(ff [0.25 0] n3) ;=> [1.8183385523103048E-4 0.31130601296149013]

Wow. The more examples it sees, the better the network gets at learning what to do!

General Construct Network

The only piece that we are missing now is a function that will create a general neural network for us. We can choose how many input neurons, hidden neurons, and output neurons we want and have a network constructed with random weights.

(defn gen-strengths [to from]
  (let [l (* to from)]
    (map vec (partition from (repeatedly l #(rand (/ 1 l)))))))

(defn construct-network [num-in num-hidden num-out]
  (vec (map vec [(repeat num-in 0)
             (gen-strengths num-in num-hidden)
             (repeat num-hidden 0)
             (gen-strengths num-hidden num-out)
             (repeat num-out 0)])))

Now we can construct our network from scratch and train it.

(def tnn (construct-network 2 3 2))
(def n5 (train-data tnn (repeatedly 1000 inverse-data) 0.2))
(ff [1 0] n5) ;=> [-4.954958580800465E-4 0.8160149309699489]

And that’s it. We have constructed a neural network with core.matrix.

Want more?

I put together a GitHub library based on the neural network code in this post. It is called K9, named after Dr. Who’s best dog friend. You can find the examples we have gone through in the tests. There is also an example program using it in the examples directory. It learns what colors are based on their RGB values.

There are a couple of web resources I would recommend if you want to look further as well.

Go forth and create Neural Networks!