<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Deep Learning on Squid's Blog</title><link>https://gigasquidsoftware.com/categories/deep-learning/</link><description>Recent content in Deep Learning on Squid's Blog</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 15 Mar 2021 09:07:00 +0000</lastBuildDate><atom:link href="https://gigasquidsoftware.com/categories/deep-learning/atom.xml" rel="self" type="application/rss+xml"/><item><title>Breakfast with Zero-Shot NLP</title><link>https://gigasquidsoftware.com/blog/2021/03/15/breakfast-with-zero-shot-nlp/</link><pubDate>Mon, 15 Mar 2021 09:07:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2021/03/15/breakfast-with-zero-shot-nlp/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://i.imgflip.com/51ror1.jpg"&gt;&lt;/p&gt;
&lt;p&gt;What if I told you that you could pick up an off-the-shelf model and instantly classify text with arbitrary categories, without any training or fine-tuning?&lt;/p&gt;
&lt;p&gt;That is exactly what we are going to do with &lt;a href="https://joeddav.github.io/blog/2020/05/29/ZSL.html"&gt;Hugging Face&amp;rsquo;s zero-shot learning model&lt;/a&gt;. We will also be using &lt;a href="https://github.com/clj-python/libpython-clj"&gt;libpython-clj&lt;/a&gt; to do this exploration without leaving the comfort of our trusty Clojure REPL.&lt;/p&gt;
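To preview where we're headed, the whole interaction can be roughly this small. This is a hypothetical sketch, assuming libpython-clj (v1 namespaces) is on the classpath and the Python transformers library is installed:

```clojure
;; Sketch only: assumes libpython-clj (v1 namespaces) on the classpath
;; and the Python transformers library installed in the environment.
(require '[libpython-clj.require :refer [require-python]])

;; pull in the Python module as if it were a Clojure namespace
(require-python '[transformers :as transformers])

;; build the zero-shot pipeline once; downloads a model on first use
(def classifier (transformers/pipeline "zero-shot-classification"))

;; bridged Python callables can be invoked directly from Clojure
(classifier "Fluffy pancakes topped with maple syrup and fresh berries."
            ["breakfast" "lunch" "dinner"])
```

The pipeline returns the candidate labels ranked by score, so we can just read off the top one.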
&lt;h3 id="whats-for-breakfast"&gt;What&amp;rsquo;s for breakfast?&lt;/h3&gt;
&lt;p&gt;We&amp;rsquo;ll start off by taking some text from a recipe description and trying to decide if it&amp;rsquo;s for breakfast, lunch or dinner:&lt;/p&gt;</description></item><item><title>Thoughts on AI Debate 2</title><link>https://gigasquidsoftware.com/blog/2020/12/24/thoughts-on-ai-debate-2/</link><pubDate>Thu, 24 Dec 2020 10:59:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2020/12/24/thoughts-on-ai-debate-2/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://montrealartificialintelligence.com/aidebate2mosaic1440x720v8.jpg"&gt;&lt;/p&gt;
&lt;h2 id="ai-debate-2-from-montrealai"&gt;AI Debate 2 from Montreal.AI&lt;/h2&gt;
&lt;p&gt;I had the pleasure of watching the second AI debate from Montreal.AI last night. The first AI debate, &lt;a href="https://montrealartificialintelligence.com/aidebate.html"&gt;“The Best Way Forward for AI”&lt;/a&gt;, took place last year between &lt;a href="https://yoshuabengio.org/"&gt;Yoshua Bengio&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Gary_Marcus"&gt;Gary Marcus&lt;/a&gt;. Bengio argued that Deep Learning could achieve General AI through its own paradigm, while Marcus argued that Deep Learning alone was not sufficient and needed a hybrid approach involving symbolic methods and inspiration from other disciplines.&lt;/p&gt;</description></item><item><title>Hugging Face GPT with Clojure</title><link>https://gigasquidsoftware.com/blog/2020/01/10/hugging-face-gpt-with-clojure/</link><pubDate>Fri, 10 Jan 2020 19:33:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2020/01/10/hugging-face-gpt-with-clojure/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://live.staticflickr.com/65535/49364554561_6e4f4d0a51_w.jpg"&gt;&lt;/p&gt;
&lt;p&gt;A new age in Clojure has dawned. We now have interop access to any Python library with &lt;a href="https://github.com/cnuernber/libpython-clj"&gt;libpython-clj&lt;/a&gt;.&lt;/p&gt;
&lt;br&gt;
&lt;p&gt;Let me pause a minute to repeat.&lt;/p&gt;
&lt;br&gt;
&lt;p&gt;&lt;strong&gt;You can now interop with ANY Python library.&lt;/strong&gt;&lt;/p&gt;
&lt;br&gt;
&lt;p&gt;I know. It&amp;rsquo;s overwhelming. It took a bit for me to come to grips with it too.&lt;/p&gt;
&lt;br&gt;
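To make that concrete, here is about the smallest possible taste of the interop. A sketch, assuming libpython-clj is on the classpath and a local Python environment is configured:

```clojure
;; A minimal taste of the interop (assumes libpython-clj on the
;; classpath and a local Python environment configured).
(require '[libpython-clj.require :refer [require-python]])

;; pull in a Python module as if it were a Clojure namespace
(require-python '[math :as pymath])

(pymath/sqrt 16)  ;; calls Python's math.sqrt => 4.0
```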
&lt;p&gt;Let&amp;rsquo;s take an example of something that I&amp;rsquo;ve &lt;em&gt;always&lt;/em&gt; wanted to do and have struggled mightily to find a way to do in Clojure:&lt;br&gt;
I want to use the latest cutting-edge GPT-2 code out there to generate text.&lt;/p&gt;</description></item><item><title>Integrating Deep Learning with clojure.spec</title><link>https://gigasquidsoftware.com/blog/2019/10/11/integrating-deep-learning-with-clojure.spec/</link><pubDate>Fri, 11 Oct 2019 13:51:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/10/11/integrating-deep-learning-with-clojure.spec/</guid><description>&lt;p&gt;clojure.spec allows you to write specifications for data and use them for validation. It also provides a generative aspect that allows for robust testing as well as an additional way to understand your data through manual inspection. The dual nature of validation and generation is a natural fit for deep learning models that consist of paired discriminator/generator models.&lt;/p&gt;
&lt;br&gt;
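To see the duality in miniature, here is the validate/generate pairing in plain clojure.spec, before any deep learning model enters the picture:

```clojure
;; The validate/generate duality in plain clojure.spec.
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; a spec for a single handwritten-digit label
(s/def ::digit (s/int-in 0 10))

(s/valid? ::digit 7)             ;; => true  (validation)
(gen/generate (s/gen ::digit))   ;; => some digit 0-9 (generation)
```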
&lt;p&gt;&lt;strong&gt;TL;DR: In this post we show that you can leverage the dual nature of clojure.spec&amp;rsquo;s validator/generator to incorporate a deep learning model&amp;rsquo;s classifier/generator.&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Focus On the Generator</title><link>https://gigasquidsoftware.com/blog/2019/09/06/focus-on-the-generator/</link><pubDate>Fri, 06 Sep 2019 18:07:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/09/06/focus-on-the-generator/</guid><description>&lt;p&gt;&lt;a data-flickr-embed="true" href="https://www.flickr.com/photos/smigla-bobinski/19705409981/in/album-72157647756733695/" title="SIMULACRA by Karina Smigla-Bobinski"&gt;&lt;img src="https://live.staticflickr.com/330/19705409981_4e0ae93572.jpg" width="500" height="267" alt="SIMULACRA by Karina Smigla-Bobinski"&gt;&lt;/a&gt;&lt;script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;In the first post of this series, we took a look at a &lt;a href="https://gigasquidsoftware.com/blog/2019/08/16/simple-autoencoder/"&gt;simple autoencoder&lt;/a&gt;. It took an image and transformed it back to an image. Then, we &lt;a href="https://gigasquidsoftware.com/blog/2019/08/30/focus-on-the-discriminator/"&gt;focused in on the discriminator&lt;/a&gt; portion of the model, where we took an image and transformed it to a label. Now, we focus in on the generator portion of the model to do the inverse operation: we transform a label to an image. In recap:&lt;/p&gt;</description></item><item><title>Focus on the Discriminator</title><link>https://gigasquidsoftware.com/blog/2019/08/30/focus-on-the-discriminator/</link><pubDate>Fri, 30 Aug 2019 10:16:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/08/30/focus-on-the-discriminator/</guid><description>&lt;p&gt;&lt;a data-flickr-embed="true" href="https://www.flickr.com/photos/marcomagrini/698692268/in/photolist-24JYSq-hTTAJN-4gjQW9-9GRKCW-4gfNhz-x2yZ-6Nnwy1-6Lm68p-66BVjW-8hawRk-4sE2Jz-5Z6uvQ-6B4iH3-qzDvGU-aNpvLT-9UFZLh-egKvNt-bMh6PR-ceG9AL-gDqtze-96JhRW-7EWMH6-3MTfDt-9rUJ4W-dFPssj-8LLrys-aDAda3-9rUJ45-7xLAFR-prSHik-7yDFHC-7erqEc-6YJx8e-39SyR4-dkQnGi-7hy6zT-4UokrH-hkMoBr-9tBN3K-jq8Bpu-aDMSk2-pwQdmt-9tFrUD-6TzF6G-WDAsCC-8Mm4tD-8M8hyS-4yzkGK-67MPUw-crfg" title="sunflowers"&gt;&lt;img src="https://live.staticflickr.com/1007/698692268_b31d429272.jpg" width="500" height="325" alt="sunflowers"&gt;&lt;/a&gt;&lt;script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;In the &lt;a href="https://gigasquidsoftware.com/blog/2019/08/16/simple-autoencoder/"&gt;last post&lt;/a&gt;, we took a look at a simple autoencoder. The autoencoder is a deep learning model that takes in an image and, through an encoder and decoder, works to produce the same image. In short:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Autoencoder: image -&amp;gt; image&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a discriminator, we are going to focus on only the first half of the autoencoder.&lt;/p&gt;
&lt;p&gt;&lt;img alt="discriminator" loading="lazy" src="https://live.staticflickr.com/65535/48647347383_9577b7b672_b.jpg"&gt;&lt;/p&gt;
&lt;p&gt;Why only half? We want a different transformation. We are going to want to take an image as input and then do some &lt;em&gt;discrimination&lt;/em&gt; of the image and classify what type of image it is. In our case, the model is going to input an image of a handwritten digit and attempt to decide which number it is.&lt;/p&gt;</description></item><item><title>Simple Autoencoder</title><link>https://gigasquidsoftware.com/blog/2019/08/16/simple-autoencoder/</link><pubDate>Fri, 16 Aug 2019 16:16:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/08/16/simple-autoencoder/</guid><description>&lt;p&gt;&lt;a data-flickr-embed="true" href="https://www.flickr.com/photos/horlik/2901925672/in/photolist-5qr8pf-qkv3m8-32RwmC-dZBC2B-ja8ch-48vDg-f56TGS-oUfNKn-652ZqG-QnCrbX-y3C828-jeGkmu-dxwE9L-jKaGtZ-haQ6j3-61w8UJ-WmitYz-tLymA-dZCHC4-CGvx3R-CC3GPE-BSxzda-eu625R-vHAgnk-cR7WAE-jZiLgu-BsZwLP-fhfvPT-dN1Rf9-o8Mkby-8zDocw-5DvC7S-CEij58-oaw922-akUgeW-ayQiGU-aay1vS-2fVFske-2eoRpCe-rqwa4o-9VJPtv-opgEcq-MDfFe-9yzUaK-4is9Z9-cutXnm-f9U23-L7hpoe-3i3H-enSJKf" title="Perfect mirror"&gt;&lt;img src="https://live.staticflickr.com/3274/2901925672_325f5faeb8.jpg" width="500" height="364" alt="Perfect mirror"&gt;&lt;/a&gt;&lt;script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"&gt;&lt;/script&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you look long enough into the autoencoder, it looks back at you.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The Autoencoder is a fun deep learning model to look into. Its goal is simple: given an input image, we would like to have the same output image.&lt;/p&gt;
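The two halves can be sketched in MXNet symbols. This is a hypothetical sketch; the layer names and sizes are illustrative (784-pixel MNIST images, a 32-dimension latent space):

```clojure
;; Hypothetical sketch of the encoder/decoder pairing in MXNet symbols.
;; Names and sizes are illustrative, not the post's actual model.
(require '[org.apache.clojure-mxnet.symbol :as sym])

(def autoencoder
  (as-> (sym/variable "input") data
    (sym/fully-connected "encoder" {:data data :num-hidden 32})   ;; image down to latent space
    (sym/activation "sigmoid1" {:data data :act-type "sigmoid"})
    (sym/fully-connected "decoder" {:data data :num-hidden 784})  ;; latent space back to image
    (sym/activation "sigmoid2" {:data data :act-type "sigmoid"})))
```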
&lt;p&gt;It&amp;rsquo;s sort of an identity function for deep learning models, but it is composed of two parts: an encoder and a decoder, with the encoder translating the image to a &lt;em&gt;latent space representation&lt;/em&gt; and the decoder translating that back to a regular image that we can view.&lt;/p&gt;</description></item><item><title>Clojure MXNet April Update</title><link>https://gigasquidsoftware.com/blog/2019/04/26/clojure-mxnet-april-update/</link><pubDate>Fri, 26 Apr 2019 15:51:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/04/26/clojure-mxnet-april-update/</guid><description>&lt;p&gt;Spring is bringing some beautiful new things to the &lt;a href="http://mxnet.incubator.apache.org/"&gt;Clojure MXNet&lt;/a&gt; package. Here are some highlights for the month of April.&lt;/p&gt;
&lt;h2 id="shipped"&gt;Shipped&lt;/h2&gt;
&lt;p&gt;We&amp;rsquo;ve merged &lt;a href="https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93&amp;amp;q=is%3Apr+is%3Aclosed+clojure"&gt;10 PRs&lt;/a&gt; over the last month. Many of them focus on core improvements to documentation and usability, which is very important.&lt;/p&gt;
&lt;p&gt;The MXNet project is also preparing a new release, &lt;code&gt;1.4.1&lt;/code&gt;, so be on the lookout for it in the near future.&lt;/p&gt;
&lt;h2 id="clojure-mxnet-made-simple-article-series"&gt;Clojure MXNet Made Simple Article Series&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://arthurcaillau.com/about/"&gt;Arthur Caillau&lt;/a&gt; added another post to his fantastic series - &lt;a href="https://arthurcaillau.com/mxnet-made-simple-pretrained-models/"&gt;MXNet made simple: Pretrained Models for image classification - Inception and VGG&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Clojure MXNet March Update</title><link>https://gigasquidsoftware.com/blog/2019/03/22/clojure-mxnet-march-update/</link><pubDate>Fri, 22 Mar 2019 10:42:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/03/22/clojure-mxnet-march-update/</guid><description>&lt;p&gt;I&amp;rsquo;m starting a monthly update for &lt;a href="http://mxnet.incubator.apache.org/"&gt;Clojure MXNet&lt;/a&gt;. The goal is to share the progress and exciting things that are happening in the project and our community.&lt;/p&gt;
&lt;p&gt;Here are some highlights for the month of March.&lt;/p&gt;
&lt;h2 id="shipped"&gt;Shipped&lt;/h2&gt;
&lt;p&gt;The 1.4.0 release of MXNet has shipped, along with the &lt;a href="https://search.maven.org/search?q=clojure%20mxnet"&gt;Clojure MXNet Jars&lt;/a&gt;. It brings improvements to JVM memory management and a new Image API. You can see the full list of changes &lt;a href="https://github.com/apache/incubator-mxnet/releases/tag/1.4.0#clojure"&gt;here&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Object Detection with Clojure MXNet</title><link>https://gigasquidsoftware.com/blog/2019/01/19/object-detection-with-clojure-mxnet/</link><pubDate>Sat, 19 Jan 2019 13:34:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2019/01/19/object-detection-with-clojure-mxnet/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://c1.staticflickr.com/8/7837/32928474208_4960caafb3.jpg"&gt;&lt;/p&gt;
&lt;p&gt;Object detection just landed in MXNet thanks to the work of contributors &lt;a href="https://github.com/kedarbellare"&gt;Kedar Bellare&lt;/a&gt; and &lt;a href="https://github.com/hellonico/"&gt;Nicolas Modrzyk&lt;/a&gt;. Kedar ported the &lt;code&gt;infer&lt;/code&gt; package to Clojure, making inference and prediction much easier for users, and Nicolas integrated his &lt;a href="https://github.com/hellonico/origami"&gt;Origami&lt;/a&gt; OpenCV library into the examples to make the visualizations happen.&lt;/p&gt;
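The flow can be sketched as follows. This is a hedged sketch: the model path and helper names are illustrative, so check the clojure-package examples for the exact API:

```clojure
;; Hedged sketch of the infer flow: create a detector from a trained
;; SSD model, load an image, run detection. Paths and names are
;; illustrative; consult the clojure-package examples for the exact API.
(require '[org.apache.clojure-mxnet.infer :as infer])

;; 1. create the detector from a model prefix and an input descriptor
(def factory
  (infer/model-factory "models/resnet50_ssd/resnet50_ssd_model"
                       [{:name "data" :shape [1 3 512 512]}]))
(def detector (infer/create-object-detector factory))

;; 2. load an image and run inference, asking for the top 5 detections
(def image (infer/load-image-from-file "images/dog.jpg"))
(infer/detect-objects detector image 5)
```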
&lt;p&gt;We&amp;rsquo;ll walk through the main steps of using &lt;code&gt;infer&lt;/code&gt; for object detection, which include creating the detector with a model and then loading an image and running inference on it.&lt;/p&gt;</description></item><item><title>How to GAN a Flan</title><link>https://gigasquidsoftware.com/blog/2018/12/18/how-to-gan-a-flan/</link><pubDate>Tue, 18 Dec 2018 16:34:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2018/12/18/how-to-gan-a-flan/</guid><description>&lt;p&gt;It&amp;rsquo;s holiday time and that means parties and getting together with friends. Bringing a baked good or dessert to a gathering is a time-honored tradition. But what if this year, you could take it to the next level? Everyone brings actual food. But with the help of Deep Learning, you can bring something completely different - you can bring the &lt;em&gt;image&lt;/em&gt; of a baked good! I&amp;rsquo;m not talking about just any old image that someone captured with a camera or created with a pen and paper. I&amp;rsquo;m talking about the computer itself &lt;strong&gt;creating&lt;/strong&gt;. This image would be never before seen, totally unique, and crafted by the creative process of the machine.&lt;/p&gt;</description></item><item><title>Clojure MXNet - The Module API</title><link>https://gigasquidsoftware.com/blog/2018/07/05/clojure-mxnet-the-module-api/</link><pubDate>Thu, 05 Jul 2018 19:39:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2018/07/05/clojure-mxnet-the-module-api/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://cdn-images-1.medium.com/max/800/1*OoqsrMD7JzXAvRUGx_8_fg.jpeg"&gt;&lt;/p&gt;
&lt;p&gt;This is an introduction to the high level Clojure API for deep learning library &lt;a href="http://mxnet.incubator.apache.org/"&gt;MXNet&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The module API provides an intermediate and high-level interface for performing computation with neural networks in MXNet.&lt;/p&gt;
&lt;p&gt;To follow along with this documentation, you can use this namespace with the needed requires:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-clojure" data-lang="clojure"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;(&lt;span style="color:#66d9ef"&gt;ns &lt;/span&gt;docs.module
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; (&lt;span style="color:#e6db74"&gt;:require&lt;/span&gt; [clojure.java.io &lt;span style="color:#e6db74"&gt;:as&lt;/span&gt; io]
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; [clojure.java.shell &lt;span style="color:#e6db74"&gt;:refer&lt;/span&gt; [sh]]
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; [org.apache.clojure-mxnet.eval-metric &lt;span style="color:#e6db74"&gt;:as&lt;/span&gt; eval-metric]
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; [org.apache.clojure-mxnet.io &lt;span style="color:#e6db74"&gt;:as&lt;/span&gt; mx-io]
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; [org.apache.clojure-mxnet.module &lt;span style="color:#e6db74"&gt;:as&lt;/span&gt; m]
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; [org.apache.clojure-mxnet.symbol &lt;span style="color:#e6db74"&gt;:as&lt;/span&gt; sym]
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt; [org.apache.clojure-mxnet.ndarray &lt;span style="color:#e6db74"&gt;:as&lt;/span&gt; ndarray]))
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="prepare-the-data"&gt;Prepare the Data&lt;/h2&gt;
&lt;p&gt;In this example, we are going to use the MNIST data set. If you have cloned the MXNet repo and run &lt;code&gt;cd contrib/clojure-package&lt;/code&gt;, you can run some helper scripts to download the data.&lt;/p&gt;</description></item><item><title>Clojure MXNet Joins the Apache MXNet Project</title><link>https://gigasquidsoftware.com/blog/2018/07/01/clojure-mxnet-joins-the-apache-mxnet-project/</link><pubDate>Sun, 01 Jul 2018 10:44:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2018/07/01/clojure-mxnet-joins-the-apache-mxnet-project/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://cdn-images-1.medium.com/max/800/1*OoqsrMD7JzXAvRUGx_8_fg.jpeg"&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m delighted to share the news that the Clojure package for &lt;a href="https://mxnet.apache.org/"&gt;MXNet&lt;/a&gt; has now joined the main Apache MXNet project. A big thank you to the efforts of everyone involved to make this possible. Having it as part of the main project is a great place for growth and collaboration that will benefit both MXNet and the Clojure community.&lt;/p&gt;
&lt;h2 id="invitation-to-join-and-contribute"&gt;Invitation to Join and Contribute&lt;/h2&gt;
&lt;p&gt;The Clojure package has been brought in as a &lt;em&gt;contrib&lt;/em&gt; &lt;a href="https://github.com/apache/incubator-mxnet/tree/master/contrib/clojure-package"&gt;clojure-package&lt;/a&gt;. It is still very new and will go through a period of feedback, stabilization, and improvement before it graduates out of contrib.&lt;/p&gt;</description></item><item><title>Meet Clojure MXNet - NDArray</title><link>https://gigasquidsoftware.com/blog/2018/06/03/meet-clojure-mxnet-ndarray/</link><pubDate>Sun, 03 Jun 2018 16:13:00 +0000</pubDate><guid>https://gigasquidsoftware.com/blog/2018/06/03/meet-clojure-mxnet-ndarray/</guid><description>&lt;p&gt;&lt;img loading="lazy" src="https://cdn-images-1.medium.com/max/800/1*OoqsrMD7JzXAvRUGx_8_fg.jpeg"&gt;&lt;/p&gt;
&lt;p&gt;This is the beginning of a series of blog posts to get to know the &lt;a href="https://mxnet.apache.org/"&gt;Apache MXNet&lt;/a&gt; Deep Learning project and its new Clojure language binding, the &lt;a href="https://github.com/apache/incubator-mxnet/tree/master/contrib/clojure-package"&gt;clojure-package&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;MXNet is a modern, first-class deep learning library that AWS has officially adopted as its deep learning framework of choice. It supports multiple languages on a first-class basis and is incubating as an Apache project.&lt;/p&gt;
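Here is a quick taste of what working with NDArrays looks like. A sketch, assuming the clojure-package dependency is on the classpath:

```clojure
;; A quick taste of the NDArray API (a sketch; assumes the
;; clojure-package dependency is on the classpath).
(require '[org.apache.clojure-mxnet.ndarray :as ndarray])

(def a (ndarray/ones [2 3]))   ;; 2x3 array filled with 1.0
(def b (ndarray/+ a a))        ;; elementwise addition

;; pull the values back into a plain Clojure vector
(ndarray/->vec b)              ;; => [2.0 2.0 2.0 2.0 2.0 2.0]
```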
&lt;p&gt;The motivation for creating a Clojure package is to open the deep learning library to the Clojure ecosystem and build bridges for future development and innovation for the community. It provides all the needed tools, including low-level and high-level APIs, dynamic graphs, and support for things like GANs and natural language processing.&lt;/p&gt;</description></item></channel></rss>