Friday, December 18, 2020

Deep Learning From Java and Scala

Deep learning has been dominated by Python for years. It has been much harder to do deep learning on the JVM, but recently there have been some improvements. Here is a brief comparison of popular options going into 2021.

  • Deeplearning4j
  • DJL, Deep Java Library
  • MXNet Java and Scala bindings
  • PyTorch Java bindings
  • TensorFlow Java bindings
  • TensorFlow Scala

Bindings and Portability

Python is great for data exploration and for building models. Direct JVM access to deep learning is great for development and for deployment to servers or Spark; it is easier than setting up a Python microservice.

Only Deeplearning4j is native to the JVM; the others are wrappers around C++ code. Java bindings to C++ or Fortran code are less portable than pure Java code. A big issue with these libraries is how well they package up the C++ library for use from Java. Do they have precompiled jar files for your platform, or do you need to run install scripts? Are those install scripts well documented and maintained?
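For example, a library that ships precompiled native binaries typically selects them with a platform classifier in the build file. This sbt line is a sketch using TensorFlow Scala's coordinates; the version and classifier names are illustrative, so check the project's README:

// sbt: pick the precompiled native binary matching your platform
libraryDependencies += "org.platanios" %% "tensorflow" % "0.4.1" classifier "linux-cpu-x86_64"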


Deeplearning4j

GitHub Stars: 12k

Deeplearning4j is the only native Java deep learning library, giving it a conceptual and portability advantage. It is a little verbose, as Java often is.

It is active and popular, but less popular than the dominant PyTorch and TensorFlow.


DJL, Deep Java Library

Sponsor: Amazon

GitHub Stars: 1.5k

DJL is new and very active. It wraps other libraries: MXNet, ONNX, PyTorch and TensorFlow. It has good documentation.

DJL has a very high abstraction level. DJL can load models from model zoos backed by different underlying libraries. If you just want to take a trained model and run it in production, you can pick and choose models written for different libraries from the same code.

On the other hand, if you are training your model, then having an extra abstraction layer around classes makes it harder to build and train a model.

DJL object detection using model zoo
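Here is a minimal sketch of that, assuming DJL's Criteria and ModelZoo APIs; the image URL is a placeholder and error handling is omitted:

import ai.djl.Application
import ai.djl.modality.cv.{Image, ImageFactory}
import ai.djl.modality.cv.output.DetectedObjects
import ai.djl.repository.zoo.{Criteria, ModelZoo}

object DjlObjectDetection {
  def main(args: Array[String]): Unit = {
    // Ask the model zoo for any object detection model
    val criteria = Criteria.builder()
      .optApplication(Application.CV.OBJECT_DETECTION)
      .setTypes(classOf[Image], classOf[DetectedObjects])
      .build()

    val model = ModelZoo.loadModel(criteria)
    val predictor = model.newPredictor()

    // Placeholder image URL
    val img = ImageFactory.getInstance().fromUrl("https://example.com/dog.jpg")
    val detections: DetectedObjects = predictor.predict(img)
    println(detections)

    predictor.close()
    model.close()
  }
}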


MXNet Java and Scala Bindings

Sponsors: Amazon and Microsoft

GitHub Stars: 19k

MXNet supports a lot of languages, and there is good documentation for each of them.

It is active and popular, but less popular than the dominant PyTorch and TensorFlow.

 

PyTorch Java Bindings

Sponsor: Facebook

GitHub Stars: 50k

PyTorch is the second most popular deep learning framework. It has changed less than TensorFlow. It represents models as a dynamic computation graph, which is easy to program, especially for complex dynamic models.

The PyTorch Java bindings are part of PyTorch; the install is platform specific and requires several steps. There is an example project, but it doesn't seem very active. There is quite a bit of documentation about Android Java development.


TensorFlow Java Bindings

Sponsor: Google

GitHub Stars: 152k

TensorFlow is the most popular deep learning framework, with a giant ecosystem. TensorFlow 1.x had a steep learning curve, and the framework has gone through many changes, making it a moving target for Python and Java programmers.

TensorFlow Java is part of the TensorFlow project. It has native dependencies for Linux, macOS and Windows packaged up in jar files and installs cleanly on those platforms. It is unclear how popular it is; a lot of the documentation refers to the legacy Java bindings, and there is little documentation about the new Java bindings, whose repository has only 0.2k GitHub stars.

Understanding TensorFlow's architecture, with graphs and sessions, is important for using the Java bindings. Here is a lecture explaining it.
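To make graphs and sessions concrete, here is the classic TensorFlow hello world written against the legacy 1.x Java bindings, called from Scala; a sketch, assuming the org.tensorflow 1.x artifact:

import org.tensorflow.{Graph, Session, Tensor}

object HelloTensorFlow {
  def main(args: Array[String]): Unit = {
    val graph = new Graph()
    // Add a single constant node holding a string to the graph
    val value = "Hello from TensorFlow".getBytes("UTF-8")
    val tensor = Tensor.create(value)
    graph.opBuilder("Const", "greeting")
      .setAttr("dtype", tensor.dataType())
      .setAttr("value", tensor)
      .build()

    // Run the graph in a session and fetch the constant back
    val session = new Session(graph)
    val result = session.runner().fetch("greeting").run().get(0)
    println(new String(result.bytesValue(), "UTF-8"))

    session.close()
    tensor.close()
    graph.close()
  }
}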



TensorFlow Scala 

GitHub Stars: 0.8k

TensorFlow Scala is a low-level, idiomatic wrapper around TensorFlow. It has a lot of high-quality Scala code and is actively developed.

TensorFlow represents its computation graph with Protobuf, which makes it fairly language agnostic. TensorFlow Scala builds an idiomatic abstraction on top of that.

The TensorFlow Scala code keeps up with TensorFlow versions. There are precompiled binary jar files for Linux, macOS and Windows. Documentation is sparse, so be prepared to read the source code.
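As a quick taste of the API, here is a minimal sketch that builds and runs a small graph, assuming the org.platanios imports; exact details may vary between releases:

import org.platanios.tensorflow.api._

object TensorFlowScalaDemo {
  def main(args: Array[String]): Unit = {
    // Build a small graph: add two constant vectors
    val a = tf.constant(Tensor(1.0f, 2.0f, 3.0f))
    val b = tf.constant(Tensor(4.0f, 5.0f, 6.0f))
    val sum = a + b

    // Run the graph in a session and fetch the result
    val session = Session()
    val result = session.run(fetches = sum)
    println(result.summarize())
    session.close()
  }
}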


Conclusion

There have been big improvements to deep learning from JVM languages like Java, Kotlin and Scala, but the quality is still substantially below the C++ / Python versions. The documentation is spotty, and the bindings often lag behind the C++ / Python libraries. Still, the bindings should be good enough to run ML in production code.


Test / Starter Projects

Here are the Scala test projects I used to check whether the bindings were working and cross platform. It took some experimenting to get them to work.


DJL, Deep Java Library

https://github.com/sami-badawi/scaladl

Object detection calling the TensorFlow model zoo threw an exception.


PyTorch Java bindings

https://github.com/sami-badawi/java-demo


TensorFlow Java bindings

https://github.com/sami-badawi/tensorzoo

The Java bindings use Java generics, which are pretty different from Scala generics.


TensorFlow Scala

https://github.com/sami-badawi/tf_scala_ex


Disclaimer

Apologies for omissions and open to corrections.


Friday, April 10, 2020

How Many Languages Should You Program In

I love programming languages. Some would say that I am a language addict. I have programmed in a lot of languages, and written blog posts promoting their use.

What is a reasonable number of languages to program in?

For production code my answer is almost always:

Less is more

There is a tricky balance between innovation and stability in software engineering. This post has a few metrics and some hand-wavy advice on language use.


Too Many Languages


Projects using a lot of languages are the worst. Their lack of discipline makes them hard to understand and maintain. You have a deadline, but you keep getting dragged into rabbit holes. Their main benefit is resume building. Often projects with many languages also have:

  • Several different NoSQL and SQL databases
  • Every web or Microsoft framework that was cool at some point
  • Every service on AWS

Return on Investment


A good metric for whether you should add a new language to your project is the ROI, return on investment. Learning a new language is usually pretty easy, but learning the build system and the ecosystem is a lot harder. We have good connectivity from most languages to SQL databases, but getting more languages working closely together is tedious. You need a strong value proposition to add a new language.


Language Specialization


The best reason to use several languages is specialization: some tasks force you to use a given language category.

  • Statically compiled back-end language
  • Scripting language
  • Front-end language
  • ML / numeric language
  • Non-garbage-collected systems language

Often the libraries for a given domain are written in one or a few languages. For instance, computer vision libraries are written in C++ or Python, so you are forced to use those languages.

Redundancy


Among scripting languages my preference is Python, but I will happily use Groovy, Perl and Ruby. Using several similar scripting languages on the same project feels messy.

Using several languages inside one ecosystem, say Java and Scala or C# and F#, causes less friction.

Language Tool-belt


I have to be fluent in a few languages for work, and I have limited capacity, but once a year I will try out a new language for a while and see if it has staying power. Most of them don't, but it keeps my skill set up to date. When my boss asks me to spend a couple of days updating an old throwaway React project, I get a running start.


Metrics from A.I. and ML


These two concepts from artificial intelligence are relevant to the adoption of programming languages and the evolution of long-lived software systems:
  • Learning rate
  • Multi-armed bandit algorithm

The learning rate is how much you change the weights of your neural network after each training step. If you choose a high value, your neural network jumps around erratically and doesn't learn; if the learning rate is too low, it moves too slowly and doesn't learn either. More sophisticated algorithms like Adam start with a high learning rate that gets smaller as the system trains.
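In equation form, plain gradient descent updates the weights w with learning rate \eta by stepping against the gradient of the loss L:

    w_{t+1} = w_t - \eta \nabla L(w_t)

Too large an \eta and w_t jumps around the minimum; too small and convergence crawls.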

A multi-armed bandit algorithm can choose which stories to show on the front page of a news site. The gist is that you mostly show popular stories, but you give a percentage of the space to new stories so they have a chance to become the popular ones.
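Here is a minimal epsilon-greedy sketch of that idea in Scala; the story names and click rates are made up:

import scala.util.Random

object FrontPage {
  val epsilon = 0.1 // fraction of traffic given to exploration
  val clickRate = Map(
    "popular story" -> 0.30,
    "new story A" -> 0.05,
    "new story B" -> 0.02)

  // Mostly exploit the best known story, sometimes explore a random one
  def pickStory(): String =
    if (Random.nextDouble() < epsilon)
      Random.shuffle(clickRate.keys.toList).head // explore
    else
      clickRate.maxBy(_._2)._1 // exploit
}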

I am in the flow when I use 10% of my time learning and 90% working.


To Add or Not to Add


Learning a new language is fun. It teaches you new ways to think.

If you want to add a new language to an established project, you should be familiar with both the language and its ecosystem, and you should expect a substantial productivity or performance gain before it is worth the overhead.

Otherwise, if you want to get serious about a new language, do open source work or use it on smaller projects.


Saturday, March 21, 2020

Haskell IDE 2020

Haskell tooling has improved, but getting an IDE-like setup is still tricky. It took me some trial and error to find a good Haskell environment. I tried five modern projects implementing IDE functionality for Haskell:

  • Intero
  • haskell-ide-engine (HIE)
  • haskell-language-server
  • Spacemacs Haskell Layer
  • SpaceVim Haskell Layer


Intero


I had a good experience combining Intero with the Haskero VS Code plugin. It is not great, but I got it to work with syntax highlighting, code completion and goto definition.



Intero is based on a fork of the GHC compiler. A downside is that Intero is no longer maintained, but it works up to GHC 8.6, the second to last version of the GHC compiler.

Intero Installation


  • Install Stack
  • Install Intero using Stack
  • Install the Haskero VS Code plugin
  • Create a project that is using GHC 8.6
  • Open VS Code in the project


Creating a New Project


# Install Intero and put Stack's bin directory on the PATH
stack install intero
export PATH=$PATH:~/.local/bin/

# lts-14.27 resolves to GHC 8.6.5, the last version Intero supports
stack new myproject --resolver lts-14.27
cd myproject

# Open the project in VS Code
code .


haskell-ide-engine (HIE)


haskell-ide-engine is currently the most advanced IDE project for Haskell. It uses LSP, the Language Server Protocol, which originated with VS Code. HIE should work with any editor supporting LSP.


HIE with VS Code


Here is a post about getting HIE working with VS Code on the Mac. It kept crashing on me, but recently it has been more stable. Adding a hie.yaml file sometimes helps.
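For a Stack-based project, a minimal hie.yaml telling hie-bios to use the Stack cradle looks like this (a sketch; see the hie-bios documentation for other cradle types):

cradle:
  stack: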




HIE with Neovim


HIE works with Neovim without too much work. Here is what I did:



Install HIE


git clone https://github.com/haskell/haskell-ide-engine --recursive
cd haskell-ide-engine
stack ./install.hs hie-8.6.5


Install Neovim with LSP Support


I used the Neovim 0.5 beta with built-in LSP, the Language Server Protocol.

You can also do:
brew install neovim

and install an LSP client plugin such as vim-lsp or coc.nvim.


Configure Neovim to Work with HIE


Add the following to your config file, ~/.config/nvim/init.vim:

call plug#begin('~/.vim/plugged')
Plug 'scrooloose/nerdtree', { 'on': 'NERDTreeToggle' }
Plug 'autozimu/LanguageClient-neovim', {
      \ 'branch': 'next',
      \ 'do': './install.sh'
      \ }
call plug#end()

let g:LanguageClient_serverCommands = { 'haskell': ['hie-wrapper', '--lsp'] }

nnoremap <F5> :call LanguageClient_contextMenu()<CR>
" Or map each action separately
nnoremap <silent> K :call LanguageClient#textDocument_hover()<CR>
nnoremap <silent> gd :call LanguageClient#textDocument_definition()<CR>
nnoremap <silent> <F2> :call LanguageClient#textDocument_rename()<CR>


Retro with Neovim


Neovim is more complicated than I like an editor to be. However, with LSP integration, Vim and Neovim provide power that justifies the small learning curve.

Programming Haskell in Neovim brings me back to computing in the 1980s: before we had GUIs, there were still very powerful development environments running in very little memory.


haskell-language-server


The long awaited haskell-language-server is starting to work. I got it to work for simple GHC 8.6 and GHC 8.8 projects. It looks good and is full featured when it works.



Install haskell-language-server


export PATH=$PATH:~/.local/bin
git clone https://github.com/haskell/haskell-language-server --recurse-submodules
cd haskell-language-server
stack ./install.hs help
stack ./install.hs hls
stack ./install.hs data
stack ./install.hs hls-8.6.5


VS Code setting


Integration with VS Code still seems immature.

Problems with Stack and manually edited cabal file

I am using Stack as my build tool, but I also had a manually edited cabal file. When I deleted my cabal file and generated it from package.yaml, things worked better.


Spacemacs Haskell Layer


I had a good experience using the Spacemacs Haskell layer.





Install a newer Emacs and install Spacemacs. Press the following four keys to get to the config file:
"space" f e d

You should add haskell to the list of layers. Here is my layers list:

   dotspacemacs-configuration-layers
   '(
     html
     yaml
     helm
     auto-completion
     emacs-lisp
     git
     haskell
     markdown
     org
     python
     spell-checking
     )

There are a few Haskell packages that need to be installed. You can try this:

export PATH=$PATH:~/.local/bin/
stack new myproject --resolver lts-14.27
cd myproject
stack install apply-refact hlint hasktags hoogle

git clone git@github.com:jaspervdj/stylish-haskell.git
cd stylish-haskell
stack install

Doing the install under a project will make it reuse the resolver for that project.

When I did my install, stylish-haskell had a version conflict, so I had to do a git clone of stylish-haskell and install from there instead.


SpaceVim Haskell Layer


It took a little work to get SpaceVim installed on Windows. First I installed Neovim with Scoop:

scoop install neovim

SpaceVim is a configuration framework for Vim and Neovim. The main idea in SpaceVim is that you hit the space bar and it shows you what options you have.

The Haskell Layer worked quite well and looked good. I used the new Windows Terminal with split screen and a stack build loop in the other pane.


Configure Neovim / SpaceVim


Installing the SpaceVim Haskell layer was very easy. Just add these two lines to ~/.SpaceVim/init.toml:

[[layers]]
  name = "lang#haskell"



OS for testing


Libraries should generally be cross platform. This is what I tested on.

OS X and Windows 10

Intero and SpaceVim Haskell layer.

OS X

haskell-ide-engine and Spacemacs Haskell layer.

They should probably also work on Linux, WSL, etc.


Conclusion


Haskell already has an intimidating learning curve. With immature tooling, Haskell is a language for language researchers and diehard hackers.

Haskell tooling has gotten much better, but I am spoiled and I prefer to work in an IDE-like environment.

Haskell does not have a first class IDE like IntelliJ for Java, but all of these projects can provide a pleasant development environment. They are not super stable, and I find myself going back and forth between them depending on the project.

Haskell is now ready for casual users to explore a pure functional language and see if they find mathematical enlightenment.

Saturday, February 22, 2020

Haskell and Hadoop the Aftermath

In 2012 Haskell and Hadoop were the hottest technologies. They had a lot of hype and I loved them. Both were based on functional programming and built on towering abstractions.

Elite functional programmers used Haskell. Serious tech startups had to use big data, meaning Hadoop. Three years later I had learned Haskell and Hadoop and my top advice to startups was:

Don't use Haskell or Hadoop!

They won't give you a competitive advantage; they will just slow you down.

That was my personal experience. For years after that I avoided jobs involving Hadoop, but for the last couple of years I have mainly been working on Hadoop with Spark. It is now solid and very productive.

I found the productivity increase quite remarkable. Some of it is a textbook example of the technology life cycle, but some of it comes down to understanding the power and limitations of functional programming.


Modern Programming Paradigms


There are three main modern programming paradigms:
  • Object oriented
  • Functional
  • Declarative

Object oriented programming gives you fine-grained control. Functional programming uses transformations, with less control. In declarative programming you just write queries, and you have little control. The higher the abstraction, the less control.
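A toy illustration in Scala, summing squares at the three abstraction levels; the Spark SQL line assumes a registered numbers table:

// Imperative / object oriented: fine-grained control over every step
var total = 0
for (i <- 1 to 10) total += i * i

// Functional: describe the transformation, give up control of the loop
val total2 = (1 to 10).map(i => i * i).sum

// Declarative: just state the query, the engine decides how to run it
// spark.sql("SELECT SUM(i * i) FROM numbers")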


Essential Hadoop



The breakthrough of Hadoop / MapReduce was that by using functional programming transformations you could distribute a computation over thousands of computers in a fault-tolerant way. This was a monumental achievement, but what made Hadoop the dominant data platform it is today was that it later combined functional programming with the declarative programming available in Spark SQL, Hive or Pig.

Combining functional and declarative programming was once a holy grail in computing, but nobody knew how to do it. Today it is ubiquitous and free, until you get the bill from your cloud provider.
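Here is a small sketch of that combination in Scala with Spark: a functional transformation followed by a declarative SQL query over the same data. The input path is a placeholder:

import org.apache.spark.sql.SparkSession

object FunctionalPlusDeclarative {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Functional: distributed transformations over the dataset
    val words = spark.read.textFile("input.txt").flatMap(_.split("\\s+"))

    // Declarative: the same data queried with SQL
    words.toDF("word").createOrReplaceTempView("words")
    spark.sql("SELECT word, COUNT(*) AS n FROM words GROUP BY word ORDER BY n DESC").show(10)

    spark.stop()
  }
}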



Essential Haskell



I expected Haskell to be a mathematical version of Python. It was not. Trying to do object oriented programming in Haskell will cause you a lot of pain, unlike in hybrid languages like F#, OCaml or Scala.

The power of Haskell is that it limits you to a small set of basic operations that compose. This lets you build a big machine out of simple parts. Lazy evaluation makes it natural to work on infinite streams of data. The powerful type system makes it possible to connect small pieces of code along many different dimensions. My metaphor is:

Haskell is an extra-dimensional Lego set

Haskell started as a playground for experimenting language researchers. I wanted to play with all the shiny theoretical toys. That was a big time sink and part of the reason it took me a long time to learn the language.


Common Problems


One reason I gave up on both Haskell and Hadoop was that it was hard to get things done. Both were beautiful abstractions built on a tower of unstable software libraries. Everything was evolving quickly, which made it hard to keep the libraries underneath on compatible versions. Every time your Hadoop distribution was updated, your code would break.

In Haskell this problem was called Cabal Hell, after the build system Cabal. Solutions eventually arrived: Haskell now has curated sets of library versions that work with each other and a modern build system called Stack. Tooling in both Haskell and Hadoop is now quite good.


The Aftermath


I spent more time and effort learning Haskell and Hadoop than any other technologies. With that much effort I expected them to give me superpowers. Instead they slowed me down. This caused a backlash: I felt naive for jumping on the Haskell and Hadoop bandwagon and wasting so much time.

Now, 8 years later, the dust has settled, and part of my problem was that I was an early adopter of immature technologies. Haskell and Hadoop are now mature but inherently complex technologies. They draw their power from giving up fine-grained control; instead they let you build machines that you can pipe data through.

Big data in the case of Hadoop. Infinite data in the case of Haskell.

Hadoop is highly successful and is now a cornerstone of data engineering, even though it currently stands in the shadow of Spark, which was built to run on top of Hadoop infrastructure.

Haskell is a practical programming language well suited for constructive mathematics and category theory, but it is not a better version of Scala. It is fairly successful, at number 19 on the RedMonk programming language ranking, and is used in industry.