Friday, December 18, 2020

Deep Learning From Java and Scala

Deep learning has been dominated by Python for years. It has been much harder to do deep learning on the JVM, but recently there have been some improvements. Here is a brief comparison of popular options going into 2021.

  • Deeplearning4j
  • DJL, Deep Java Library
  • MXNet Java and Scala bindings
  • PyTorch Java bindings
  • TensorFlow Java bindings
  • TensorFlow Scala

Bindings and Portability

Python is great for data exploration and for building models. Direct JVM access to deep learning is great for development and for deployment to servers or Spark, and it is easier than setting up a Python microservice.

Only Deeplearning4j is native to the JVM; the others are wrappers around C++ code. Java bindings to C++ or Fortran code are less portable than normal Java code. A big issue with these libraries is how well they package up the C++ library for use from Java. Do they have pre-compiled jar files for your platform, or do you need to run install scripts? Are those install scripts well documented and maintained?


Deeplearning4j

GitHub Stars: 12k

Deeplearning4j is the only native Java deep learning library, giving it a conceptual and portability advantage. It is a little verbose, as Java often is.

Active and popular but less popular than the domineering PyTorch and TensorFlow.


DJL, Deep Java Library

Sponsor: Amazon

GitHub Stars: 1.5k

DJL is new and very active. It wraps other libraries: MXNet, ONNX Runtime, PyTorch and TensorFlow. It has good documentation.

DJL has a very high abstraction level. DJL can load models from a model zoo for each of the underlying libraries. If you just want to take a trained model and run it in production, you can pick and choose models written for different libraries from the same code.

On the other hand, if you are training your model, then having an extra abstraction layer around classes makes it harder to build and train a model.

DJL object detection using model zoo
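
Here is a minimal Scala sketch of what that can look like, assuming DJL's Criteria / ModelZoo API from the 0.x releases; the image file name is made up and exact signatures may differ between versions:

import ai.djl.Application
import ai.djl.modality.cv.{Image, ImageFactory}
import ai.djl.modality.cv.output.DetectedObjects
import ai.djl.repository.zoo.{Criteria, ModelZoo}
import java.nio.file.Paths

object DetectDemo {
  def main(args: Array[String]): Unit = {
    // Describe the kind of model we want; DJL picks a match from the zoo
    val criteria = Criteria.builder()
      .optApplication(Application.CV.OBJECT_DETECTION)
      .setTypes(classOf[Image], classOf[DetectedObjects])
      .build()
    val model = ModelZoo.loadModel(criteria) // downloads the model on first use
    val predictor = model.newPredictor()
    val image = ImageFactory.getInstance().fromFile(Paths.get("dog.jpg")) // made-up input
    println(predictor.predict(image)) // detected classes with bounding boxes
  }
}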


MXNet Java and Scala Bindings

Sponsors: Amazon and Microsoft

GitHub Stars: 19k

MXNet supports a lot of languages, and there is good documentation for each of them.

Active and popular but less popular than the domineering PyTorch and TensorFlow.

 

PyTorch Java Bindings

Sponsor: Facebook

GitHub Stars: 50k

PyTorch is the second most popular deep learning framework. It has changed less than TensorFlow. It represents models as a dynamic computation graph, which is easy to program especially for complex dynamic models.

The PyTorch Java bindings are part of PyTorch; the install is platform specific and requires several steps. There is an example project, but it doesn't seem very active. There is quite a bit of documentation about Android Java development.
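
For flavor, here is a rough Scala sketch of running a TorchScript model through the org.pytorch bindings; the model path is hypothetical and the input is a dummy tensor:

import org.pytorch.{IValue, Module, Tensor}

object TorchDemo {
  def main(args: Array[String]): Unit = {
    // Load a TorchScript model exported from Python with torch.jit.trace/script
    val module = Module.load("model.pt") // hypothetical path
    val data = Array.fill[Float](1 * 3 * 224 * 224)(0f) // dummy image-shaped input
    val input = Tensor.fromBlob(data, Array(1L, 3L, 224L, 224L))
    val output = module.forward(IValue.from(input)).toTensor
    println(output.shape().mkString("x")) // e.g. 1x1000 for a classifier
  }
}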


TensorFlow Java Bindings

Sponsor: Google

GitHub Stars: 152k

TensorFlow is the most popular deep learning framework, with a giant ecosystem. TensorFlow v1.x had a steep learning curve, and the framework has gone through many changes, making it a moving target for Python / Java programmers.

TensorFlow Java is part of the TensorFlow project. It has dependencies for Linux, macOS and Windows packaged up in a jar file and installs cleanly on those platforms. It is unclear how popular it is; a lot of the documentation refers to the legacy Java bindings, and there is little documentation about the new Java bindings, whose repository has only 0.2k GitHub stars.

Understanding TensorFlow's architecture with graphs and sessions is important for Java bindings. Here is a lecture explaining it.
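
To give a flavor of graphs and sessions, here is a minimal hello-world sketch in Scala against the legacy org.tensorflow 1.x API (the new Java bindings look different):

import org.tensorflow.{Graph, Session, Tensor, TensorFlow}

object TfHello {
  def main(args: Array[String]): Unit = {
    val graph = new Graph()
    // Build a constant node in the graph, then execute it in a session
    val value = s"Hello from TensorFlow ${TensorFlow.version()}"
    val tensor = Tensor.create(value.getBytes("UTF-8"))
    graph.opBuilder("Const", "MyConst")
      .setAttr("dtype", tensor.dataType())
      .setAttr("value", tensor)
      .build()
    val session = new Session(graph)
    val output = session.runner().fetch("MyConst").run().get(0)
    println(new String(output.bytesValue(), "UTF-8"))
  }
}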



TensorFlow Scala 

GitHub Stars: 0.8k

TensorFlow Scala is a low-level idiomatic wrapper around TensorFlow. It has a lot of high-quality Scala code and is actively developed. 

TensorFlow represents its computation graph with Protobuf, which makes it more language agnostic. TensorFlow Scala builds an idiomatic abstraction on top of that.
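
A sketch of what that idiomatic layer looks like, adapted from the project's README (exact signatures vary between releases):

import org.platanios.tensorflow.api._

object LinearSketch {
  // A tiny linear model in TensorFlow Scala's typed graph API
  val inputs  = tf.placeholder[Float](Shape(-1, 10))
  val targets = tf.placeholder[Float](Shape(-1, 1))
  val weights = tf.variable[Float]("weights", Shape(10, 1), tf.ZerosInitializer)
  val predictions = tf.matmul(inputs, weights)
  val loss    = tf.sum(tf.square(predictions - targets))
  val trainOp = tf.train.AdaGrad(1.0f).minimize(loss)
}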

TensorFlow Scala keeps up with TensorFlow versions. There are precompiled binary jar files for Linux, Mac and Windows. Documentation is sparse, so be prepared to read source code.


Conclusion

There have been big improvements to deep learning from JVM languages like Java, Kotlin and Scala, but the quality is still substantially below the C++ / Python versions. The documentation is spotty, and the bindings often lag behind the C++ / Python libraries. But they should be good enough to run ML in production code.


Test / Starter Projects

Here are the Scala test projects I used to check if the bindings were working and cross platform. It took some experimenting to get these to work.


DJL, Deep Java Library

https://github.com/sami-badawi/scaladl

Object detection calling the TensorFlow model zoo threw an exception.


PyTorch Java bindings

https://github.com/sami-badawi/java-demo


TensorFlow Java bindings

https://github.com/sami-badawi/tensorzoo

The Java bindings use Java generics, which are pretty different from Scala generics.


TensorFlow Scala

https://github.com/sami-badawi/tf_scala_ex


Disclaimer

Apologies for omissions; I am open to corrections.


Friday, April 10, 2020

How Many Languages Should You Program In

I love programming languages. Some would say that I am a language addict. I have programmed in a lot of languages, and written blog posts promoting their use.

What is a reasonable number of languages to program in?

For production code my answer is almost always:

Less is more

There is a tricky balance between innovation and stability in software engineering. This post has a few metrics and some hand-wavy advice on language use.


Too Many Languages


Projects using a lot of languages are the worst. Their lack of discipline makes them hard to understand and maintain. You have a deadline, but you keep getting dragged into rabbit holes. Their main benefit is resume building. Often projects with many languages also have:

  • Several different NoSQL and SQL databases
  • Every web or Microsoft framework that was cool at some point
  • Every service on AWS

Return on Investment


A good metric for whether you should add a new language to your project is the ROI, return on investment. Learning a new language is usually pretty easy, but learning the build system and the ecosystem is a lot harder. We have good connectivity from most languages to SQL databases, but getting more languages working closely together is tedious. You need a strong value proposition to add a new language.


Language Specialization


The best reason to use several languages is that you are forced to use languages from different categories:

  • Statically compiled back-end language
  • Scripting language
  • Front-end language 
  • ML / Numeric language
  • Non-garbage-collected systems language

Often the libraries for a given domain are written in one or a few languages. For instance, computer vision libraries are written in C++ or Python; then you are forced to use those.

Redundancy


Among scripting languages my preference is Python, but I will happily use Groovy, Perl or Ruby. Using several similar scripting languages on the same project feels messy.

Using several languages inside one ecosystem, say Java and Scala or C# and F#, causes less friction.

Language Tool-belt


I have to be fluent in a few languages for work and I have limited capacity, but once a year I will try out a new language for a while and see if it has staying power. Most of them don't, but it keeps my skill set up to date. When my boss asks me to spend a couple of days updating an old throwaway React project, I get a running start.


Metrics from A.I. and ML


These two concepts from artificial intelligence are relevant to the adoption of programming languages and evolution of long lived software systems.
  • Learning rate
  • Multi armed bandit algorithm
Learning rate is how fast you change the weights of your neural network after each training run. If you choose a high value, your neural network jumps around erratically and doesn't learn; if the learning rate is too low, it moves too slowly and doesn't learn much either. More sophisticated algorithms like Adam start with a high learning rate and shrink it as the system trains.

The multi-armed bandit algorithm is for choosing which stories to show on the front page of a news site. The gist is that you mostly show popular stories, but you give a percentage of the space to new stories so they have a chance to become the popular stories.

I am in the flow when I use 10% of my time learning and 90% working.


To Add or Not to Add


Learning a new language is fun. It teaches you new ways to think.

If you want to add a new language to an established project, you should be familiar with both the language and its ecosystem, and expect a substantial productivity or performance gain before it is worth the overhead.

Otherwise, if you want to get serious with a new language, do open source work or use it on smaller projects.


Saturday, March 21, 2020

Haskell IDE 2020

Haskell tooling has improved, but getting an IDE-like setup is still tricky. It took me some trial and error to find a good Haskell environment. I tried 5 modern libraries implementing IDE functionality for Haskell:

  • Intero
  • haskell-ide-engine (HIE)
  • haskell-language-server
  • Spacemacs Haskell Layer
  • SpaceVim Haskell Layer


Intero


I had a good experience combining Intero and the Haskero VS Code plugin. It is not great, but I got it to work with syntax highlighting, code completion and goto definition.



Intero is based on a fork of the GHC compiler, and a downside is that Intero is no longer maintained, but it works up to GHC 8.6, the second to last version of the GHC compiler.

Intero Installation


  • Install Stack
  • Install Intero using Stack
  • Install the Haskero VS Code plugin
  • Create a project that is using GHC 8.6
  • Open VS Code in the project


Creating New Project


stack install intero
export PATH=$PATH:~/.local/bin/
stack new myproject --resolver lts-14.27
cd myproject
code .


haskell-ide-engine (HIE)


haskell-ide-engine is currently the most advanced IDE project for Haskell. It uses LSP, the language server protocol that originated with VS Code, so HIE should work with any editor supporting LSP.


HIE with VS Code


Here is a post about getting HIE working with VS Code on the Mac. It kept crashing on me, but recently it has been more stable. Adding a hie.yaml file sometimes helps.
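
For a Stack-based project, a minimal hie.yaml can be as small as this sketch, which just points hie-bios at the Stack cradle:

# hie.yaml: tell hie-bios / HIE to use Stack for this project
cradle:
  stack: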




HIE with Neovim


HIE works with Neovim without too much work. Here is what I did:



Install HIE


git clone https://github.com/haskell/haskell-ide-engine --recursive
cd haskell-ide-engine
stack ./install.hs hie-8.6.5


Install Neovim with LSP Support


I used the Neovim 0.5 beta with built-in LSP, the language server protocol.

You can also do:
brew install neovim

and install an LSP client plugin such as vim-lsp or coc.nvim.


Configure Neovim to Work with HIE


Add the following to your config file:
~/.config/nvim/init.vim

call plug#begin('~/.vim/plugged')
Plug 'scrooloose/nerdtree', { 'on': 'NERDTreeToggle' }
Plug 'autozimu/LanguageClient-neovim', {
      \ 'branch': 'next',
      \ 'do': './install.sh'
      \ }
call plug#end()

let g:LanguageClient_serverCommands = { 'haskell': ['hie-wrapper', '--lsp'] }

nnoremap <F5> :call LanguageClient_contextMenu()<CR>
" Or map each action separately
nnoremap <silent> K :call LanguageClient#textDocument_hover()<CR>
nnoremap <silent> gd :call LanguageClient#textDocument_definition()<CR>
nnoremap <silent> <F2> :call LanguageClient#textDocument_rename()<CR>


Retro with Neovim


Neovim is more complicated than I like an editor to be. However, with LSP integration Vim and Neovim provide power that justifies a small learning curve.

Programming Haskell in Neovim brings me back to computing in the 1980s: before we had GUIs, there were still very powerful development environments running in very little memory.


haskell-language-server


The long awaited haskell-language-server is starting to work. I got it to work for a simple GHC 8.6 project and a GHC 8.8 project. It looks good and is full featured when it works.



Install haskell-language-server


export PATH=$PATH:~/.local/bin
git clone https://github.com/haskell/haskell-language-server --recurse-submodules
cd haskell-language-server
stack ./install.hs help
stack ./install.hs hls
stack ./install.hs data
stack ./install.hs hls-8.6.5


VS Code setting


Integration with VS Code still seems immature.

Problems with Stack and a manually edited cabal file

I am using Stack as my build tool, but I also had a manually edited cabal file. When I deleted my cabal file and generated it from package.yaml, it worked better.


Spacemacs Haskell Layer


I had a good experience using the Spacemacs Haskell layer.





Install a newer Emacs and install Spacemacs. Press the following four keys to get to the config file:
"space" f e d

You should add haskell to the list of layers. Here is my layers list:

   dotspacemacs-configuration-layers
   '(
     html
     yaml     

     helm
     auto-completion

     emacs-lisp
     git
     haskell
     markdown
     org
     python
     spell-checking

     )

There are a few Haskell packages that need to be installed. You can try this:

export PATH=$PATH:~/.local/bin/
stack new myproject --resolver lts-14.27
cd myproject
stack install apply-refact hlint hasktags hoogle

git clone git@github.com:jaspervdj/stylish-haskell.git
cd stylish-haskell
stack install

Doing the install under a project will make it reuse the resolver for that project.

When I did my install, stylish-haskell had a version conflict problem, so I had to do a git clone of stylish-haskell and install from there instead.


SpaceVim Haskell Layer


It took a little work to get SpaceVim installed on Windows. First I installed Neovim with Scoop:

scoop install neovim

SpaceVim is a configuration for Vim and Neovim. The main idea in SpaceVim is that you hit the space bar and it will show you what options you have.

The Haskell Layer worked quite well and looked good. I used the new Windows Terminal with split screen and a stack build loop in the other pane.


Configure Neovim / SpaceVim


Installing the SpaceVim Haskell layer was very easy. Just add these 2 lines to ~/.SpaceVim/init.toml:

[[layers]]
  name = "lang#haskell"



OS for testing


Libraries should generally be cross platform. This is what I tested on.

OS X and Windows 10

Intero and SpaceVim Haskell layer.

OS X

haskell-ide-engine and Spacemacs Haskell layer.

But they should probably also work on Linux, WSL etc.


Conclusion


Haskell already has an intimidating learning curve. With immature tooling, Haskell was a language only for language researchers and diehard hackers.

Haskell tooling has gotten much better, but I am spoiled and I prefer to work in an IDE-like environment.

Haskell does not have a first class IDE like IntelliJ for Java, but all of these libraries provide a pleasant development environment. They are not super stable, and I find myself going back and forth between them depending on the project.

Haskell is now ready for casual users to explore a pure functional language and see if they find mathematical enlightenment.

Saturday, February 22, 2020

Haskell and Hadoop the Aftermath

In 2012 Haskell and Hadoop were the hottest technologies. They had a lot of hype and I loved them. Both were based on functional programming and built on towering abstractions.

Elite functional programmers used Haskell. Serious tech startups had to use big data, meaning Hadoop. Three years later I had learned Haskell and Hadoop and my top advice to startups was:

Don't use Haskell or Hadoop!

They won't you give you a competitive advantage they will just slow you down.

That was my personal experience. For years after that I avoided jobs involving Hadoop, but for the last couple of years I have mainly been working on Hadoop with Spark. It is now solid and very productive.

I found the productivity increase quite remarkable. Some of it is a textbook example of the technology life cycle, but some of it comes down to understanding the power and limitations of functional programming.


Modern Programming Paradigms


There are three main modern programming paradigms:
  • Object oriented
  • Functional
  • Declarative

Object oriented programming gives you fine-grained control. Functional programming uses transformations with less control. In declarative programming you just write queries and you have little control. The higher the abstraction the less control.


Essential Hadoop



The breakthrough of Hadoop / MapReduce was that, by using functional programming transformations, you could distribute a computation over thousands of computers in a fault tolerant way. This was a monumental achievement, but what made Hadoop the dominant data platform it is today was that it later combined functional programming with the declarative programming available in Spark SQL, Hive or Pig.

Combining functional and declarative programming was once a holy grail in computing, but nobody knew how to do it. Today it is ubiquitous and free, until you get the bill from your cloud provider.
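
A small Scala sketch of that combination in Spark: functional transformations followed by a declarative SQL query over the same data (the input file name is made up):

import org.apache.spark.sql.SparkSession

object FunctionalPlusDeclarative {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Functional: distributed transformations, as in classic MapReduce
    val words = spark.read.textFile("logs.txt") // made-up input file
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)

    // Declarative: the same data queried with SQL
    words.createOrReplaceTempView("words")
    spark.sql("SELECT value AS word, count(*) AS n FROM words GROUP BY value ORDER BY n DESC")
      .show(10)

    spark.stop()
  }
}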



Essential Haskell



I expected Haskell to be a mathematical version of Python. It was not. If you try to do object oriented programming in Haskell, it will cause you a lot of pain, unlike OOP in hybrid languages like F#, OCaml or Scala.

The power of Haskell is that it limits you to a small set of basic operations that compose. This allows you to build a big machine out of simple parts. The lazy evaluation makes it natural to work on infinite streams of data. The powerful type system makes it possible to connect small pieces of code in many different dimensions. My metaphor is:

Haskell is an extra dimensional Lego set
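
The infinite-stream idea even carries over to Scala 2.13's LazyList, which borrows Haskell's lazy evaluation; a small sketch:

// Infinite stream of Fibonacci numbers, computed lazily on demand
lazy val fibs: LazyList[BigInt] =
  BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

fibs.take(10).toList // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34)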

Haskell started as a playground for experimenting language researchers. I wanted to play with all these shiny theoretical toys. That was a big time sink and part of the reason it took me a long time to learn.


Common Problems


One reason I gave up on both Haskell and Hadoop was that it was hard to get things done. Both were beautiful abstractions built on a tower of unstable software libraries. Everything was evolving quickly, which made it hard to keep the underlying libraries on compatible versions. Every time your Hadoop distribution was updated, your code would break.

In Haskell this problem was called Cabal Hell, after the build system Cabal. There are now simple solutions: Haskell has curated sets of library versions that work with each other, and a modern build system called Stack. Tooling in both Haskell and Hadoop is now quite good.


The Aftermath


I spent more time and effort learning Haskell and Hadoop than any other technologies. With that much effort I expected them to give me superpowers. Instead they slowed me down. This caused a backlash. I felt naive for jumping on the Haskell and Hadoop bandwagon and wasting so much time.

Now, 8 years later, the dust has settled, and part of my problem was that I was an early adopter of immature technologies. Haskell and Hadoop are now mature but inherently complex technologies. They draw their power from giving up fine control; instead they let you build machines that you can pipe data through.

Big data in the case of Hadoop. Infinite data in the case of Haskell.

Hadoop is highly successful and is now a cornerstone of data engineering, even though it currently stands in the shadow of Spark, which was built to run on top of Hadoop infrastructure.

Haskell is a practical programming language well suited for constructive mathematics and category theory, but it is not a better version of Scala. It is pretty successful, at number 19 on the RedMonk programming language ranking, and is used in industry.


Sunday, October 20, 2019

F# vs Scala

F# and Scala are both hybrid functional object-oriented languages created for popular virtual machines.

  • F# for CLR / .NET
  • Scala for JVM / Java


F# and Scala are now in more direct competition after Microsoft open sourced F# and .NET Core. They have many similarities but a distinctly different feel, and it was hard for me to put my finger on the difference. This blog post investigates their design decisions and use cases, starting with a brief overview of F# and Scala.


F# (F Sharp)




F# is a mature, open source, cross-platform, functional-first programming language. It was created by Don Syme in 2005 as a port of the OCaml language to .NET.
  • Core: Strict, strong, inferred, hybrid
  • Popularity: Some use in industry and backed by Microsoft
  • Complexity: Easy to learn, but part of a big ecosystem
  • Maturity: It is 14 years old and part of .NET, so quite mature
  • Tooling: Very good, both .NET based and F# specialized
  • Cross platform: with Mono and .NET Core and JavaScript
  • IDE: Visual Studio, VS Code


Scala



Scala combines object-oriented and functional programming in one concise, high-level language. It was created by Martin Odersky in 2004.

  • Core: Strict and lazy, nominal and structural, hybrid, implicit for IoC
  • Popularity: Very popular. No 13 on Red Monk June 2019 list. Spark is written in Scala
  • Complexity: It is a quite complex language, but it is easy to get started with
  • Maturity: Very stable. Run on JVM, well integrated with JVM ecosystem
  • Tooling: Great build tool and package managers
  • Cross platform: JVM and JS. Also early work on native / LLVM version
  • IDE: IntelliJ, VS Code, Eclipse


Microsoft Open Source Bet


Choosing between F# and Scala used to be pretty easy: if you were doing Windows development you would use F#; if you were on an open source stack you would choose Scala.

In 2012 Microsoft open sourced F# and started porting it to Mono, a Microsoft-supported cross-platform version of the CLR. That was cool, but not something I would run production code on.

However, in September 2019 Microsoft released .NET Core 3, an open source cross-platform version of a big part of their SDK, along with a first release of Apache Spark for .NET.

After this, .NET and F# are serious contenders for being part of an open source stack.


Relation to Java and C#


You might think that F# is just the .NET version of Scala and moving from Java to Scala is similar to moving from C# to F#. This is not the case.

Java was a small and simple language with a lot of innovations but some annoying problems. A big part of Scala's appeal was that it was a better Java with more features.

C# was also made to be a better Java. It fixed some of the flaws in the original Java, e.g. auto boxing of integers, generics and lambdas. C# is a great but also very big language. F# is more like a leaner version of C#, with fewer features.


Collection Libraries


Scala has made a big effort to make a full set of immutable and mutable Scala collections and make different Java collections look like they are native Scala collections.

When I tried Scala in 2007, it had generics and could use Java generics, but you were either programming in Java collections or in Scala collections. It took a long time to get this right, and the cost was that the standard library code became very complicated. This is not really a problem for the user, who won't see it.

Generally, F# uses a few collections: arrays, lists, seq, set and map.
It is a bit messy to bridge the OCaml and the C# heritage; especially maps / dictionaries are clumsy.


Monads


A monad is an important part of functional programming. It is a general principle for expressing a sequence of operations, and it works on a lot of different data types:
List, Seq, Future, Option.

Scala monadic for comprehension



Scala's version is syntactic sugar over flatMap(). It is quite flexible: it can mix two types of monads, say List and Option, and the comprehension returns the same type as the input type.
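
For example, an Option can be mixed into a List comprehension:

case class User(name: String, managerId: Option[Int])

val users    = List(User("Ann", Some(1)), User("Bob", None))
val managers = Map(1 -> "Carol")

// Desugars to flatMap / map; the Options mix into the List comprehension
val names: List[String] = for {
  user <- users            // List
  id   <- user.managerId   // Option
  name <- managers.get(id) // Option
} yield name
// names == List("Carol")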

F# monadic for comprehension



F#'s version of the monad computation is called a computation expression. It has more features than Scala's.



Classes


Classes are considered an anti-pattern by functional programming purists. Some problems with classes are:

  • A class maintains state
  • A class creates a custom language instead of reuse of operations
  • Inheritance creates tight coupling

Scala has a very sophisticated type and class system, and classes are a central part of Scala.

F# has support for classes, but it bills itself as doing object programming, not object-oriented programming. F# is made to use classes defined in C#, but it will often define objects with methods without a full class definition.

I like that F# is exploring more lightweight alternatives, but classes are easy to create and feel natural to use.


Type Classes, IoC, DI, Type Providers


A type class is a powerful abstraction that can make a third party class implement an interface. Type classes play an important role in Scala and are implemented with helper classes created by implicits.

Inversion of control and dependency injection are first class in Scala with implicits. This is an advanced but very useful feature of Scala.

Scala has developed these ideas to the point where you can do logic style programming with implicits. A lot of the more sophisticated category-theory-like programming is based on this.
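
A minimal sketch of the pattern: an implicit value makes an existing type satisfy a new interface, without touching the type itself:

// A type class: a capability defined separately from the data types
trait Show[A] { def show(a: A): String }

object Show {
  implicit val intShow: Show[Int]       = (a: Int) => s"Int($a)"
  implicit val stringShow: Show[String] = (a: String) => "\"" + a + "\""
}

// The compiler fills in the implicit instance
def display[A](a: A)(implicit s: Show[A]): String = s.show(a)

display(42)   // "Int(42)"
display("hi") // "\"hi\""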


You can do inversion of control and dependency injection in F# using libraries.

F# has type providers that generate typed access on the fly to a lot of different data sources, e.g. a table on a webpage.


Design Decisions


F# is a whitespace-indentation-based language. Scala is a curly-bracket language.

F# is a lightweight language with strong composability.

Scala has a sophisticated type system, including type classes; this unifies a lot of different classes and facilitates reuse.

An F# program feels a little more like a loose collection of definitions, while a Scala program feels more like a carefully packaged system.


Conclusion


F# and Scala have a lot in common. To a large extent you would still choose F# or Scala based on your platform choice.

Both languages are very well suited for building back end programs that can interact with a universe of libraries written in C# or Java.

Scala has more momentum and a better niche. It still has status as a better Java, even after Java added some of the best constructs from Scala. Spark has made Scala a cornerstone of data engineering.

F# is more lightweight than Scala. This makes it great for data exploration and for building small scripts. It remains to be seen how well supported Spark will be on .NET.

From a language evolution perspective, the object functional hybrid has been very successful. F#'s and Scala's different emphases have produced different languages from similar goals. I am very happy that we can now compare their design decisions on merit, not just compare the .NET and Java ecosystems.

This article is an elaboration on my last blog post, Typed Functional Languages 2019.

Disclaimer: I have been a happy Scala user for years, and only occasionally use F#.

Tuesday, September 10, 2019

Typed Functional Languages 2019

This post is a brief status of the state of typed functional languages in late 2019.

Typed functional languages like Clean, Haskell and OCaml were developed within academia in the 1990s. Around 2010, languages like F# and Scala were gaining some acceptance in industry. Today there are many great typed functional languages, several used in industry. I will give a brief side by side introduction to the following languages:

  • F#
  • F*
  • Haskell
  • OCaml
  • Rust
  • Scala
  • TypeScript
Concepts from typed functional languages have also spread into object oriented languages like C++, C# and Java. The distinction between OOP and typed functional is fluid, so this list might seem a little arbitrary.

These languages are best of breed, so the point of this article is not to rank them by merit, but to explore which language to use for which purpose. A follow-up post covers F# vs Scala.


F# (F Sharp)




F# is a mature, open source, cross-platform, functional-first programming language. 
  • Core: Strict, strong, inferred, hybrid
  • Popularity: Some use in industry and backed by Microsoft
  • Complexity: Easy to learn, but part of a big ecosystem
  • Maturity: It is 14 years old and part of .NET, so quite mature
  • Tooling: Very good
  • Cross platform: with Mono and .NET Core and JavaScript
  • IDE: Visual Studio, VS Code

Strengths

Simple, open source, cross platform with good integration with the whole .NET universe.
Well suited for backend programming, Azure, web-serving and finance.
Type providers give easy typed access to a lot of different data sources.

Issues

IDE, GUI programming and LINQ is not as well developed as for C#.


F* (F Star)



F* is a general-purpose functional programming language with effects aimed at program verification.

  • Core: Strict, dependently typed, tactical theorem prover, constraint solver, refinement type, algebraic effect tracking
  • Popularity: Research language with very few users
  • Complexity: Quite complex
  • Maturity: Several researchers are working on it, but it is not used a lot
  • Tooling: Not super polished, but built on top of the good tooling of OCaml and F#
  • Cross platform: OCaml, F#, C, WASM and ASM
  • IDE: Support for Emacs

Strengths

F* has implemented a lot of powerful and interesting ideas that you can try and actually use. It is a very well developed dependently typed language.
It is good for validating highly sensitive security programs and encryption protocols.

Issues

There is little adoption and it has not stood the test of time yet.


Haskell



Haskell is an advanced purely-functional programming language.
  • Core: Lazy, pure, effect tracking using effect monads
  • Popularity: Prestigious research language with some industry adoption. Number 19 on Red Monk June 2019 list
  • Complexity: Very complex language
  • Maturity: It has been around for 30 years, used in industry, used for research
  • Tooling: New build tool Stack is quite nice
  • Cross platform: Runs on OS X, Linux and Windows
  • IDE: Several decent plugins for VS Code, Emacs, Spacemacs, SpaceVim and IntelliJ

Strengths

Very influential research language, test bed for a lot of language research and development.
It has been optimized for years and has some use in industry.
Type classes are built into the language so you can reuse code very broadly.
Aesthetically pleasing if you love math or category theory.

Issues

It is a very complex language, and tracking effects in non-pure computations is quite hard.
It has some use in industry, but it is still very much a research language.


OCaml



 OCaml is a strictly evaluated functional language with some imperative features.
  • Core: Strict, strong, inferred, hybrid
  • Popularity: Used as teaching language and by a few big companies
  • Complexity: It is a simple language to learn
  • Maturity: It has been around for 20 years and is used in industry so quite mature
  • Tooling: Recently it got a good build tool and package manager
  • Cross platform: Runs on a lot of different operating system, hardware
  • IDE: Language server with good integration with Eclipse, VS Code, Emacs and Vim

Strengths

Great REPL and a very fast compiler, which make it suited for tooling. Facebook uses it for web tooling.
Popular in theorem provers.

Issues

Concurrency is not great.


Rust



Rust is a multi-paradigm system programming language focused on safety, especially safe concurrency.
  • Core: Inferred, linear type, nominal, static, strict, strong, build around concurrency
  • Popularity: Quite popular and rising. No 21 on Red Monk June 2019 list
  • Complexity: Somewhat complex language
  • Maturity: Pretty new language, but used in Firefox and by AWS Firecracker
  • Tooling: Excellent build tool and package manager
  • Cross platform: Work on many different OSs
  • IDE: Good VS Code support

Strengths

Rust is a combination of ideas from OCaml, Haskell, C++, linear types and low level imperative control. It is very fast and well suited for system programming and secure programming. There is no garbage collector and no runtime, which makes Rust great for writing libraries and WebAssembly. Rust has started to make inroads in cloud infrastructure.

Issues

Getting rid of the garbage collector makes the language harder to understand and program in.
It is a pretty new language, still developing, and there are fewer libraries.


Scala



Scala combines object-oriented and functional programming in one concise, high-level language.

  • Core: Strict and lazy, nominal and structural, hybrid, implicits for IoC
  • Popularity: Very popular. No 13 on Red Monk June 2019 list. Spark is written in Scala
  • Complexity: It is a quite complex language, but it is easy to get started with
  • Maturity: Very stable. Run on JVM, well integrated with JVM ecosystem
  • Tooling: Great build tool and package managers
  • Cross platform: JVM and JS. Also early work on native / LLVM version
  • IDE: IntelliJ, VS Code, Eclipse

Strengths

Back-end programming, data engineering, web serving.
It is a great all around language. A lot of work has gone into creating language constructs that make Scala work well with Java libraries. In Scala 2.0 this was not the case.
Spark is a cornerstone in data engineering.

Issues

There is quite a lot of complexity: implicits and macros; type classes / ad hoc polymorphism are possible but take some work.
Not super easy to set up a small project.
GUI programming support is not that great.


TypeScript



TypeScript brings you optional static type-checking along with the latest ECMAScript features.
  • Core: Gradually typed, structural, many new sophisticated type constructs, data language
  • Popularity: Very popular. No 10 on Red Monk June 2019 list
  • Complexity: Pretty complex
  • Maturity: A lot of money has gone into JavaScript, it is improving but it still feels wonky
  • Tooling: NPM. There are a lot of tools in the Node ecosystem, too many
  • Cross platform: Runs in every browser and on Node.js
  • IDE: Amazing support in VS Code

Strengths

Typescript makes big JavaScript codebases a lot more robust.
It is really easy to process semi structured data in JSON.
Starting to see some use of TypeScript in machine learning, e.g. with TensorFlow.js.

Issues

JavaScript modules seem simple, like in Python or Java, but there are many different module systems and it is pretty complicated. There are a lot of NPM packages, but the ecosystem still feels less mature. Getting set up with a small project with unit tests is more work than it should be.
Concurrency: async / await dramatically simplified the callback style of programming, but it is still not great.


Golden Age Programming Languages


For many years I was puzzled about why language evolution seemed to favor bloated and hacky development while ignoring more principled computer science ideas. Twenty years ago I got very excited to read about these new functional languages with strong types. Unfortunately they were only popular in academia.

We are finally living in the golden age of programming languages. It just took some time. Development is moving quickly now and not slowing down.

Apologies in advance for omissions, outdated information and other mistakes.

Thursday, April 25, 2019

Benefits of Different Python Distributions on Mac

There are at least 5 popular ways to install Python on OS X / Mac.

  • OS X default Python installation, currently Python 2.7.10
  • Use brew install python
  • Use brew install pyenv
  • Anaconda
  • Python pkg installer from python.org

I have used all of these distributions. They are all high quality and easy to install, but you run into conflicts later. You think that you are installing a library into one Python distribution, but it gets installed into another distribution so you cannot use it. This causes many frustrating errors.

Every time I set up a Mac I have to decide what the best Python distribution is for my use case, and there is no simple choice. It has been hard to find good documentation on the trade-offs between the Python distributions. I have compiled a short list of benefits and issues, and where I think the different distributions make sense.


OS X Default Python Installation


  • You don't have to install anything
  • If you only want to have one Python distribution this will be the one
  • It is a pretty recent version of Python 2.7 currently 2.7.10

Issue

  • Does not support Python 3, which is now in common use

If you are only doing light Python 2 scripting, this is probably the easiest choice.


brew install python


  • Brew is the de facto package manager on OS X so most software is installed with brew
  • Very up to date versions of Python 2 and Python 3
  • Works well when you want to install many Python libraries
  • Python 3 is the default, but brew install python@2 will install Python 2
  • It takes precedence over the OS X default Python by being earlier on the PATH env
  • Brew will probably install Python as a requirement for other packages so you get it whether you want it or not

Good for more demanding programming and installing libraries.


brew install pyenv 


  • pyenv is a tool for having different versions of Python to choose from
  • It has no dependency on either Python 2 or 3, but manipulates the PATH env
  • It can coexist with brew install python
  • It can also work with virtual environments

Issues

  • You have to install other libraries, say gzip, before you can install this
  • Python is compiled from scratch, and you can easily run into compile problems

Good if you are a serious programmer who needs many different versions of Python, possibly with conflicting versions of libraries.
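
Typical pyenv usage looks something like this; the version numbers are just examples:

brew install pyenv
pyenv install 3.7.3       # compile and install a specific Python
pyenv global 3.7.3        # default version for your user
pyenv local 2.7.16        # pin a different version for one project
python --version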


Use Anaconda


  • Anaconda installs different versions of Python with high quality curated packages specialized for data science libraries
  • It can be hard to get data science libraries working with manual installs
  • It is a whole ecosystem of software 
  • Includes good Python GUI called Spyder
  • Great support for Jupyter notebook
  • Has good built in support for Python's virtual environments 

Issue

  • It is a pretty heavy distribution taking up around 3GB

I usually need the data science libraries so I install Anaconda but also end up with the brew version of Python.


Python pkg Installer From python.org


  • It is the official Python distribution
  • You can always get the newest version of Python
  • Self contained installer

It is an easy way to get the latest version of Python installed.