Introducing tensorflow-ruby

Since I recently left Zerista, I’ve had some time to take a step back and think about what type of problems I want to work on next. One of the things I haven’t had a chance to dive into is deep learning, so it was time to change that.

There is no substitute for doing something, so after completing the Coursera deep learning specialization it was time to go build a neural network. I toyed with the idea of building my own implementation from scratch in Python, but I couldn’t get excited about writing throwaway code (and that’s pretty much what the Coursera classes did anyway). I also thought about picking an interesting problem and seeing if I could apply deep learning to it. That sounded like fun, but it operated at too high a level, since I’d build on a library like TensorFlow and treat it as a black box. But what about TensorFlow itself and understanding how it works? What would be involved in creating a language binding that allowed you to create and train TensorFlow models in Ruby?

And thus was born the tensorflow-ruby gem. Why Ruby? Because I like it and know it well. To be clear: I’m under no illusion that people are going to start using Ruby for developing TensorFlow models. But that’s not the point. The point is for me to learn TensorFlow, and if the code I write turns out to be useful for other people, that’s great.

After working on this on and off for the last few weeks, I’m pretty happy with my choice. It turns out TensorFlow is about 1.1 million lines of C++ code and 750,000 lines of Python. A portion of the C++ code is wrapped by a C API, and the Ruby bindings now support much of that API. As the TensorFlow documentation describes, providing TensorFlow functionality in a programming language can be broken down into these broad categories:

  • Run a predefined graph: Given a GraphDef (or MetaGraphDef) protocol message, be able to create a session, run queries, and get tensor results. Done (see the first sketch after this list).
  • Graph construction: At least one function per defined TensorFlow op that adds an operation to the graph. Ideally these functions would be automatically generated so they stay in sync as the op definitions are modified. Done.
  • Gradients (AKA automatic differentiation): Given a graph and a list of input and output operations, add operations to the graph that compute the partial derivatives (gradients) of the outputs with respect to the inputs. Done. Customizing the gradient function for a particular operation in the graph: not done.
  • Functions: Define a subgraph that may be called in multiple places in the main GraphDef. Defines a FunctionDef in the FunctionDefLibrary included in a GraphDef. A simple solution is done (see the second sketch below).
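
To make the first three items concrete, here is a minimal sketch of what building and running a tiny graph could look like from Ruby. The names below (Tensorflow::Graph, Tensorflow::Session, and the placeholder, constant, multiply, and add_gradients methods) are assumptions modeled on the underlying C API calls (TF_NewGraph, TF_SessionRun, TF_AddGradients) rather than the gem’s confirmed interface:

```ruby
require 'tensorflow'  # hypothetical require for the tensorflow-ruby gem

# Graph construction: one generated wrapper per TensorFlow op
graph = Tensorflow::Graph.new
x = graph.placeholder('x', :float)    # feedable input
w = graph.constant(3.0, 'w')          # constant weight
y = graph.multiply(x, w, 'y')         # y = w * x

# Gradients: mirrors TF_AddGradients, which appends gradient
# ops to the graph and returns handles to them
dy_dx = graph.add_gradients([y], [x])

# Run the graph: create a session, feed x, fetch y and dy/dx
session = Tensorflow::Session.new(graph)
y_val, grad_val = session.run({ x => 2.0 }, [y, dy_dx.first])
puts y_val     # => 6.0
puts grad_val  # => 3.0, since dy/dx = w
session.close
```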
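
The Functions item follows the same shape. The C API exposes TF_GraphToFunction to turn a slice of a graph into a FunctionDef and TF_GraphCopyFunction to add it to another graph’s library; continuing the sketch above, here is a guess at a Ruby spelling (to_function and copy_function are hypothetical names):

```ruby
# Hypothetical wrappers over TF_GraphToFunction / TF_GraphCopyFunction:
# package the x -> y subgraph as a reusable function...
scale = graph.to_function('scale_by_w', inputs: [x], outputs: [y])

# ...and register it in another graph's FunctionDefLibrary
other_graph = Tensorflow::Graph.new
other_graph.copy_function(scale)
```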
 
What isn’t done:
  • Control Flow: Construct “If” and “While” with user-specified subgraphs. Ideally these work with gradients (see above).
  • Neural Network library: A number of components that together support the creation of neural network models and training them (possibly in a distributed setting). While it would be convenient to have this available in other languages, there are currently no plans to support this in languages other than Python. These libraries are typically wrappers over the features described above.

It’s that last item that’s really interesting. From what I can see, all of the training code is in Python, and that code needs to be ported to Ruby. Understanding that code is really the key to understanding deep learning. And I’ve finally put in enough foundation that it’s time to get started on it.
