After implementing a dopamine-based reinforcement learning (RL) algorithm with NEST, I ran into some problems using the software with machine learning (ML) algorithms. In my thesis I used the simulator NEST. It is one of the most advanced neural simulators, offers many neuron models, and has a welcoming community. Machine learning algorithms need rapid update cycles, but NEST is designed to run big simulations for a long time and to scale up to supercomputers. I also used computing resources of the Leibniz Rechenzentrum; however, I ran multiple jobs in parallel.

The tools people use are essential. Any good library or standard encodes design work efficiently and saves many hours. The success story of computing is the history of thousands of iterations improving designs. I believe that software should not expect its users to be experts before they can use it. The mailing list often saw people with questions similar to the problem I encountered when using NEST with rapid cycles. Many of the researchers coming from neuroscience are not experienced developers. I believe that in the future more people will look into the intersection of ML and neuroscience. I felt compelled to act on my knowledge. NEST is open source, so I joined the fortnightly meetings and discussed my idea. I wrote a proposal and discussed it again. Then I submitted a pull request. Unfortunately, it was not a solution covering all use cases and was closed at that point. I understand the decision from the perspective of a neuroscientist, yet it is unfortunate for machine learning.

At this point I think it is reasonable to halt my advances in this area. The hurdle is too high to make NEST fit for machine learning as a side project.

To research the application of building homegrown neuromorphic computers, I started to port the SNN-RL framework to a custom C++ back-end with multiprocessing, thus skipping the inefficiencies of NEST for this use case and enabling real-time processing. Once it has been shown to work on von Neumann computers, I will showcase it on a real pole-balancing apparatus, bringing the algorithm into the real world. The apparatus is almost constructed. The algorithm will later be extended to run on FPGAs, allowing per-neuron multiprocessing, i.e. full neuromorphic computing. My approach will not be revolutionary, but it will prove that reinforcement learning with SNNs can solve a real-world problem on custom hardware.

The new library will be open source and can be found here. The URL might break, as I may change the name. My wish is to work on this research in my free time. Although my day job includes machine learning, this is still too experimental to be applicable in industry. One potential use case is real-time control for embedded devices. Since I am now having my first experience with prolonged 40h+ work weeks, I need my free time to keep my balance. On a weekend I am happy to do something other than thinking about code and staring into screens for a while. Updates on this topic will follow.

When Python is too limited or too slow, you might want to write some code in C++. Using the Python C API, you can then expose that code to Python as an extension module.

Xcode comes with Python 2.7, which is deprecated. Below is a guide on how to link Python 3. Note that in order to install a Python module you need to run a small Python script on your C++ code, so you can only use Xcode for part of the development.
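The "small Python script" is a build script based on setuptools. As a sketch (the module and file names here are placeholders, not from my actual project), a minimal setup.py for a C++ extension might look like this:

```python
# setup.py -- minimal build script for a C++ extension module.
# "demo" and "demo_module.cpp" are placeholder names; substitute your own.
from setuptools import setup, Extension

module = Extension(
    "demo",                       # import name of the extension
    sources=["demo_module.cpp"],  # your C++ sources
    language="c++",
)

setup(name="demo", version="0.1", ext_modules=[module])
```

Running `python3 setup.py build_ext --inplace` then compiles the C++ sources with the flags reported by `python3-config` and drops the importable module next to the script.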

I have Python installed via brew, but other sources should also work if you adjust the paths.

Call python3-config --ldflags to obtain the linker flags. In my case: -L/usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/lib/python3.9/config-3.9-darwin -ldl -framework CoreFoundation

Open that path and drag libpython3.9.dylib into the Frameworks section of your Xcode project. Confirm the dialog.

Screenshot showing the Xcode project with the Frameworks folder

Open the Build Settings and search for "Header Search Paths". Fill in /usr/local/Cellar/python@3.9/3.9.1_3/Frameworks/Python.framework/Versions/3.9/include/python3.9

The path can be obtained by calling

python3-config --cflags

In your code include

#include <Python.h>

The compiler flags in the setup script differed from those in Xcode: the code compiled in Xcode, but compilation via the setup script failed. Set the compiler flags in Xcode to "default" to get the same behavior.

Having recently attended a workshop on Spiking Neural Networks (SNNs), I noticed that there is no established body of knowledge about encoding strategies.

In SNNs the input is usually a continuous analog signal. In digital simulations this signal is discretized. For processing in an SNN, the signal must be encoded as spikes. There are three major strategies to do so.

Rate Code

When information is transmitted via the firing rate, it must fit in the range between no firing and the maximum firing rate; a negative firing rate is not possible. When utilizing rate code with values that can be positive and negative, there are two ways to encode this information. One option uses two neurons per input dimension and makes either the 'positive' or the 'negative' neuron fire, depending on the sign of the encoded value. Another option is to add a bias value in the transcoding process so that negative values can be encoded in the range below this base level. The translation of an analog signal into a spiking firing rate happens via spike generators. Generators can emit sinusoidally modulated or Poisson-distributed spikes. Sinusoidal generators may create phase-locked signal propagation, while Poisson-distributed spikes prevent this by evening out repeated correlations.
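The two-neuron scheme for signed values can be sketched as a toy Poisson rate encoder. The function name and parameters below are illustrative, not from any particular simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, max_rate=100.0, dt=0.001, duration=0.5):
    """Encode a signed analog value in [-1, 1] with two Poisson neurons.

    The 'positive' neuron fires at a rate proportional to max(value, 0),
    the 'negative' neuron to max(-value, 0). Returns two boolean spike
    trains with duration/dt time bins.
    """
    steps = int(duration / dt)
    rate_pos = max(value, 0.0) * max_rate   # Hz
    rate_neg = max(-value, 0.0) * max_rate  # Hz
    # A Poisson process fires in a small bin dt with probability rate * dt.
    spikes_pos = rng.random(steps) < rate_pos * dt
    spikes_neg = rng.random(steps) < rate_neg * dt
    return spikes_pos, spikes_neg

pos, neg = rate_encode(-0.4)
print(pos.sum(), neg.sum())  # for a negative input only the 'negative' neuron fires
```

The sign of the input decides which of the two spike trains carries activity, while the magnitude sets the expected spike count.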

Place Cell Encoding

In the cochlea this is also called labeled-line coding, and it is what we use in one-hot encoded vectors in ANNs. In this case the whole sensor space is covered in a one-to-one correspondence. Receptive fields are topologically connected regions in the sensory space; many sensors are connected to one cell.

There are parallels in convolutional neural networks. Each neuron in a CNN layer has a receptive field the size of the kernel. CNN layers are usually combined with pooling layers, which report the average or maximum activity to perform a size reduction; these more closely resemble the idea of a receptive field.
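A toy sketch of place-cell-style encoding: each cell has a Gaussian receptive field centred on a point of the input space, so cells near the stimulus respond strongly and distant cells barely at all. Names and parameters here are illustrative:

```python
import numpy as np

def place_cell_encode(x, n_cells=8, sigma=0.1):
    """Encode a scalar in [0, 1] as the activity of n_cells place cells.

    Each cell responds with a Gaussian tuning curve around its centre;
    the result is a soft version of a one-hot vector.
    """
    centres = np.linspace(0.0, 1.0, n_cells)
    return np.exp(-((x - centres) ** 2) / (2 * sigma ** 2))

activity = place_cell_encode(0.5)
print(np.round(activity, 2))  # peaks at the cells whose fields cover 0.5
```

Sharpening sigma pushes the code toward a strict one-hot vector; widening it makes neighbouring cells overlap, which is closer to biological receptive fields.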

Time to First Spike

This is related to rate code: the time to the first spike behaves like the inverse of the firing rate. This encoding strategy is quite successful.

t_0 \approx f^{-1}

The benefit is that you get results earlier, with less latency: as soon as a spike is recorded in the output layer, processing can be stopped. The downside is that it is not robust, since a single spike event encodes all the information.
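Following the relation above, a minimal sketch maps the input to a single spike time, so stronger inputs spike earlier. The function name and parameters are my own, chosen for illustration:

```python
def time_to_first_spike(value, max_rate=100.0, t_max=0.1):
    """Map an analog value in (0, 1] to a single spike time in seconds.

    Following t_0 ~ 1/f: a strong input (high equivalent rate f) spikes
    early, a weak input spikes late, and an input of 0 never spikes.
    """
    if value <= 0:
        return None           # no spike at all
    f = value * max_rate      # equivalent firing rate in Hz
    return min(1.0 / f, t_max)

print(time_to_first_spike(1.0))   # strong input -> early spike
print(time_to_first_spike(0.2))   # weaker input -> later spike
```

Decoding works the same way in reverse: record when the first output spike arrives and invert it back into a value, then stop the simulation.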

Based on my work on my master's thesis, I produced a video giving an overview and presenting some discoveries on spiking neural networks. Familiarity with artificial neural networks is expected.

My last video production was years ago, and I noticed that my workflows were not as productive as I expected. I exported the Keynote presentation and added a voiceover in iMovie. Using only static images makes it a lot easier to match the voiceover with the visuals than timing videos would be. The video is based on my thesis defense, so some previous knowledge may be expected. E.g., I missed mentioning why backpropagation in SNNs is not possible (there is no gradient of the activation function).

Link to Video

Taking a look at how one can work, I hope to foster discussion about healthy and productive work in this article. As a Geistarbeiter (German, roughly "mind worker"), your job is to manage information. Geistarbeiter rely on media to store and retrieve knowledge, using tools like notes, books, and computers. They use tools to extend their mind. Which tools can we use, and which await us?

Picture of my workplace

I found a mixture of different media the most fruitful for office-related Geistarbeit. For many people - writers are famous for this - the tools of choice are a piece of paper and a pen. I feel limited with pen and paper in two ways: it confines me to sitting in a chair, and erasing is not easy. When erasing is not easy, you are limited to writing sentences and graphics of simple shape, which limits complexity. I found that the saying "out of sight, out of mind" holds true. Therefore I keep current thoughts on whiteboards on walls, where I can edit them to include new insights. Once a model is complete, I create a digital copy for my archives. Some of the resulting graphics can be found on this blog.

My ideal workplace arranges information in spaces. The discovery of place cells in the human brain underscores the importance of the concept of space to the (human) mind: place cells burst spikes when an animal is in a certain region of its environment. Our neuronal structures are hard-wired to understand space.

Augmented reality is a tool of huge potential; however, this technology still lacks the resolution that would allow comfortable reading and long wearing times, and the headsets are quite heavy and become uncomfortable after a while. Next, the Valve Index® will once again push the field forward. For mobile Geistarbeit, I am curious how augmented reality might change work.

Inspired by the t3n magazine office showcase, here I present a graphic showing my current office set-up.

Sketch of my workplace

I try to use the concept of spatially arranged information. You can arrange different windows on many screens or just use one huge screen. Using the two-dimensional space is useful in many situations, but a lot of information is still hidden away in files. Another issue is that working on some tasks requires a lot of space, so you take a window and stretch it to full width on your main screen, hiding all other windows.

I use an Amazon Echo, although I am quite aware of privacy concerns. At my office I usually don't speak with other people, and when I am on the phone I mute the microphone. It can be a nice tool to query the internet for information.