We recently started work on an interesting but deceptively simple project: a device that tracks and helps improve the user’s posture. It has built-in sensors that constantly feed data into its embedded software. In turn, the software needs to determine whether the posture is OK, based on the sensor readings. If it is not, the software also needs to determine what exactly is wrong with the user’s posture.
Smooth as Math
In typical Pareto fashion, all went smoothly for about 80% of the project. Using simple algorithms, we were able to determine the majority of bad postures.
Such an algorithm looks something like:
If the values received from sensors 3 and 6 are much lower than the average of the others (that is, below a given threshold), then we certainly have [bad posture type 4].
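A rule like that can be sketched in a few lines of Java. The sensor indices, the sample readings, and the threshold below are illustrative placeholders, not the real values from our device:

```java
// Sketch of a simple threshold rule for one bad posture type.
// Indices 3 and 6 and the 0.5 threshold are illustrative only.
public class PostureRules {

    /** Returns true if readings at positions 3 and 6 are well below the average of the rest. */
    static boolean isBadPostureType4(double[] sensors) {
        double sum = 0;
        int count = 0;
        for (int i = 0; i < sensors.length; i++) {
            if (i != 3 && i != 6) { // average over all the other sensors
                sum += sensors[i];
                count++;
            }
        }
        double average = sum / count;
        double threshold = 0.5; // hypothetical threshold
        return sensors[3] < average - threshold
            && sensors[6] < average - threshold;
    }

    public static void main(String[] args) {
        double[] readings = {1.0, 1.1, 0.9, 0.1, 1.0, 1.2, 0.2, 1.1};
        System.out.println(isBadPostureType4(readings));
    }
}
```

Rules of this kind are cheap to evaluate on an embedded device, which is why they carried us through most of the posture types.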
However, basic math and (mere human) logic proved not to be enough. There was one type of bad posture that was hard to detect from the sensor readings. No matter what we tried, only about half of the results were correct, while the rest were either false positives or false negatives.
But wait a minute, I know a machine that can (learn how to) solve this
After a few failed attempts, we were the ones who could have used a device that noted our increasingly bad posture, so we changed our approach. Instead of bashing our brains against the large amount of data, we decided to create a new, more specialized brain … to bash. And bash we did.
Enter Neural Networks and Neuroph
First off, we had to test the viability of our new approach. As such, we needed to be able to quickly create, train and test various neural network types and configurations.
We chose Neuroph.
Neuroph is a lightweight Java neural network framework for developing common neural network architectures. It contains a well-designed, open-source Java library with a small number of basic classes that correspond to basic NN concepts, plus a nice GUI neural network editor for quickly creating Java neural network components. It has been released as open source under the Apache 2.0 license, so it’s free for you to use.
After a few hacks, like altering the command-line switches in neurophstudio/etc/neurophstudio.conf (WAT?), we managed to get NeurophStudio working (that is, “-J-Xms768m -J-Xmx1024m” for you techies out there).
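For reference, NeurophStudio is built on the NetBeans platform, so those JVM switches would typically be appended to the `default_options` line of the conf file. The exact contents of that line vary by version, so treat this as an illustrative fragment, not the literal default:

```
# neurophstudio/etc/neurophstudio.conf (illustrative fragment)
default_options="... -J-Xms768m -J-Xmx1024m"
```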
Beyond this, we just followed well known recipes and it was pretty much smooth sailing.
Long story short, we:
1. Collected data from colleagues
2. Normalized the data (so that the food for the artificial brain’s thoughts would be more digestible)
3. Created 2 subsets of data: one for training and one for testing
4. Implemented a few neural network types (Adaline, Perceptron and MultiLayerPerceptron) with various neural configurations
5. Trained each neural network and tested it using the data sets
6. Chose a winner based on the test results – a particular flavour of MultiLayerPerceptron (in our case)
7. Fine tuned the winner’s configuration for best results (minimizing the errors down to a specific threshold)
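As a rough illustration of steps 2 and 3 above, min-max normalization and a train/test split could look something like this. The normalization scheme and the 80/20 split fraction are assumptions for the sketch; the actual preprocessing we used isn’t spelled out here:

```java
import java.util.Arrays;

public class DataPrep {

    /** Min-max normalizes values into the [0, 1] range (assumed scheme). */
    static double[] normalize(double[] values) {
        double min = Arrays.stream(values).min().orElse(0);
        double max = Arrays.stream(values).max().orElse(1);
        double range = max - min;
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = range == 0 ? 0 : (values[i] - min) / range;
        }
        return out;
    }

    /** Splits rows into a training subset and a test subset (e.g. 0.8 for 80/20). */
    static double[][][] split(double[][] rows, double trainFraction) {
        int cut = (int) Math.round(rows.length * trainFraction);
        return new double[][][] {
            Arrays.copyOfRange(rows, 0, cut),          // training subset
            Arrays.copyOfRange(rows, cut, rows.length) // test subset
        };
    }

    public static void main(String[] args) {
        double[] raw = {10, 20, 30, 40};
        System.out.println(Arrays.toString(normalize(raw)));
    }
}
```

Keeping the test subset strictly separate from training data is what made step 6, picking a winner, meaningful.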
Was it worth the effort?
First off, in order to complete step 1 (above), I got the chance to walk up to unsuspecting colleagues and say: “Hi, my name is Vlad and I need to use your body … for science.” This is reward enough for me, personally.
As for the project itself, the goal was to reduce the large error rate.
And so we did. We started with a very large error rate of about 50% (for the given problematic case) and reduced it to less than 1%.
A huge improvement, a great result and better posture for the team!