Recent Thoughts on Neural Nets:

After attending Stanford’s Theoretical Neuroscience course yesterday I started wondering about more biologically derived methods of training neural nets, specifically: incorporating neurogenesis, the simplicity of early classifications, randomness/noise, mixtures of supervised and unsupervised learning, combining linear pre-processing with nonlinear processing and classification, and finally varying where the “raw” data is fed in, in order to obtain more general network architectures.

This has been inspired by a few sources. First was a paper Andrés Gomez said they were working through in his BCI class, showing that there are roughly 20 different types of ganglion cells in the visual system, some of which handle tasks like photosensitivity. Given my lack of knowledge of the visual system this initially seemed unimportant, but it turned out to be quite a discovery: it means that simpler *pre-processing* tasks had evolutionary “reason” to be done later in the visual system. This led me to the intuition that this may be a nice feature-detection hack to use in a neural net: by taking some or possibly all of your data and feeding it directly into the hidden layers, there might be unexpected improvement. *highly speculative*
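To make the idea concrete, here is a minimal sketch of what I mean by feeding raw data into a hidden layer, written in PyTorch. The architecture, names, and sizes (`RawSkipNet`, a 256-unit hidden layer, MNIST-shaped 784-dimensional input) are just my own illustrative choices, not a tested design:

```python
import torch
import torch.nn as nn

class RawSkipNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        # The second layer sees its own hidden features *plus* the raw pixels,
        # loosely mimicking "pre-processing done later" in the visual system.
        self.fc2 = nn.Linear(hidden + in_dim, hidden)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (batch, 784) flattened image
        h = torch.relu(self.fc1(x))
        # Re-inject the raw data alongside the learned features.
        h = torch.relu(self.fc2(torch.cat([h, x], dim=1)))
        return self.out(h)

model = RawSkipNet()
logits = model(torch.randn(32, 784))  # dummy batch to check shapes
print(logits.shape)                   # torch.Size([32, 10])
```

Whether the raw re-injection should go to one hidden layer or all of them is exactly the open “varying where the data is fed in” question from above.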

I had another idea: training a network on a regression problem of how salient an image is to the task, versus noise-filled images. It would start by estimating a real number in [0,1], with 1 being noiseless (possibly a task for denoising autoencoders, not sure); then, after it has learned what type of images it’s supposed to be learning, add on an additional layer(s) and do your classification task. Initially I thought this might be all you need for it to learn good from bad input, but that’s super silly, and my non-machine-learning friend immediately pointed out the flaw: once the network is done with MNIST training, it wouldn’t in any serious sense remember the first task of telling whether a photo was salient or not (the classic catastrophic forgetting problem). So my solution, again all of this post is just speculation, would be to add a bunch of the noisy images into the final training and augment the target vector so that the 11th element purely corresponds to whether the image is salient or not. (Would there be better performance if you trained on non-digit salient images, like small pictures of random objects? Or is it better to leave your salience sensor tuned for digits? Open questions.)
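Here is a rough sketch of that final-training stage with the augmented 11-element target: cross-entropy over the 10 digit outputs for salient (real) images only, plus a binary loss on the 11th “salient?” output for everything, noise included. The loss split and the simple two-layer net are assumptions of mine, just to show the shape of the idea:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# 10 digit logits + 1 saliency logit = 11 outputs.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 11))

def loss_fn(logits, digit_labels, salient):
    # digit_labels: (batch,) class indices; salient: (batch,) floats in {0,1}.
    mask = salient.bool()
    # Digit loss only on salient images -- noise has no meaningful digit label.
    digit_loss = F.cross_entropy(logits[mask, :10], digit_labels[mask])
    # Saliency loss on every image, noise included.
    sal_loss = F.binary_cross_entropy_with_logits(logits[:, 10], salient)
    return digit_loss + sal_loss

# Dummy batch: 8 "real" digit-like images plus 8 pure-noise images.
x = torch.cat([torch.rand(8, 784), torch.randn(8, 784)])
y = torch.randint(0, 10, (16,))               # labels for noise are ignored
s = torch.cat([torch.ones(8), torch.zeros(8)])
print(loss_fn(net(x), y, s))
```

Keeping the saliency loss in the final objective is what stops the network from forgetting the first task while it learns the digits.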

As for incorporating neurogenesis and mixtures of (un)supervised and (dis)continuous methods, I’m much less speculative, and I currently feel I have good reason to work on developing networks which use these. Sadly, that means I’m going to keep the details to myself for now. If you’re reading this and are interested in collaborating, please email me and we’ll talk more.

Anyways, hopefully this summer these open questions about new neural net structures will be answered. If they already have been and my searches were faulty in returning the salient research, please link me up! (Because barking up the wrong tree is useless only if someone else has already tried barking there.)