Title: Challenges for deep neural network models of visual cognition: from incorporating biological constraints to predicting correlational and causal experimental outcomes.
Organizers: Kamila Jóźwik (email@example.com) and Kohitij Kar (firstname.lastname@example.org)
Deep convolutional neural networks (DCNNs) have revolutionized the modeling of visual systems. However, despite being good starting points, they do not fully explain brain representations and are unlike the brain in many ways. DCNNs differ from the brain in their anatomy, task optimization, learning rules, and more. For instance, most DCNNs lack recurrent connections, are trained with supervised learning, and do not have brain-like topography. Therefore, the next generation of models could benefit from incorporating these critical brain-inspired constraints. At the same time, model predictions need to be experimentally validated. A common trend is that experimental data collection and modeling are executed somewhat independently, resulting in very little model falsification and thus no measurable progress. Our main proposal is that synergy and collaboration between computational modeling and experiments are critical for success. The approach needs to be a closed loop: models predict experimental outcomes, experimental outcomes falsify models, and better models with further experimentally derived constraints are built. We aim to list and discuss the challenges we face and the path forward in establishing this closed loop to solve visual cognition.
Why is it of interest?
We are finally at a stage in visual neuroscience where computational models have progressed beyond toy-data descriptors to genuinely predictive models that simulate the ones running in our brains. We need to capitalize on this success and start asking the hard questions, demanding more stringent empirical tests to falsify current models and build better ones. This workshop sets the stage for both senior and early-career researchers to engage in an objective discussion of what the next steps should be. The diverse group of invited speakers, with their varied scientific approaches, will ensure challenging and interesting discussions.
Targeted participants: Neuroscientists working with different animal/computational vision models.
Date/time: coming up….
I recently came across the website of a company called Spritz. They claim to have found a way to scientifically increase your reading speed.
Here’s how it works. First, they identify the optimal recognition point (ORP) of a word. Then they highlight that letter in red (the rest of the letters remain black). As a reader, you fixate around the ORP while each word of the text is displayed in sequence at a set speed. No large eye movements are required, which saves you from fatigue. And if you keep at it for a while, your reading speed apparently increases: you can go from 130 words/minute to 250 words/minute pretty quickly. The app also pauses and introduces gaps at punctuation. I have tried it, and it seems to work. It is best for reading books (without images) and news articles. Here’s a snapshot of the app. (iTunes Link; free download, but you have to pay $4 to really reap its benefits.)
Screenshot from the app
In the screenshot, you can see the ORP in red and the main text (blurred, at the bottom). Each word is shown one at a time.
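The mechanics described above can be sketched in a few lines of Python. This is a minimal illustration, not Spritz's actual (proprietary) algorithm: the `orp_index` heuristic (a point slightly left of the word's center) and the punctuation-pause factor are my assumptions.

```python
def orp_index(word: str) -> int:
    """Rough heuristic for the optimal recognition point:
    slightly left of the word's center (an assumption,
    not Spritz's proprietary rule)."""
    n = len(word)
    if n <= 1:
        return 0
    if n <= 5:
        return 1
    return n // 4 + 1

def schedule(text: str, wpm: int = 250):
    """Yield (word, orp, delay_in_seconds) for a word-by-word
    display, pausing longer at punctuation as the app does."""
    base = 60.0 / wpm  # seconds per word at the chosen rate
    for word in text.split():
        delay = base
        if word[-1] in ".,;:!?":
            delay *= 2  # extra gap at punctuation (assumed factor)
        yield word, orp_index(word), delay
```

A display loop would then fixate-align each word on its ORP letter (rendered in red) and sleep for the scheduled delay, so the reader's eyes never have to move.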
I am delighted to share the news that Melanie Arroyave (Bayonne High School), who worked with me for a couple of months last summer (on concurrent transcranial electrical stimulation and fMRI) through the Partners in Science Program ( http://lsc.org/for-educators/programs-at-the-center/partners-in-science/ ), won the gold medal at the 57th annual Hudson County Science Fair. She presented the work she did with me here at Rutgers. Congrats, Melanie!
Click here for the news article in the Jersey Journal.
She is super excited and wants to come back this summer to work in our lab (Krekelberg Lab)! It makes me happy because hers is one of the youngest minds we have stimulated, and without putting any electrodes on it!
The primary purpose of this article is to make my intentions clear. A bit of introspection made me realize I have to justify maintaining a scientific blog, so I am laying out my thought process here.
What’s here is not novel science
Any novel, unpublished material lying around on my desk is not going to make it here. The proper procedure for that is to pursue it further and publish it with sufficient peer review.
So what will be the content on this blog?
When I started learning about neuroscience, I made a few notable observations.
First, neuroscience keeps changing every day, so it is important to keep yourself updated (at least in the area that interests you). On this blog, I will be posting and elucidating (and sometimes criticizing) new material that interests me.
Second, the quality of textbooks out there on neuroscience is pathetic. There are reasons for this. Besides being largely unexplored, neuroscience is also highly interdisciplinary, so focusing on any particular aspect of it forces you to leave out multiple other dimensions. Most books therefore take a middle-of-the-road stance. This makes them attractive to beginners but kills them for advanced graduate students (who then find solace in the latest journal articles, which are of course more difficult to comprehend). So as I keep learning new methods and techniques, I feel it is my duty to explain them in lucid terms to new students so that they can grasp them faster and better. I use MATLAB simulations whenever possible to explain concepts to myself. There will be a lot of that here.
Third, because of space constraints and sometimes simply tradition, many concepts in journal articles are explained with suboptimal clarity. Here, I will try to elaborate on those aspects (especially for my own articles; less so for others').
When I was 15, I wanted to become a film director. Now I am a neuroscientist and love what I do. Some directions are decided on the fly, so the rest of the blog's themes will be updated as it evolves.