Author Archives: kohitij

Upcoming workshop @COSYNE 2019

Title: Challenges for deep neural network models of visual cognition: from incorporating biological constraints to predicting correlational and causal experimental outcomes.

Organizers: Kamila Jóźwik (kmjozwik@mit.edu) and Kohitij Kar (kohitij@mit.edu)


 

Workshop goals:

Deep convolutional neural networks (DCNNs) have revolutionized the modeling of visual systems. However, despite being good starting points, they do not fully explain brain representations and are unlike the brain in many ways. DCNNs differ from the brain in their anatomy, task optimization, learning rules, etc. For instance, most DCNNs lack recurrent connections, are supervised learners, and do not have brain-like topography. The next generation of models could therefore benefit from incorporating these critical brain-inspired constraints. On the other hand, model predictions need to be experimentally validated. A common trend is that experimental data collection and modeling are executed somewhat independently, resulting in very little model falsification and thus no measurable progress. Our main proposal is that synergy and collaboration between computational modeling and experiments are critical to success. The approach needs to be a closed loop: models predict experimental outcomes, experimental outcomes falsify models, and better models with further experimentally derived constraints are built. We aim to list and discuss the challenges we face and the path forward in establishing this closed loop to solve visual cognition.

Why is it of interest?

We are finally at a stage in visual neuroscience where computational models have progressed beyond toy-data descriptors to genuinely predictive models approximating the ones running in our brains. We need to capitalize on this success and start asking the hard questions, demanding more stringent empirical tests to falsify current models and build better ones. This workshop sets the stage for both senior and early-career researchers to engage in a very objective discussion of what the next steps should be. The diverse group of invited speakers, with their varied scientific approaches, will ensure challenging and interesting discussions.

Targeted participants: Neuroscientists working with different animal/computational vision models.

For the date, time, and other details, click here.

Tool to read faster

I recently came across the website of a company called Spritz. They claim to have found a way to scientifically speed up your reading.

Here’s how it works. First, they identify the optimal recognition point (ORP) of a word. Then they highlight that letter in red (the rest of the letters are black). As a reader, you fixate around the ORP while each word in the text is shown at a set speed. No large eye movements are required, which saves you from fatigue. If you keep at it for a while, your reading speed apparently increases; you can go from 130 words/minute to 250 words/minute pretty quickly. They also pause and introduce gaps at punctuation. I have tried it, and it seems to work. It is best for reading books (without images) and news articles. Here’s a snapshot of the app (iTunes link; free download, but you have to pay $4 to really reap its benefits).


Screen shot from the app

In the screenshot, you can see the ORP in red and the main text (blurred, at the bottom). Each word is shown one by one.
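The mechanism can be sketched in a few lines of Python. To be clear, this is my guess at the heuristic, not Spritz’s actual algorithm: I assume the ORP sits roughly a third of the way into the word and that punctuation doubles the pause.

```python
import time

def orp_index(word: str) -> int:
    """Guessed optimal recognition point: slightly left of center."""
    n = len(word)
    if n <= 1:
        return 0
    if n <= 5:
        return 1
    return n // 3

def frames(text: str) -> list:
    """Format each word with its ORP letter bracketed (red in the app)."""
    out = []
    for word in text.split():
        i = orp_index(word)
        out.append(word[:i] + "[" + word[i] + "]" + word[i + 1:])
    return out

def play(text: str, wpm: int = 250) -> None:
    """Flash one word at a time; pause longer at punctuation."""
    delay = 60.0 / wpm
    for shown, word in zip(frames(text), text.split()):
        print("\r" + shown.center(20), end="", flush=True)
        time.sleep(delay * (2.0 if word[-1] in ".,!?;:" else 1.0))
```

For example, `frames("I can read faster")` marks one letter per word, and `play(...)` paces them at the requested words per minute.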

Melanie wins Gold medal at the 57th annual Hudson County Science Fair


I am delighted to share the news that Melanie Arroyave (Bayonne High School), who worked with me for a couple of months last summer (on concurrent transcranial electrical stimulation and fMRI) through the Partners in Science Program ( http://lsc.org/for-educators/programs-at-the-center/partners-in-science/ ), won the gold medal at the 57th annual Hudson County Science Fair. She presented the work she did with me here at Rutgers. Congrats Melanie!

Click here for the link to the news article in the Jersey Journal.

She is super excited and wants to come back this summer to work in our lab (Krekelberg Lab)! It makes me happy because she is one of the youngest minds we have stimulated, and that too without putting any electrodes on it!

Defending my Thesis (video) on Dec 16, 2014

Finally, the day has come and gone…

Strangely, I feel more relieved than happy. Most likely because I am back the next day where I have always belonged (the lab space), doing pretty much the same thing and enjoying the ability to think freely again. But one task is off the to-do list! I am attaching the video of my defense talk. This serves the following purposes:

1. My parents can now watch it from India.
2. If you are a dear friend and missed it, here’s your chance to watch it.
3. Future recruiters who might wanna check me out can do so without me knowing, or without my having to travel to their home ground to defend myself.

Click on pic to watch the video ...


 

Partners in Science – 2014

This summer I got to interact with and mentor two very nice and smart high school students through the Partners in Science program. They are not even juniors, but they really worked hard and grasped most of the key concepts of my PhD thesis. We worked together to analyse some fMRI data from a very recent (unpublished) experiment. I am putting their presentation video here. Mentoring them has been a great learning opportunity for me too, and ultimately a very satisfying experience. I would very much encourage any grad student to try to do this if possible. Not only is it a rewarding experience for us, it is also a great way to get kids interested in real science. Occasionally, I was also reminded how simple questions asked by students can make you think harder than ever and realize how important the basics are.

Click on the picture to see the video


 

Computational Neuroscience in Vision @ CSHL: The Epilogue

As time went on, it became tougher to post here. But the course went on at full force and left us wanting more. I can say without any doubt that this has been the single most influential two weeks of my scientific career so far. Just getting to know so many like-minded people, chilling out with them, discussing ideas, and listening to and learning from their opinions has definitely made me a more complete science enthusiast (if there is such a thing). A few crucial lessons I learnt (on the bigger picture) from the course are:

1. Decide if you wanna catch the fly or count the hairs on its legs (#Pascal).
2. We all suffer from impostor syndrome, and hence we shouldn’t worry about it (#Weiji).
3. You can perhaps decode which face I showed you from V1 voxels and probably not from FFA voxels, but that doesn’t make V1 a face area or FFA not a face area. So what does decoding really help decipher? (#Tony)
4. 10-20% of all grad students will probably make it to being PIs. You are in that group. So just keep working as hard and you’ll get there (#Geoff).
5. Developing sound intuitions about the use of various computational methods is what we can really take home from these meetings (#self).
6. Knowing the right person at the right time is not luck but the result of multiple factors (having a real interest in science, publishing papers, attending these kinds of summer courses, being genuinely interested in learning from others’ work, etc.) that maximize the likelihood of the event!

Computational Neuroscience in Vision @ CSHL: Day 4

Day 4 started off with Geoff’s 2nd lecture on fMRI. After briefly recapitulating the properties of a linear system, he spoke about the use of general linear models in fMRI analysis. He also mentioned ‘m-sequences’. At the end of his talk, Geoff told us he thinks vision neuroscientists have been completely focused on peripheral vision and that this should change. The afternoon session was taught by Stephanie. This was the first time the weaponry (Nerf guns) was officially employed 🙂 In fact, Stephanie herself asked us to shoot if we didn’t understand a concept or if she went too fast. And since Jonathan invited her, she gave us the option to shoot at Jonathan instead as well! The pic below summarizes it all…

Nerf wrecking lessons


Stephanie talked about information theory. During the break we played “the spike lottery” (I will explain it some other day).

Computational Neuroscience in Vision @ CSHL: Day 3

Another great day, as it started with bacon and sausages for breakfast.

Gregg started off the day with a very nice introduction to white-noise analysis of V1 data. The best part of the talk for me was the geometric interpretation of the STA; that has been inserted into my “take home” list of ideas and interpretations from the course. Gregg also explained really well how we can estimate the nonlinearity in the LNP model. He also went over contrast invariance in V1 complex cells and the reasons to look beyond the STA into STC and related methods.
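For my own notes, here is a minimal sketch of the STA on a simulated cell. Everything here is made up (the filter shape, the rectifying nonlinearity, the stimulus size), and I’m using Python rather than the course’s MATLAB, but the recipe is the standard one: average the white-noise stimulus frames weighted by the spikes they evoked.

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise stimulus: 5000 frames of a 16-sample 1-D "image"
n_frames, dim = 5000, 16
stimulus = rng.standard_normal((n_frames, dim))

# A hypothetical linear filter (Gaussian bump) for the simulated cell
true_filter = np.exp(-0.5 * ((np.arange(dim) - 8) / 2.0) ** 2)
true_filter /= np.linalg.norm(true_filter)

# LN front end: project onto the filter, rectify, draw Poisson spikes
drive = stimulus @ true_filter
rate = np.maximum(drive, 0)          # half-wave rectifying nonlinearity
spikes = rng.poisson(rate)

# STA = spike-weighted average of the stimulus frames
sta = (spikes @ stimulus) / spikes.sum()
sta /= np.linalg.norm(sta)

# With a Gaussian stimulus, the STA recovers the filter up to noise
print(np.dot(sta, true_filter))
```

The printed dot product should come out close to 1, which is the geometric point: the STA is the direction in stimulus space toward which the spike-triggered ensemble is shifted.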

In the afternoon session, Geoff gave us a hands-on tutorial on psychophysics and signal detection theory in MATLAB. At 3 pm, we started watching the World Cup game, which ended in my disappointment 😦
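The core signal-detection computation from that tutorial is compact enough to jot down here. This is a Python sketch (the course used MATLAB), and the observer’s hit and false-alarm rates below are hypothetical numbers, not data from the tutorial.

```python
from statistics import NormalDist

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate):
    the separation of the signal and noise distributions in z-units."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response bias c: negative = liberal, positive = conservative."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Hypothetical observer: 84% hits, 16% false alarms
print(round(dprime(0.84, 0.16), 2))  # ≈ 1.99
```

Since z(0.84) ≈ 0.99, this symmetric observer has d' just under 2 and essentially zero bias.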

Dinner was brilliant as always! We met Stephanie (who’s giving a talk tomorrow) at the dinner table, and we all had a very philosophical discussion (topics: are computational principles generalizable across species? Are LFPs epiphenomena? etc.)

Marjena (one of our TAs) presented her research after dinner, which was of course followed by two rounds of mafia.

 

Computational Neuroscience in Vision @ CSHL: Day 2

Day 2 started off with EJ telling us more about retinal ganglion cells (RGCs) and how they form mosaics covering the entire visual space (with a little more on the diversity of cell types and their functions).

Key points I noted:

– RGC spike-timing precision is on the order of 1-2 ms, whereas cortical cells in general are on the order of ~10 ms or greater. There is a lot of correlated firing in RGCs, mainly driven by shared inputs (cones). He also emphasized that the morphology of RGCs changes systematically with eccentricity.

– There is as yet no direct evidence of direction-selective cells in the primate retina.

– He started discussing the use of the STA to estimate RGC receptive fields and left the details for Jonathan.

Jonathan formulated the LNP (linear-nonlinear-Poisson) model and mainly spoke about its implementation and limitations (I am working on a tutorial based on that).
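Until that tutorial is ready, the three stages of the model can be sketched as below. This is a bare-bones Python illustration with an invented temporal filter and bin size, not Jonathan’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.01                                  # 10 ms bins (assumed)
stim = rng.standard_normal(1000)           # white-noise stimulus
k = np.array([0.1, 0.3, 0.6, 1.0, 0.6])    # made-up temporal filter

# 1. Linear: convolve the stimulus with the filter
drive = np.convolve(stim, k, mode="same")

# 2. Nonlinear: a pointwise exponential keeps the rate positive
rate = np.exp(drive)                       # spikes/s, up to a gain

# 3. Poisson: spike counts drawn independently in each bin
counts = rng.poisson(rate * dt)

print(counts.sum(), "spikes in", len(stim) * dt, "seconds")
```

The limitations Jonathan discussed are visible even in this toy version: the Poisson stage has no refractoriness or history dependence, and the single filter cannot capture, say, complex-cell behavior.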

Lunch followed … It was a nice and sunny day.

Where all the action happens!


Next we had Eero give a talk. For the first time we used the blackboard, as he drew most of the material to explain it to us more lucidly. There was a mention of avoiding the “Henry Markram way” of doing research 🙂 Most of the talk was geared toward understanding the basic principles of efficient encoding and decoding. He touched upon maximum likelihood estimation, Fisher information, etc. A lot to digest! Hopefully the tutorials tonight will shed light on some of it. The lecture ended at 4:20 pm to let us watch Brazil humiliate themselves again.

Brazil (0) vs Netherlands (3)


 

The day ended with some of us playing mafia (which I believe will be repeated in more exciting and intense ways in the days to come).

"Mafia" in action


Computational Neuroscience in Vision @ CSHL: Day 1

The first day of the course started off with a solid breakfast.

The course is taking place at the Banbury Conference Center. The session began with a general introduction of all the organizers, TAs, lecturers, and students. One striking thing is how many of us work with animal models (around 60%). Of course, from here on the main focus was on Tony Movshon; we started feeling his presence right away. The mood was always kept light with a few jokes here and there.

Key components of Tony’s talk

Synopsis: He mainly talked about three things: 1. what features a visual image generally comprises, 2. how they get encoded in the early visual system, and 3. how these encoded signals are later decoded higher up.

1. Components of a visual image: the plenoptic function (x, y, t, λ, Vx, Vy, Vz) and the elements of early vision (gotta read Adelson and Bergen 1991; it has 1179 citations!). Rather than the values of each parameter, their derivatives are more informative. A cool thing he said is that you can think of motion as orientation in space (x) and time, and disparity as orientation in space and eye position.

2. Visual information encoding: He talked about the functions of the retina, followed by an explanation of spatial contrast sensitivity (another very good paper to read: Enroth-Cugell and Robson 1966). He also touched on centre-surround receptive fields, ON and OFF cells, and the issues with assuming linearity.

3. Decoding visual information: This was my favorite part (filled with choice probability, MT recordings, pooled activity, etc.).

Quote of the day: “The brain works the way it does because it’s made of meat, and meat is not deterministic.”

Some cool applications of dimensionality reduction were also shown here (Yu et al. 2009).

Lunch followed …

 


Outside the Banbury Conference Center


Side effects of FIFA 2014

After lunch we had a couple more talks. Jonathan went over stimulus encoding, decoding, and a probability primer (conditionalization, likelihood functions, etc.; I will soon add my own self-tutoring Matlab implementation of this section here). And EJ went over the retina in detail (emphasizing how crucial retinal functions are and how features like adaptation and centre-surround organization start at the level of the photoreceptors). He also emphasized that the retina is a nonlinear system.
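As a placeholder until that Matlab implementation appears, here is a quick Python sketch of conditionalization: Bayes’ rule applied on a grid, inferring a coin’s bias from flips. The coin and the data are entirely made up; the point is just likelihood × prior, renormalized after each observation.

```python
import numpy as np

theta = np.linspace(0.01, 0.99, 99)    # candidate biases P(heads)
prior = np.ones_like(theta)
prior /= prior.sum()                    # flat prior over the grid

flips = [1, 1, 0, 1, 1, 1, 0, 1]        # made-up data: 1 = heads

posterior = prior.copy()
for f in flips:
    # Likelihood of this flip under each candidate bias
    likelihood = theta if f == 1 else (1 - theta)
    posterior *= likelihood             # Bayes' rule, unnormalized
    posterior /= posterior.sum()        # condition on the observation

print(theta[np.argmax(posterior)])      # MAP estimate
```

With a flat prior and 6 heads out of 8 flips, the posterior peaks at 6/8 = 0.75, matching the maximum-likelihood answer.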

After the talks ended, we had a 3-a-side soccer match (where I scored a goal) that included Jonathan Pillow (who, as we figured out, is pretty fit), followed by dinner and MATLAB tutorials.
