Monday, August 31, 2015

Cats on Treadmills (and the plasticity of biological motion perception)


Cats on a treadmill. From Treadmill Kittens.


It's been an eventful week. The 10th Anniversary of Hurricane Katrina. The 10th Anniversary of Optogenetics (with commentary from the neuroscience community and from the inventors). The Reproducibility Project's efforts to replicate 100 studies in cognitive and social psychology (published in Science). And the passing of the great writer and neurologist, Oliver Sacks. Oh, and Wes Craven just died too...

I'm not blogging about any of these events. Many, many others have already written about them (see selected reading list below). And The Neurocritic has been feeling tapped out lately.

Hence the cats on treadmills. They're here to introduce a new study demonstrating that early visual experience is not necessary for the perception of biological motion (Bottari et al., 2015). Biological motion perception is the ability to recognize and visually track the movement of a living being. It is often studied using point-light displays, as shown below in a demo from the BioMotion Lab. You should really check out their flash animation, which allows you to view human, feline, and pigeon walkers moving from right to left, scrambled and unscrambled, masked and unmasked, inverted and right side up.






Biological Motion Perception Is Spared After Early Visual Deprivation

People born with dense, bilateral cataracts that are surgically removed at a later date show deficits in higher visual processing, including the perception of global motion, global form, faces, and illusory contours. Proper neural development during the critical, or sensitive period early in life is dependent on experience, in this case visual input. However, it seems that the perception of biological motion (BM) does not require early visual experience (Bottari et al., 2015).

Participants in the study were 12 individuals with congenital cataracts that were removed at a mean age of 7.8 years (range 4 months to 16 yrs). Mean age at testing was 17.8 years (range 10-35 yrs). The study assessed their biological motion thresholds (extracting BM from noise) and recorded EEG while they viewed point light displays of a walking man and scrambled versions of the walking man (see demo).





Behavioral performance on the BM threshold task didn't differ much between the congenital cataract (cc) and matched control (mc) groups (i.e., there was a lot of overlap between the filled diamonds and the open triangles below).

Modified from Fig. 1 (Bottari et al., 2015).


The event-related potentials (ERPs) averaged to presentations of the walking man vs. scrambled man showed the same pattern in cc and mc groups as well: larger to walking man (BM) than scrambled man (SBM).

Modified from Fig. 1 (Bottari et al., 2015).


The N1 component (the peak at about 0.25 sec post-stimulus) seems a little smaller in cc but that wasn't significant. On the other hand, the earlier P1 was significantly reduced in the cc group. Interestingly, the duration of visual deprivation, amount of visual experience, and post-surgical visual acuity did not correlate with the size of the N1.

The authors discuss three possible explanations for these results:
(1) The neural circuitry for processing BM can specialize in late childhood or adulthood; that is, as soon as visual input becomes available, it initiates the functional maturation of the BM system. Alternatively, the neural systems for BM might mature independently of vision: (2) they are shaped cross-modally, or (3) they mature independently of experience.

They ultimately favor the third explanation, that "the neural systems for BM specialize independently of visual experience." They also point out that the ERPs to faces vs. scrambled faces in the cc group do not show the characteristic difference between these stimulus types. What's so special about biological motion, then? Here the authors wave their hands and arms a bit:
We can only speculate why these different developmental trajectories for faces and BM emerge: BM is characteristic for any type of living being and the major properties are shared across species. ... By contrast, faces are highly specific for a species and biases for the processing of faces from our own ethnicity and age have been shown.

It's more important to see if a bear is running towards you than it is to recognize faces, as anyone with congenital prosopagnosia ("face blindness") might tell you...


Footnote

1 Troje & Westhoff (2006):
"The third sequence showed a walking cat. The data are based on a high-speed (200 fps) video sequence showing a cat walking on a treadmill. Fourteen feature points were manually sampled from single frames. As with the pigeon sequence, data were approximated with a third-order Fourier series to obtain a generic walking cycle."


Reference

Bottari, D., Troje, N., Ley, P., Hense, M., Kekunnaya, R., & Röder, B. (2015). The neural development of the biological motion processing system does not rely on early visual input. Cortex, 71, 359-367. DOI: 10.1016/j.cortex.2015.07.029






Links to Pieces About Momentous Events

Remembering Katrina in the #BlackLivesMatter Movement by Tracey Ross

Hurricane Katrina Proved That If Black Lives Matter, So Must Climate Justice by Elizabeth Yeampierre

Project Katrina: A Decade of Resilience in New Orleans by Steven Gray

Hurricane Katrina, 10 Years Later, Buzzfeed's Katrina issue

ChR2 Anniversary: Optogenetics, special issue of Nature Neuroscience

ChR2 coming of age, editorial in Nature Neuroscience

Optogenetics and the future of neuroscience by Ed Boyden

Optogenetics: 10 years of microbial opsins in neuroscience by Karl Deisseroth

Optogenetics: 10 years after ChR2 in neurons—views from the community in Nature Neuroscience

10 years of neural opsins by Adam Calhoun

Estimating the reproducibility of psychological science in Science

Reproducibility Project: Psychology on Open Science Framework

How Reliable Are Psychology Studies? by Ed Yong

The Bayesian Reproducibility Project by Alexander Etz

A Life Well Lived, by those who maintain the Oliver Sacks, M.D. website.

Oliver Sacks, Neurologist Who Wrote About the Brain’s Quirks, Dies at 82, NY Times obituary

Oliver Sacks has left the building by Vaughan Bell

My Own Life, Oliver Sacks on Learning He Has Terminal Cancer



Sunday, August 09, 2015

Will machine learning create new diagnostic categories, or just refine the ones we already have?


How do we classify and diagnose mental disorders?

In the coming era of Precision Medicine, we'll all want customized treatments that “take into account individual differences in people’s genes, environments, and lifestyles.” To do this, we'll need precise diagnostic tools to identify the specific disease process in each individual. Although focused on cancer in the near-term, the longer-term goal of the White House initiative is to apply Precision Medicine to all areas of health. This presumably includes psychiatry, but the links between Precision Medicine, the BRAIN initiative, and RDoC seem a bit murky at present.1

But there's nothing a good infographic can't fix. Science recently published a Perspective piece by the NIMH Director and the chief architect of the Research Domain Criteria (RDoC) initiative (Insel & Cuthbert, 2015). There's Deconstruction involved, so what's not to like? 2


ILLUSTRATION: V. Altounian and C. Smith / SCIENCE


In this massively ambitious future scenario, the totality of one's genetic risk factors, brain activity, physiology, immune function, behavioral symptom profile, and life experience (social, cultural, environmental) will be deconstructed and stratified and recompiled into a neat little cohort. 3

The new categories will be data driven. The project might start by collecting colossal quantities of expensive data from millions of people, and continue by running classifiers on exceptionally powerful computers (powered by exceptionally bright scientists/engineers/coders) to extract meaningful patterns that can categorize the data with high levels of sensitivity and specificity. Perhaps I am filled with pathologically high levels of negative affect (Loss? Frustrative Nonreward?), but I find it hard to be optimistic about progress in the immediate future. You know, for a Precision Medicine treatment for me (and my pessimism)...

But seriously.

Yes, RDoC is ambitious (and has its share of naysayers). But what you may not know is that it's also trendy! Just the other day, an article in The Atlantic explained Why Depression Needs A New Definition (yes, RDoC) and even cited papers like Depression: The Shroud of Heterogeneity. 4

But let's just focus on the brain for now. For a long time, most neuroscientists have viewed mental disorders as brain disorders. [But that's not to say that environment, culture, experience, etc. play no role! cf. Footnote 3]. So our opening question becomes: How do we classify and diagnose brain disorders (make that neural circuit disorders) in a fashion consistent with RDoC principles? Is there really One Brain Network for All Mental Illness, for instance? (I didn't think so.)

Our colleagues in Asia and Australia and Europe and Canada may not have gotten the funding memo, however, and continue to run classifiers based on DSM categories. 5 In my previous post, I promised an unsystematic review of machine learning as applied to the classification of major depression. You can skip directly to the Appendix to see that.

Regardless of whether we use DSM-5 categories or RDoC matrix constructs, what we need are robust and reproducible biomarkers (see Table 1 above). A brief but excellent primer by Woo and Wager (2015) outlined the characteristics of a useful neuroimaging biomarker:
1. Criterion 1: diagnosticity

Good biomarkers should produce high diagnostic performance in classification or prediction. Diagnostic performance can be evaluated by sensitivity and specificity. Sensitivity concerns whether a model can correctly detect signal when signal exists. Effect size is a closely related concept; larger effect sizes are related to higher sensitivity. Specificity concerns whether the model produces negative results when there is no signal. Specificity can be evaluated relative to a range of specific alternative conditions that may be confusable with the condition of interest.

2. Criterion 2: interpretability

Brain-based biomarkers should be meaningful and interpretable in terms of neuroscience, including previous neuroimaging studies and converging evidence from multiple sources (eg, animal models, lesion studies, etc). One potential pitfall in developing neuroimaging biomarkers is that classification or prediction models can capitalize on confounding variables that are not neuroscientifically meaningful or interesting at all (eg, in-scanner head movement). Therefore, neuroimaging biomarkers should be evaluated and interpreted in the light of existing neuroscientific findings.

3. Criterion 3: deployability

Once the classification or outcome-prediction model has been developed as a neuroimaging biomarker, the model and the testing procedure should be precisely defined so that it can be prospectively applied to new data. Any flexibility in the testing procedures could introduce potential overoptimistic biases into test results, rendering them useless and potentially misleading. For example, “amygdala activity” cannot be a good neuroimaging biomarker without a precise definition of which “voxels” in the amygdala should be activated and the relative expected intensity of activity across each voxel. A well-defined model and standardized testing procedure are crucial aspects of turning neuroimaging results into a “research product,” a biomarker that can be shared and tested across laboratories.

4. Criterion 4: generalizability

Clinically useful neuroimaging biomarkers aim to provide predictions about new individuals. Therefore, they should be validated through prospective testing to prove that their performance is generalizable across different laboratories, different scanners or scanning procedures, different populations, and variants of testing conditions (eg, other types of chronic pain). Generalizability tests inherently require multistudy and multisite efforts. With a precisely defined model and standardized testing procedure (criterion 3), we can easily test the generalizability of biomarkers and define the boundary conditions under which they are valid and useful.
[Then the authors evaluated the performance of a structural MRI signature for IBS presented in an accompanying paper.]
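Criterion 1's sensitivity and specificity are easy to make concrete: they are just two ratios from the four cells of a confusion matrix. A minimal sketch in Python, using invented counts for illustration:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) from binary labels, where 1 = condition present."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented toy data: 10 patients (1) and 10 controls (0).
y_true = [1] * 10 + [0] * 10
# Classifier detects 8/10 patients and correctly clears 9/10 controls.
y_pred = [1] * 8 + [0] * 2 + [0] * 9 + [1] * 1

sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.8 0.9
```

Note that both numbers matter: a classifier that calls everyone "depressed" has perfect sensitivity and zero specificity.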

Should we try to improve on a neuroimaging biomarker (or “neural signature”) for classic disorders in which “Neuroanatomical diagnosis was correct in 80% and 72% of patients with major depression and schizophrenia, respectively...” (Koutsouleris et al., 2015)? That study used large cohorts and evaluated the trained biomarker against an independent validation database (i.e., it was more thorough than many other investigations). Or is the field better served by classifying when loss and agency and auditory perception go awry? What would individualized treatments for these constructs look like? Presumably, the goal is to develop better treatments, and to predict who will respond to a specific treatment(s).

OR should we adopt the surprisingly cynical view of some prominent investigators, who say:
...identifying a genuine neural signature would necessitate the discovery of a specific pattern of brain responses that possesses nearly perfect sensitivity and specificity for a given condition or other phenotype. At the present time, neuroscientists are not remotely close to pinpointing such a signature for any psychological disorder or trait...

If that's true, then we'll have an awfully hard time with our resting state fMRI classifier for neuro-nihilism.


Footnotes

1 Although NIMH Mad Libs does a bang-up job...

2 Derrida's Deconstruction and RDoC are diametrically opposed, as irony would have it.

3 Or maybe an n of 1...  I'm especially curious about how life experience will be incorporated into the mix. Perhaps the patient of the future will upload all the data recorded by their memory implants, as in The Entire History of You (an episode of Black Mirror).

4 The word “shroud” always makes everything sound so dire and deathly important... especially when used as a noun.

5 As do many research groups in the US. This is meant to be snarky, but not condescending to anyone who follows DSM-5 in their research.


References

Insel, T., & Cuthbert, B. (2015). Brain disorders? Precisely. Science, 348 (6234), 499-500 DOI: 10.1126/science.aab2358

Woo, C., & Wager, T. (2015). Neuroimaging-based biomarker discovery and validation. PAIN, 156 (8), 1379-1381 DOI: 10.1097/j.pain.0000000000000223



Appendix

Below are 34 references on MRI/fMRI applications of machine learning used to classify individuals with major depression (I excluded EEG/MEG for this particular unsystematic review). The search terms were combinations of "major depression" "machine learning" "support vector" "classifier".

Here's a very rough summary of methods:

Structural MRI: 1, 14, 22, 29, 31, 32

DTI: 6, 12, 18, 19

Resting State fMRI: 3, 5, 8, 9, 11, 16, 17, 21, 28, 33

fMRI while viewing different facial expressions: 2, 7, 10, 24, 26, 27, 34

comorbid panic: 13

verbal working memory: 25

guilt: 15 (see The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning)

Schizophrenia vs. Bipolar vs. Schizoaffective: 16

Psychotic Major Depression vs. Bipolar Disorder: 20

Schizophrenia vs. Major Depression: 23, 31

Unipolar vs. Bipolar Depression: 24, 32, 34

This last one is especially important, since an accurate diagnosis can avoid the potentially disastrous prescribing of antidepressants in bipolar depression.

Idea that may already be implemented somewhere: Individual labs or research groups could contribute to a support vector machine clearinghouse (e.g., at NITRC or OpenfMRI or GitHub) where everyone can upload the code for data processing streams and various learning/classification algorithms to try out on each other's data.

1.
Brain. 2012 May;135(Pt 5):1508-21. doi: 10.1093/brain/aws084.
Multi-centre diagnostic classification of individual structural neuroimaging scans from patients with major depressive disorder.
Mwangi B, Ebmeier KP, Matthews K, Steele JD.

2.
Bipolar Disord. 2012 Jun;14(4):451-60. doi: 10.1111/j.1399-5618.2012.01019.x.
Pattern recognition analyses of brain activation elicited by happy and neutral faces in unipolar and bipolar depression.
Mourão-Miranda J, Almeida JR, Hassel S, de Oliveira L, Versace A, Marquand AF, Sato JR, Brammer M, Phillips ML.

3.
PLoS One. 2012;7(8):e41282. doi: 10.1371/journal.pone.0041282. Epub 2012 Aug 20.
Changes in community structure of resting state functional connectivity in unipolar depression.
Lord A, Horn D, Breakspear M, Walter M.

5.
Neuroreport. 2012 Dec 5;23(17):1006-11. doi: 10.1097/WNR.0b013e32835a650c.
Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.
Guo H, Cao X, Liu Z, Li H, Chen J, Zhang K.

6.
PLoS One. 2012;7(9):e45972. doi: 10.1371/journal.pone.0045972. Epub 2012 Sep 26.
Increased cortical-limbic anatomical network connectivity in major depression revealed by diffusion tensor imaging.
Fang P, Zeng LL, Shen H, Wang L, Li B, Liu L, Hu D.

7.
PLoS One. 2013;8(4):e60121. doi: 10.1371/journal.pone.0060121. Epub 2013 Apr 1.
What does brain response to neutral faces tell us about major depression? Evidence from machine learning and fMRI.
Oliveira L, Ladouceur CD, Phillips ML, Brammer M, Mourao-Miranda J.

8.
Hum Brain Mapp. 2014 Apr;35(4):1630-41. doi: 10.1002/hbm.22278. Epub 2013 Apr 24.
Unsupervised classification of major depression using functional connectivity MRI.
Zeng LL, Shen H, Liu L, Hu D.

9.
Psychiatry Clin Neurosci. 2014 Feb;68(2):110-9. doi: 10.1111/pcn.12106. Epub 2013 Oct 31.
Aberrant functional connectivity for diagnosis of major depressive disorder: a discriminant analysis.

10.
Neuroimage. 2015 Jan 15;105:493-506. doi: 10.1016/j.neuroimage.2014.11.021. Epub 2014 Nov 15.
Sparse network-based models for patient classification using fMRI.
Rosa MJ, Portugal L, Hahn T, Fallgatter AJ, Garrido MI, Shawe-Taylor J, Mourao-Miranda J.

11.
Proc IEEE Int Symp Biomed Imaging. 2014 Apr;2014:246-249.
Elucidating brain connectivity networks in major depressive disorder using classification-based scoring.
Sacchet MD, Prasad G, Foland-Ross LC, Thompson PM, Gotlib IH.

12.
Front Psychiatry. 2015 Feb 18;6:21. doi: 10.3389/fpsyt.2015.00021. eCollection 2015.
Support vector machine classification of major depressive disorder using diffusion-weighted neuroimaging and graph theory.
Sacchet MD, Prasad G, Foland-Ross LC, Thompson PM, Gotlib IH.

13.
J Affect Disord. 2015 Sep 15;184:182-92. doi: 10.1016/j.jad.2015.05.052. Epub 2015 Jun 6.
Separating depressive comorbidity from panic disorder: A combined functional magnetic resonance imaging and machine learning approach.
Lueken U, Straube B, Yang Y, Hahn T, Beesdo-Baum K, Wittchen HU, Konrad C, Ströhle A, Wittmann A, Gerlach AL, Pfleiderer B, Arolt V, Kircher T.

14.
PLoS One. 2015 Jul 17;10(7):e0132958. doi: 10.1371/journal.pone.0132958. eCollection 2015.
Structural MRI-Based Predictions in Patients with Treatment-Refractory Depression (TRD).
Johnston BA, Steele JD, Tolomeo S, Christmas D, Matthews K.

15.
Psychiatry Res. 2015 Jul 5. pii: S0925-4927(15)30025-1. doi: 10.1016/j.pscychresns.2015.07.001. [Epub ahead of print]
Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression.
Sato JR, Moll J, Green S, Deakin JF, Thomaz CE, Zahn R.

16.
Neuroimage. 2015 Jul 24. pii: S1053-8119(15)00674-6. doi: 10.1016/j.neuroimage.2015.07.054. [Epub ahead of print]
A group ICA based framework for evaluating resting fMRI markers when disease categories are unclear: Application to schizophrenia, bipolar, and schizoaffective disorders.
Du Y, Pearlson GD, Liu J, Sui J, Yu Q, He H, Castro E, Calhoun VD.

17.
Neuroreport. 2015 Aug 19;26(12):675-80. doi: 10.1097/WNR.0000000000000407.
Predicting clinical responses in major depression using intrinsic functional connectivity.
Qin J, Shen H, Zeng LL, Jiang W, Liu L, Hu D.

18.
J Affect Disord. 2015 Jul 15;180:129-37. doi: 10.1016/j.jad.2015.03.059. Epub 2015 Apr 4.
Altered anatomical patterns of depression in relation to antidepressant treatment: Evidence from a pattern recognition analysis on the topological organization of brain networks.
Qin J, Wei M, Liu H, Chen J, Yan R, Yao Z, Lu Q.

19.
Magn Reson Imaging. 2014 Dec;32(10):1314-20. doi: 10.1016/j.mri.2014.08.037. Epub 2014 Aug 29.
Abnormal hubs of white matter networks in the frontal-parieto circuit contribute to depression discrimination via pattern classification.
Qin J, Wei M, Liu H, Chen J, Yan R, Hua L, Zhao K, Yao Z, Lu Q.

20.
Biomed Res Int. 2014;2014:706157. doi: 10.1155/2014/706157. Epub 2014 Jan 19.
Neuroanatomical classification in a population-based sample of psychotic major depression and bipolar I disorder with 1 year of diagnostic stability.
Serpa MH, Ou Y, Schaufelberger MS, Doshi J, Ferreira LK, Machado-Vieira R, Menezes PR, Scazufca M, Davatzikos C, Busatto GF, Zanetti MV.

21.
Psychiatry Res. 2013 Dec 30;214(3):306-12. doi: 10.1016/j.pscychresns.2013.09.008. Epub 2013 Oct 7.
Identifying major depressive disorder using Hurst exponent of resting-state brain networks.
Wei M, Qin J, Yan R, Li H, Yao Z, Lu Q.

22.
J Psychiatry Neurosci. 2014 Mar;39(2):78-86.
Characterization of major depressive disorder using a multiparametric classification approach based on high resolution structural images.
Qiu L, Huang X, Zhang J, Wang Y, Kuang W, Li J, Wang X, Wang L, Yang X, Lui S, Mechelli A, Gong Q.

23.
PLoS One. 2013 Jul 2;8(7):e68250. doi: 10.1371/journal.pone.0068250. Print 2013.
Convergent and divergent functional connectivity patterns in schizophrenia and depression.
Yu Y, Shen H, Zeng LL, Ma Q, Hu D.

24.
Eur Arch Psychiatry Clin Neurosci. 2013 Mar;263(2):119-31. doi: 10.1007/s00406-012-0329-4. Epub 2012 May 26.
Discriminating unipolar and bipolar depression by means of fMRI and pattern classification: a pilot study.
Grotegerd D, Suslow T, Bauer J, Ohrmann P, Arolt V, Stuhrmann A, Heindel W, Kugel H, Dannlowski U.

25.
Neuroreport. 2008 Oct 8;19(15):1507-11. doi: 10.1097/WNR.0b013e328310425e.
Neuroanatomy of verbal working memory as a diagnostic biomarker for depression.
Marquand AF, Mourão-Miranda J, Brammer MJ, Cleare AJ, Fu CH.

26.
Biol Psychiatry. 2008 Apr 1;63(7):656-62. Epub 2007 Oct 22.
Pattern classification of sad facial processing: toward the development of neurobiological markers in depression.
Fu CH, Mourao-Miranda J, Costafreda SG, Khanna A, Marquand AF, Williams SC, Brammer MJ.

27.
Neuroreport. 2009 May 6;20(7):637-41. doi: 10.1097/WNR.0b013e3283294159.
Neural correlates of sad faces predict clinical remission to cognitive behavioural therapy in depression.
Costafreda SG, Khanna A, Mourao-Miranda J, Fu CH.

28.
Magn Reson Med. 2009 Dec;62(6):1619-28. doi: 10.1002/mrm.22159.
Disease state prediction from resting state functional connectivity.
Craddock RC, Holtzheimer PE 3rd, Hu XP, Mayberg HS.

29.
Neuroimage. 2011 Apr 15;55(4):1497-503. doi: 10.1016/j.neuroimage.2010.11.079. Epub 2010 Dec 3.
Prognostic prediction of therapeutic response in depression using high-field MR imaging.
Gong Q, Wu Q, Scarpazza C, Lui S, Jia Z, Marquand A, Huang X, McGuire P, Mechelli A.

30.
Neuroimage. 2012 Jun;61(2):457-63. doi: 10.1016/j.neuroimage.2011.11.002. Epub 2011 Nov 7.
Diagnostic neuroimaging across diseases.
Klöppel S, Abdulkadir A, Jack CR Jr, Koutsouleris N, Mourão-Miranda J, Vemuri P.

31.
Brain. 2015 Jul;138(Pt 7):2059-73. doi: 10.1093/brain/awv111. Epub 2015 May 1.
Individualized differential diagnosis of schizophrenia and mood disorders using neuroanatomical biomarkers.
Koutsouleris N, Meisenzahl EM, Borgwardt S, Riecher-Rössler A, Frodl T, Kambeitz J, Köhler Y, Falkai P, Möller HJ, Reiser M, Davatzikos C.

32.
JAMA Psychiatry. 2014 Nov;71(11):1222-30. doi: 10.1001/jamapsychiatry.2014.1100.
Brain morphometric biomarkers distinguishing unipolar and bipolar depression. A voxel-based morphometry-pattern classification approach.
Redlich R, Almeida JJ, Grotegerd D, Opel N, Kugel H, Heindel W, Arolt V, Phillips ML, Dannlowski U.

33.
Brain Behav. 2013 Nov;3(6):637-48. doi: 10.1002/brb3.173. Epub 2013 Sep 22.
A reversal coarse-grained analysis with application to an altered functional circuit in depression.
Guo S, Yu Y, Zhang J, Feng J.

34.
Hum Brain Mapp. 2014 Jul;35(7):2995-3007. doi: 10.1002/hbm.22380. Epub 2013 Sep 13.
Amygdala excitability to subliminally presented emotional faces distinguishes unipolar and bipolar depression: an fMRI and pattern classification study.
Grotegerd D, Stuhrmann A, Kugel H, Schmidt S, Redlich R, Zwanzger P, Rauch AV, Heindel W, Zwitserlood P, Arolt V, Suslow T, Dannlowski U.


Saturday, August 01, 2015

The Idiosyncratic Side of Diagnosis by Brain Scan and Machine Learning


R2D3 recently had a fantastic Visual Introduction to Machine Learning, using the classification of homes in San Francisco vs. New York as their example. As they explain quite simply:
In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.
You should really head over there right now to view it, because it's very impressive.
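The core trick in the R2D3 walkthrough is splitting homes on a single feature like elevation. A one-rule "decision stump" captures the idea; the cutoff below is invented for illustration, not R2D3's actual split:

```python
# A decision stump: the simplest possible learned classifier.
# The 73 ft elevation cutoff is a made-up value for illustration.
def classify_home(elevation_ft, threshold=73):
    """Classify a home as SF or NY from elevation alone."""
    return "San Francisco" if elevation_ft > threshold else "New York"

print(classify_home(240))  # hilly lot -> San Francisco
print(classify_home(10))   # near sea level -> New York
```

A real decision tree stacks many such stumps, each chosen to best separate the training data.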


Computational neuroscience types are using machine learning algorithms to classify all sorts of brain states, and diagnose brain disorders, in humans. How accurate are these classifications? Do the studies all use separate training sets and test sets, as shown in the example above?

Let's say your fMRI measure is able to differentiate individuals with panic disorder (n=33) from those with panic disorder + depression (n=26) with 79% accuracy.1 Or with structural MRI scans you can distinguish 20 participants with treatment-refractory depression from 21 never-depressed individuals with 85% accuracy.2 Besides the issues outlined in the footnotes, the reality check is that the model must be able to predict group membership for a new (untrained) data set. And most studies don't seem to do this.

I was originally drawn to the topic by a 3-page article entitled, Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression (Sato et al., 2015). Wow! Really? How accurate? Which fMRI signature? Let's take a look.
  • machine learning algorithm = Maximum Entropy Linear Discriminant Analysis (MLDA)
  • accurately predicts = 78.3% (72.0% sensitivity and 85.7% specificity)
  • fMRI signature = guilt-selective anterior temporal functional connectivity changes (seems a bit overly specific and esoteric, no?)
  • vulnerability to major depression = 25 participants with remitted depression vs. 21 never-depressed participants
The authors used a standard leave-one-subject-out procedure, in which the classification is cross-validated iteratively by training the model on the sample minus one subject and then independently predicting that subject's group membership. But they did not test their fMRI signature in completely independent groups of participants.
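The leave-one-subject-out procedure is easy to sketch. Here is a toy version with invented one-dimensional "connectivity scores" and a simple nearest-class-mean classifier (the real study used MLDA on many voxels, so this is only the cross-validation scaffolding):

```python
# Leave-one-subject-out cross-validation with a nearest-centroid classifier.
# The data are invented toy numbers, not anything from the paper.
def nearest_centroid_predict(train, x):
    """train: list of (value, label). Predict the label whose class mean is closest to x."""
    means = {}
    for label in set(lab for _, lab in train):
        vals = [v for v, lab in train if lab == label]
        means[label] = sum(vals) / len(vals)
    return min(means, key=lambda lab: abs(x - means[lab]))

def leave_one_out_accuracy(data):
    correct = 0
    for i, (x, label) in enumerate(data):
        train = data[:i] + data[i + 1:]   # exclude one subject...
        correct += nearest_centroid_predict(train, x) == label  # ...then predict them
    return correct / len(data)

data = ([(v, "rMDD") for v in [0.9, 1.1, 1.0, 1.3]] +
        [(v, "control") for v in [0.1, 0.2, 0.0, 0.4]])
acc = leave_one_out_accuracy(data)
print(acc)  # well-separated toy groups -> 1.0
```

The catch the post describes: even a perfect leave-one-out score is computed within one sample, and says nothing by itself about how the model fares on a genuinely new cohort from another site or scanner.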

Nor did they try to compare individuals who are currently depressed to those who are currently remitted. That didn't matter, apparently, because the authors suggest the fMRI signature is a trait marker of vulnerability, not a state marker of current mood. But the classifier missed 28% of the remitted group, who did not have the “guilt-selective anterior temporal functional connectivity changes.”

What is that, you ask? This is a set of mini-regions (i.e., not too many voxels in each) functionally connected to a right superior anterior temporal lobe seed region of interest during a contrast of guilt vs. anger feelings (selected from a number of other possible emotions) for self or best friend, based on written imaginary scenarios like “Angela [self] does act stingily towards Rachel [friend]” and “Rachel does act stingily towards Angela” conducted outside the scanner (after the fMRI session is over). Got that?

You really need to read a bunch of other articles to understand what that means, because the current paper is less than 3 pages long. Did I say that already?


Modified from Fig. 1B (Sato et al., 2015). Weight vector maps highlighting voxels among the 1% most discriminative for remitted major depression vs. controls, including the subgenual cingulate cortex, both hippocampi, the right thalamus and the anterior insulae.


The patients were previously diagnosed according to DSM-IV-TR (which was current at the time), and in remission for at least 12 months. The study was conducted by investigators from Brazil and the UK, so they didn't have to worry about RDoC, i.e. “new ways of classifying mental disorders based on behavioral dimensions and neurobiological measures” (instead of DSM-5 criteria). A “guilt-proneness” behavioral construct, along with the “guilt-selective” network of idiosyncratic brain regions, might be more in line with RDoC than past major depression diagnosis.

Could these results possibly generalize to other populations of remitted and never-depressed individuals? Well, the fMRI signature seems a bit specialized (and convoluted). And overfitting is another likely problem here...

In their next post, R2D3 will discuss overfitting:
Ideally, the [decision] tree should perform similarly on both known and unknown data.

So this one is less than ideal. [NOTE: the one that's 90% in the top figure]

These errors are due to overfitting. Our model has learned to treat every detail in the training data as important, even details that turned out to be irrelevant.
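Overfitting is easy to demonstrate with a classifier that memorizes its training set. A 1-nearest-neighbor model always scores 100% on the data it was trained on, but a single noisy training point drags down its accuracy on data it has never seen (all numbers below are invented):

```python
# 1-nearest-neighbor: predict the label of the closest training point.
# It memorizes every detail of the training data, including the noise.
def knn1_predict(train, x):
    return min(train, key=lambda pair: abs(x - pair[0]))[1]

def accuracy(model_data, test_data):
    return sum(knn1_predict(model_data, x) == y for x, y in test_data) / len(test_data)

train = [(0.0, "A"), (1.0, "A"), (2.0, "A"), (9.0, "A"),   # 9.0 is a noisy "A"
         (6.0, "B"), (7.0, "B"), (8.0, "B"), (10.0, "B")]
test  = [(0.4, "A"), (1.5, "A"), (6.5, "B"), (9.2, "B")]

print(accuracy(train, train))  # 1.0 -- every point is its own nearest neighbor
print(accuracy(train, test))   # 0.75 -- the noisy point misclassifies 9.2
```

The gap between training and test accuracy is exactly the "less than ideal" behavior R2D3 describes.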

In my next post, I'll present an unsystematic review of machine learning as applied to the classification of major depression. It's notable that Sato et al. (2015) used the word “classification” instead of “diagnosis.”3


ADDENDUM (Aug 3 2015): In the comments, I've presented more specific critiques of: (1) the leave-one-out procedure and (2) how the biomarker is temporally disconnected from when the participants identify their feeling as 'guilt' or 'anger' or etc. (and why shame is more closely related to depression than guilt).


Footnotes

1 The sensitivity (true positive rate) was 73% and the specificity (true negative rate) was 85%. After correcting for confounding variables, these numbers were 77% and 70%, respectively.

2 The abstract concludes this is a “high degree of accuracy.” Not to pick on these particular authors (this is a typical study), but Dr. Dorothy Bishop explains why this is not very helpful for screening or diagnostic purposes. And what you'd really want to do here is to discriminate between treatment-resistant vs. treatment-responsive depression. If an individual does not respond to standard treatments, it would be highly beneficial to avoid a long futile period of medication trials.

3 In case you're wondering, the title of this post was based on The Dark Side of Diagnosis by Brain Scan, which is about Dr. Daniel Amen. The work of the investigators discussed here is in no way, shape, or form related to any of the issues discussed in that post.


Reference

Sato, J., Moll, J., Green, S., Deakin, J., Thomaz, C., & Zahn, R. (2015). Machine learning algorithm accurately detects fMRI signature of vulnerability to major depression. Psychiatry Research: Neuroimaging. DOI: 10.1016/j.pscychresns.2015.07.001


Sunday, July 19, 2015

Scary Brains and the Garden of Earthly Deep Dreams


In case you've been living under a rock the past few weeks, Google's foray into artificial neural networks has yielded hundreds of thousands of phantasmagoric images. The company has an obvious interest in image classification, and here's how they explain the DeepDream process in their Research Blog:
Inceptionism: Going Deeper into Neural Networks

. . .
We train an artificial neural network by showing it millions of training examples [of dogs and eyes and pagodas, let's say] and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.

. . .
One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana... By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
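The "turn the network upside down" trick described above is just gradient ascent on the input image rather than on the weights. Here's a toy sketch of the idea, using a stand-in linear "network" in NumPy (not Google's actual convnet), where the smoothing step is a crude version of the natural-image prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy stand-in for a trained network: one linear layer mapping an
# 8x8 "image" (64 pixels) to 10 class scores. Real DeepDream uses a
# deep convnet, but the inverted-ascent logic is the same.
W = rng.normal(size=(10, 64))
TARGET = 3  # the class we want the image to turn into ("Banana")

def class_score(img):
    return (W @ img.ravel())[TARGET]

def input_gradient(img):
    # d(score)/d(pixels); for a linear layer it's just the weight row
    return W[TARGET].reshape(8, 8)

img = 0.01 * rng.normal(size=(8, 8))   # start from faint random noise
score_before = class_score(img)

for _ in range(100):
    img = img + 0.1 * input_gradient(img)  # gradient ascent on the input
    # crude stand-in for the natural-image prior: pull each pixel
    # toward the average of its four neighbors (wrap-around edges)
    smooth = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                     np.roll(img, 1, 1) + np.roll(img, -1, 1))
    img = 0.7 * img + 0.3 * smooth

print(f"target-class score: {score_before:.2f} -> {class_score(img):.2f}")
```

After the loop, the image scores far higher for the target class than the initial noise did; in the real version, the correlated-pixel constraint is what nudges the result toward banana-like (or dog-like, or pagoda-like) textures instead of adversarial static.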

After Google released the deepdream code on GitHub, Psychic VR Lab set up a Deep Dream web interface, which currently has over 300,000 groovy and scary images.

I've taken an interest in the hallucinogenic and distorted brain images, including the one above. I can't properly credit the human input interface (which wasn't me), but I found the image after submitting a file of my own in the early stages of http://psychic-vr-lab.com/deepdream/. I can't find the url hosting my image, but I came across the frightening brain here, along with the original.





I've included a few more for your viewing pleasure. Brain Decoder posted a dreamy mouse hippocampus Brainbow.




Here's one by HofmannsBicycle.



And a fun fave courtesy of @rogierK and @katestorrs. This one is cartoonish instead of menacing.



Rogier said: "According to #deepdream the homunculus in our brains is a terrifying bird-dog hybrid."

Aw, I thought it was kind of cute. More small birds, fewer staring judgmental eyeballs.


And the grand finale isn't a brain at all. But who doesn't want to see the dreamified version of The Garden of Earthly Delights, by Hieronymus Bosch? Here it is, via @aut0mata. Click on image for a larger view.
 




When nothing's right, just close your eyes
Close your eyes and you're gone

-Beck, Dreams




ADDENDUM (July 21 2015): It's worth reading Deepdream: Avoiding Kitsch by Josh Nimoy, which confirms the training set was filled with dogs, birds, and pagodas. Nimoy also shows deepdream images done with neural networks trained on other datasets.

For example, the image below was generated by a neural network trained to do gender classification.



Tuesday, July 14, 2015

Can Tetris Reduce Intrusive Memories of a Trauma Film?



For some inexplicable reason, you watched the torture gore horror film Hostel over the weekend. On Monday, you're having trouble concentrating at work. Images of severed limbs and bludgeoned heads keep intruding on your attempts to code or write a paper. So you decide to read about the making of Hostel. You end up seeing pictures of the most horrifying scenes from the movie. It's all way too much to simply shake off, so you decide to play Tetris.

But a funny thing happens. The unwelcome images start to become less frequent. By Friday, the gory mental snapshots are no longer forcing their way into your mind's eye. The ugly flashbacks are gone.

Meanwhile, your partner in crime is having similar images of eye gouging pop into his head. Except he didn't review the torturous highlights on Monday, and he didn't play Tetris. He continues to have involuntary intrusions of Hostel images once or twice a day for the rest of the week.

This is basically the premise (and outcome) of a new paper in Psychological Science by Ella James and colleagues at Cambridge and Oxford. It builds on earlier work suggesting that healthy participants who play Tetris shortly after watching a “trauma” film will have fewer intrusive memories (Holmes et al, 2009, 2010). This is based on the idea that involuntary “flashbacks” in real post-traumatic stress disorder (PTSD) are visual in nature, and require visuospatial processing resources to generate and maintain. Playing Tetris will interfere with consolidation and subsequent intrusion of the images, at least in an experimental setting (Holmes et al, 2009):
...Trauma flashbacks are sensory-perceptual, visuospatial mental images. Visuospatial cognitive tasks selectively compete for resources required to generate mental images. Thus, a visuospatial computer game (e.g. "Tetris") will interfere with flashbacks. Visuospatial tasks post-trauma, performed within the time window for memory consolidation [6 hrs], will reduce subsequent flashbacks. We predicted that playing "Tetris" half an hour after viewing trauma would reduce flashback frequency over 1-week.

The timing is key here. In the earlier experiments, Tetris play commenced 30 min after the trauma film, during the 6-hour window when memories for the event are stabilized and consolidated. Newly formed memories are thought to be malleable during this time.

However, if one wants to extrapolate directly to clinical application in cases of real life trauma exposure (and this is problematic, as we'll see later), it's pretty impractical to play Tetris right after an earthquake, auto accident, mortar attack, or sexual assault. So the new paper relies on the process of reconsolidation, when an act of remembering will place the memory in a labile state once again, so it can be modified (James et al., 2015).




The procedure was as follows: 52 participants came into the lab on Day 0 and completed questionnaires about depression, anxiety, and previous trauma exposure. Then they watched a 12 min trauma film that included 11 scenes of actual death (or threatened death) or serious injury (James et al., 2015):
...the film functioned as an experimental analogue of viewing a traumatic event in real life. Scenes contained different types of context; examples include a young girl hit by a car with blood dripping out of her ear, a man drowning in the sea, and a van hitting a teenage boy while he was using his mobile phone crossing the road. This film footage has been used in previous studies to evoke intrusive memories...

After the film, they rated “how sad, hopeless, depressed, fearful, horrified, and anxious they felt right at this very moment” and “how distressing did you find the film you just watched?” They were instructed to keep a diary of intrusive images and come back to the lab 24 hours later.

On Day 1, participants were randomized to either the experimental group (memory reactivation + Tetris) or the control group (neither manipulation). The experimental group viewed 11 still images from the film that served as reminder cues to initiate reconsolidation. This was followed by a 10 min filler task and then 12 min of playing Tetris (the Marathon mode shown above). The game instructions aimed to maximize the amount of mental rotation the subjects would use. The controls did the filler task and then sat quietly for 12 min.

Both groups kept a diary of intrusions for the next week, and then returned on Day 7. All participants performed the Intrusion Provocation Task (IPT). Eleven blurred pictures from the film were shown, and subjects indicated when any intrusive mental images were provoked. Finally, the participants completed a few more questionnaires, as well as a recognition task that tested their verbal (T/F written statements) and visual (Y/N for scenes) memories of the film.1

The results indicated that the Reactivation + Tetris manipulation was successful in decreasing the number of visual memory intrusions in both the 7-day diary and the IPT (as shown below).


modified from Fig. 1 (James et al., 2015). Asterisks indicate a significant difference between groups (**p < .001). Error bars represent +1 SEM.


Cool little snowman plots (actually frequency scatter plots) illustrate the time course of intrusive memories in the two groups.


modified from Fig. 2 (James et al., 2015). Frequency scatter plots showing the time course of intrusive memories reported in the diary daily from Day 0 (prior to intervention) to Day 7. The intervention was on Day 1, and the red arrow is 24 hrs later (when the intervention starts working). The solid lines are the results of a generalized additive model. The size of the bubbles represents the number of participants who reported the indicated number of intrusive memories on that particular day.


But now, you might be asking yourself whether the critical element was Tetris or the reconsolidation update procedure (or both), since the control group did neither. Not to worry. Experiment 2 tried to disentangle this by recruiting four groups of participants (n=18 in each): the original two groups plus two new ones, Reactivation only and Tetris only.

And the results from Exp. 2 demonstrated that both were needed.


modified from Fig. 4 (James et al., 2015). Asterisks indicate that results for the Reactivation + Tetris group were significantly different from results for the other three groups (*p < .01). Error bars represent +1 SEM. The No-Task Control and Tetris Only groups did not differ for diary intrusions (n.s.).


The authors' interpretation:
Overall, the results of the present experiments indicate that the frequency of intrusive memories induced by experimental trauma can be reduced by disrupting reconsolidation via a competing cognitive-task procedure, even for established memories (here, events viewed 24 hours previously). ... Critically, neither playing Tetris alone (a nonreactivation control condition) nor the control of memory reactivation alone was sufficient to reduce intrusions... Rather, their combination is required, which supports a reconsolidation-theory account. We suggest that intrusive-memory reduction is due to engaging in a visuospatial task within the window of memory reconsolidation, which interferes with intrusive image reconsolidation (via competition for shared resources).

Surprisingly (perhaps), I don't have anything negative to say about the study. It was carefully conducted and interpreted with restraint. They don't overextrapolate to PTSD. They don't use the word “flashback” to describe the memory phenomenon. And they repeatedly point out that it's “experimental trauma.” I actually considered reviving The Neurocomplimenter for this post, but that would be going too far...

Compare this flattering post with one I wrote in 2010, about a related study by the same authors (Holmes et al., 2010). That paper certainly had a modest title: Key Steps in Developing a Cognitive Vaccine against Traumatic Flashbacks: Visuospatial Tetris versus Verbal Pub Quiz.

Cognitive vaccine. Traumatic. Flashbacks. Twelve mentions of PTSD. This led to ridiculous headlines like Doctors Prescribing 'Tetris Therapy'.

Here, let me fix that for you:

Tetris Helps Prevent Unpleasant Memories of Gory Film in Happy People

My problem wasn't with the actual study, but with the way the authors hyped the results and exaggerated their clinical significance. So I'm pleased to see a more restrained approach here.


The media coverage for the new paper was generally more accurate too:

Can playing Tetris reduce intrusive memories? (Medical News Today)

Moving tiles as an unintrusive way to handle flashbacks (Medical Xpress)

Intrusiveness of Old Emotional Memories Can Be Reduced by Computer Game Play Procedure (APS)

But we can always count on the Daily Mail for a good time: Could playing TETRIS banish bad memories? Retro Nintendo game 'reduces the risk of post-traumatic stress disorder' 2

Gizmodo is a bit hyperbolic as well: Tetris Blocks Flashbacks of Traumatic Events Lodged in the Brain [“lodged in the brain” for all of 24 hrs]


Questions for Now and the Future

Is there really nothing wrong with this study?? Being The Neurocritic, I always have to find something to criticize... and here I had to dig through the Supplemental Material to find issues that may affect the translational potential of Tetris-based interventions.

  • The Intrusion subscale of the Impact of Event Scale (IES-R) was used as an exploratory measure, and subject ratings were between 0 and 1.
The Intrusion subscale consists of 8 questions like “I found myself acting or feeling like I was back at that time” and “I had dreams about it” that are rated from 0 (not at all) to 4 (extremely). The IES-R is given to people after distressing, traumatic life events. These individuals may have actual PTSD symptoms like flashbacks and nightmares.

In Exp. 1, the Reactivation + Tetris group (M = .68) had significantly lower scores (p = .016) on Day 7 than the control group (M = 1.01). BUT this is not terribly meaningful, due to a floor effect. And in Exp. 2 there was no difference between the four groups, with scores ranging from 0.61 to 0.81.3
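To make the floor effect concrete, here's the back-of-the-envelope conversion (my arithmetic, assuming these are mean-per-item scores on the 8-item, 0-4 Intrusion subscale; see footnote 3 for the ambiguity):

```python
# IES-R Intrusion subscale: 8 items rated 0-4, so the maximum total is 32.
N_ITEMS, MAX_PER_ITEM = 8, 4
scale_max = N_ITEMS * MAX_PER_ITEM  # 32

# Reported Day 7 group means, converted from mean-per-item to implied totals
implied_totals = {label: round(mean_item * N_ITEMS, 2)
                  for label, mean_item in [("Reactivation + Tetris", 0.68),
                                           ("Control", 1.01)]}
print(implied_totals)  # totals in the 5-8 range, far below the maximum of 32
```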

As an overall comment, watching a film of a girl getting hit by a car is not the same as witnessing it in person (obviously). But this real-life scenario may be the most amenable to Tetris, because the witness was not in the accident themselves and did not know the girl (both of which would heighten the emotional intensity and vividness of the trauma, elements that transcend visual imagery).

It's true that in PTSD, involuntary intrusions of trauma memories (i.e., flashbacks) have a distinctly sensory quality to them (Ehlers et al., 2004). Visual images are most common, but bodily sensations, sounds, and smells can be incorporated into a multimodal flashback. Or they could occur on their own.

  • The effectiveness of the Tetris intervention was related to game score and self-rated task difficulty.
This means that people who were better at playing Tetris showed a greater decrease in intrusive memories. This result wasn't covered in the main paper, but it makes you wonder about cause and effect. Is it because the game was more enjoyable for them? Or could it be that their superior visual-spatial abilities (or greater game experience) resulted in greater interference, perhaps by using up more processing resources? That's always a dicey argument, as you could also predict that better, more efficient game play uses fewer visual-spatial resources.

An interesting recent paper found that individuals with PTSD (who presumably experience intrusive visual memories) have worse allocentric spatial processing abilities than controls (Smith et al., 2015). This means they have problems representing the locations of environmental features relative to each other (instead of relative to the self). So are weak spatial processing and spatial memory abilities caused by the trauma, or are weak spatial abilities a vulnerability factor for developing PTSD?

  • As noted by the authors, the modality-specificity of the intervention needs to be assessed.
Their previous paper showed that the effect was indeed specific to Tetris. A verbally based video game (Pub Quiz) actually increased the frequency of intrusive images (Holmes et al., 2010).

It would be interesting to disentangle the interfering elements of Tetris even further. Would any old mental rotation task do the trick? How about passive viewing of Tetris blocks, or is active game play necessary? Would a visuospatial n-back working memory task work? It wouldn't be as fun, but it obviously uses up visual working memory processing resources. What about Asteroids or Pac-Man or...? 4

This body of work raises a number of interesting questions about the nature of intrusive visual memories, traumatic and non-traumatic alike. Do avid players of action video games (or Tetris) have fewer intrusive memories of past trauma or trauma-analogues in everyday life? I'm not sure this is likely, but you could find out pretty quickly on Amazon Mechanical Turk or one of its alternatives.

There are also many hurdles to surmount before Doctors Prescribe 'Tetris Therapy'. For instance, what does it mean to have the number of weekly Hostel intrusions drop from five to two? How would that scale to an actual trauma flashback, which may involve a fear or panic response?

The authors conclude the paper by briefly addressing these points:
A critical next step is to investigate whether findings extend to reducing the psychological impact of real-world emotional events and media. Conversely, could computer gaming be affecting intrusions of everyday events?

A number of different research avenues await these investigators (and other interested parties). And — wait for it — a clinical trial of Tetris for flashback reduction has already been completed by the investigators at Oxford and Cambridge!

A Simple Cognitive Task to Reduce the Build-Up of Flashbacks After a Road Traffic Accident (SCARTA)

Holmes and colleagues took the consolidation window very seriously: participants played Tetris in the emergency room within 6 hours of experiencing or witnessing an accident. I'll be very curious to see how this turns out...


Footnotes

1 Interestingly, voluntary retrieval of visual and verbal memories was not affected by the manipulation, highlighting the uniqueness of flashback-like phenomena.

2 It does no such thing. But they did embed a video of Dr. Tom Stafford explaining why Tetris is so compelling...

3 The maximum total score on the IES-R is 32. The mean total score in a group of car accident survivors was 17; in Croatian war veterans it was 25. At first I assumed the authors reported the total score out of 32, rather than the mean score per item. I could be very wrong, however. By way of comparison, the mean item score in female survivors of intimate partner violence was 2.26. Either way, the impact of the trauma film was pretty low in this study, as you might expect.

4 OK, now I'm getting ridiculous. I'm also leaving aside modern first-person shooter games as potentially too traumatic and triggering.


References

Ehlers A, Hackmann A, Michael T. (2004). Intrusive re-experiencing in post-traumatic stress disorder: phenomenology, theory, and therapy. Memory 12(4):403-15.

Holmes EA, James EL, Coode-Bate T, Deeprose C. (2009). Can playing the computer game "Tetris" reduce the build-up of flashbacks for trauma? A proposal from cognitive science. PLoS One 4(1):e4153.

Holmes, E., James, E., Kilford, E., & Deeprose, C. (2010). Key Steps in Developing a Cognitive Vaccine against Traumatic Flashbacks: Visuospatial Tetris versus Verbal Pub Quiz. PLoS ONE, 5 (11) DOI: 10.1371/journal.pone.0013706

James, E., Bonsall, M., Hoppitt, L., Tunbridge, E., Geddes, J., Milton, A., & Holmes, E. (2015). Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms. Psychological Science DOI: 10.1177/0956797615583071

Smith KV, Burgess N, Brewin CR, King JA. (2015). Impaired allocentric spatial processing in posttraumatic stress disorder. Neurobiol Learn Mem. 119:69-76.

