Wednesday, April 1, 2009

Verging on Superblindsight?

Blindsight occurs when people, due to cortical damage, have a blind region in their visual field but, with some prodding, retain the capacity to guess better than chance whether a stimulus has been presented to that region. Apparently, at least one person with blindsight who appears to have no awareness of visual perception can nevertheless navigate a hallway littered with obstacles.

Recent research led by Krystel Huxlin shows that people with blindsight can learn to detect a variety of different types of stimuli presented to their blind region (scotoma). Reuters offers a brief write-up of Huxlin et al's results. I've posted the abstract from Huxlin et al's paper for some additional details.

Krystel Huxlin et al. (2009) "Perceptual Relearning of Complex Visual Motion after V1 Damage in Humans." The Journal of Neuroscience 29(13): 3981-3991.

Damage to the adult, primary visual cortex (V1) causes severe visual impairment that was previously thought to be permanent, yet several visual pathways survive V1 damage, mediating residual, often unconscious functions known as "blindsight." Because some of these pathways normally mediate complex visual motion perception, we asked whether specific training in the blind field could improve not just simple but also complex visual motion discriminations in humans with long-standing V1 damage. Global direction discrimination training was administered to the blind field of five adults with unilateral cortical blindness. Training returned direction integration thresholds to normal at the trained locations. Although retinotopically localized to trained locations, training effects transferred to multiple stimulus and task conditions, improving the detection of luminance increments, contrast sensitivity for drifting gratings, and the extraction of motion signal from noise. Thus, perceptual relearning of complex visual motion processing is possible without an intact V1 but only when specific training is administered in the blind field. These findings indicate a much greater capacity for adult visual plasticity after V1 damage than previously thought. Most likely, basic mechanisms of visual learning must operate quite effectively in extrastriate visual cortex, providing new hope and direction for the development of principled rehabilitation strategies to treat visual deficits resulting from permanent visual cortical damage.

Tuesday, March 24, 2009

Erasing a Memory

Researchers in Sheena Josselyn's lab report that a memory trace for a fearful experience can be selectively deleted in mice. Are we on the verge of localizing specific memories in the brain?

The full article can be found in JH Han et al. (2009) "Selective erasure of a fear memory." Science 323(5920): 1492-6. Here is the abstract:

Memories are thought to be encoded by sparsely distributed groups of neurons. However, identifying the precise neurons supporting a given memory (the memory trace) has been a long-standing challenge. We have shown previously that lateral amygdala (LA) neurons with increased cyclic adenosine monophosphate response element-binding protein (CREB) are preferentially activated by fear memory expression, which suggests that they are selectively recruited into the memory trace. We used an inducible diphtheria-toxin strategy to specifically ablate these neurons. Selectively deleting neurons overexpressing CREB (but not a similar portion of random LA neurons) after learning blocked expression of that fear memory. The resulting memory loss was robust and persistent, which suggests that the memory was permanently erased. These results established a causal link between a specific neuronal subpopulation and memory expression, thereby identifying critical neurons within the memory trace.

Sunday, March 22, 2009

Engineering the Next Revolution in Neuroscience

Engineering the Next Revolution in Neuroscience outlines an empirical framework for optimizing discovery in neuroscience. Drawing on illustrations from the molecular and cellular neuroscience of cognition, the framework is rooted in the everyday practice of experimental neuroscience. Toward the optimization of neuroscience, we offer concrete, detailed proposals for studying progress in neuroscience research.

The book is co-authored by Alcino Silva, John Bickle and myself.

Sunday, March 15, 2009

Timeline: Molecular and Cellular Neuroscience of Learning and Memory

Here is an evolving timeline of the history of the molecular and cellular neuroscience of memory. I say it's evolving because it is ridiculously incomplete and I intend to update it quite a bit. If there are any inaccuracies or important omissions, let me know. I've included a few important developments in molecular genetics outside of neuroscience because they made possible later critical research in neurogenetics.

1913 Sturtevant discovers linear order of genes
1920 Sturtevant publishes series of articles entitled "Genetic Studies On Drosophila simulans"
1926 Hermann Joseph Muller introduces X-ray mutagenesis
1950 Katz & Halstead hypothesize that memory traces depend on protein synthesis
1957 Scoville & Milner publish on HM
1960 Curtis & Watkins discover glutamate is a major brain neurotransmitter
1963 Flexner shows memory in mice depends on protein synthesis
1968 Discovery of PKA by Walsh & Krebs
1971 John O'Keefe discovers place cells
1973 Bliss and Lomo discover LTP
1973 Cohen & Boyer introduce a method for creating recombinant plasmids
1974 Jaenisch creates first transgenic mouse using retrovirus
1978 Dunwiddie & Lynch showed LTP depends on extracellular Ca2+
1979 Evans and Watkins discover AMPA receptors using quisqualate
1979 Dunwiddie & Lynch show that blocking extracellular Ca2+ blocks LTP but leaves synaptic transmission, facilitation, and PTP intact
1980 Baudry & Lynch first propose receptor unmasking theory of LTP
1982 Morris shows watermaze performance is hippocampus-dependent
1982 Turner, Baimbridge and Miller showed a transient increase of extracellular Ca2+ is sufficient to induce an LTP-like response
1983 Collingridge finds glutamate acts on NMDA receptors in the hippocampus
1983 Lynch, using EGTA, shows that hippocampal LTP depends on intracellular Ca2+
1983 Nairn & Greengard discover CaMKII and that synapsin is one of its substrates
1984 Davis & Squire publish influential review "Protein Synthesis and Memory"
1985 Lisman gives theoretical discussion of how an autophosphorylating kinase could serve as a LTM switch
1986 Morris shows blocking NMDA receptor blocks LTP & spatial learning
1986 Montminy showed cAMP regulates somatostatin expression
1987 Montminy introduces CREB as a regulator of somatostatin transcription
1988 Malenka & Nicoll discover second messenger role of Ca2+ in triggering LTP
1988 Yamamoto shows that CREB stimulates cAMP-responsive transcription
1989 Gonzalez & Montminy show that cAMP stimulates somatostatin transcription via CREB phosphorylation
1989 Malenka & Nicoll showed that LTP depends on CaMKII phosphorylation
1991 Sheng, Thompson & Greenberg suggest that CREB is regulated by CaMKII (turns out false)
1992 Silva shows that a null mutation for CaMKII disrupts LTP and spatial learning, the first knockout study in the neuroscience of learning and memory
1993 Bliss & Collingridge outline their synaptic model of hippocampal-dependent memory, providing roles for both NMDARs & AMPARs
1994 Bourtchuladze shows LTM but not STM affected in CREB mutants
1995 Bartsch shows that CREB can facilitate synaptic growth in Aplysia
1995 Bannerman & Morris upstairs/downstairs experiment
1995 Lledo, Malenka & Nicoll show that CaMKII is sufficient to induce LTP
1995 Isaac, Nicoll & Malenka provide evidence for silent synapses lacking functional AMPARs
1996 Mayford & Kandel introduce CaMKII transgenics
1996 McHugh & Tonegawa show impaired place fields in NMDAR1 knockouts
1996 Rotenberg, Mayford & Kandel show mice expressing activated CaMKII lack low frequency LTP and do not form stable place fields in CA1

Taxonomies of Experiment III: Silva, Bickle and Landreth

A third taxonomy of experiment can be derived from an article by Alcino Silva (UCLA) published in Journal of Physiology - Paris 101 (2007) 203–213 and work that Silva and I are doing along with John Bickle, who is at the University of Cincinnati. (Bickle and Silva have a related article that will soon be published in the Oxford Handbook of Philosophy and Neuroscience. Bickle is the editor of that volume.) This taxonomy is a work in progress.

The proposed taxonomy of experiment covers some of the same considerations that Craver and Sweatt considered. But it holds that there are 3 broad classes of experiment that are distinguished by their goals. The goals are: 1) description of phenomena, 2) assessment of causal relations among phenomena, and 3) development of tools to facilitate 1 and 2. Let's call experiments of class 1 Descriptive Experiments, those of class 2 Connective Experiments, and those of class 3 Validation Experiments.

Descriptive experiments focus on the dissection and description of phenomena without regard for the evaluation of causal hypotheses, per se. Causal considerations will of course affect the interpretations of one's measurements in these experiments, e.g. in the use of an imaging technique. But the goal of these experiments is not to assess the causal relations among the phenomena that constitute the subject matter. For example, one can dissect the hippocampus and describe its parts without testing hypotheses about the interactions of those parts.

Connective Experiments attempt to determine whether states of phenomena depend on each other. These assessments are made on the basis of manipulations (interventions) and measurements of the phenomena of interest. There are 3 forms of connective experiment: 1) positive manipulations, which increase the value of an independent variable; 2) negative manipulations, which decrease the value of an independent variable; and 3) neutral measurements, which measure the correlation between an independent and a dependent variable under normal test conditions (roughly equivalent to Craver's activation experiments).
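The three forms of connective experiment can be made concrete with a toy simulated system. The sketch below is my own illustration, not anything from the paper: a noisy linear system stands in for the phenomena of interest, and the three forms are run against it.

```python
import random

random.seed(0)  # deterministic for the example

# Toy system: a dependent variable y that responds to an independent
# variable x, plus measurement noise. Purely illustrative.
def system(x):
    return 2.0 * x + random.gauss(0, 0.1)

baseline = system(1.0)

# 1) Positive manipulation: increase the independent variable.
assert system(2.0) > baseline

# 2) Negative manipulation: decrease the independent variable.
assert system(0.0) < baseline

# 3) Neutral measurement: observe x and y under normal conditions
#    and estimate their covariation (here, via the regression slope).
xs = [random.uniform(0.0, 2.0) for _ in range(200)]
ys = [system(x) for x in xs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
assert abs(slope - 2.0) < 0.2  # y covaries with x, as hypothesized
```

The point of the sketch is only that the first two forms intervene on the independent variable while the third merely observes both variables, which is why neutral measurements alone cannot establish the direction of dependence.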

Validation Experiments validate the use of a tool, demonstrating that it is a reliable means of manipulating or measuring phenomena of interest. For example, the demonstration that knockout mice can be used to reveal the role a protein (e.g. CaMKII) plays in both spatial learning and long-term potentiation validated the use of knockouts in the neuroscience of learning and memory. These experiments did not invent the knockout technique of course, but they did adapt a tool for use in neuroscience and led to a swarm of innovative transgenic approaches.

These forms of experiment are not entirely distinct. Validation experiments draw more attention when they simultaneously introduce a tool and reveal undiscovered phenomena or undiscovered causal dependencies. Descriptive experiments are often performed in such a way as to reveal causal information, e.g. that glutamate receptors can be found in pyramidal cells. The three different goals of experiment are mutually dependent, but any one of them can be performed with little regard for the others.

Monday, March 9, 2009

Taxonomies of Experiment II: Carl Craver

In his book Explaining the Brain (2007), Carl Craver argues that there are 3 basic kinds of experiment in neuroscience: interference experiments, stimulation experiments, and activation experiments. The first two kinds of experiment are bottom-up, involving direct interventions on the components of neural mechanisms. The third kind of experiment is top-down. According to Craver, these three forms of experiment are used to help neuroscientists discover neural mechanisms. Neural mechanisms are composed of causal processes whose joint function explains a target phenomenon. For example, the mechanism of the action potential is composed of causal processes involving parts such as ion gradients and ion channels. Finding explanations for phenomena in neuroscience involves determining which processes comprise the mechanism of the phenomenon. The three kinds of experiment in Craver's taxonomy make possible those determinations.

Interference and stimulation experiments are bottom-up in the sense that they are attempts to alter the state of a phenomenon to be explained (explanandum) by interfering with component processes of its mechanism. Lesion studies are the paradigmatic instance of interference experiments. Other instances might include transcranial magnetic stimulation (TMS), genetic knockout, and receptor blockers.

Stimulation experiments are also bottom-up experiments. As the name suggests, these combine the stimulation of mechanism components with the measurement of a dependent variable. Here, microstimulation studies are the paradigm case.

Activation experiments are top-down experiments. That is to say, they involve interventions on a target phenomenon by going through "the normal causal pathway" that affects that target. To illustrate what he means by an activation experiment, Craver writes:

"There are several common varieties of activation experiment at all levels in neuroscience. In PET and fMRI studies, one activates a cognitive system by engaging the experimental subject in some task while monitoring the brain for markers of activity, such as blood flow or changes in oxygenation... In single- and multi-unit recording experiments, one engages the subject in a task while recording the electrical activity in neurons. In other studies, researchers monitor the production of proteins, or the activation of immediate early genes such as c-fos and c-jun. The experiments leading up to Hodgkin and Huxley’s model of the action potential involved generating action potentials and monitoring single ionic currents while the neuron spiked..."

(Craver 2007, 151)

Compare these categories of experiment with David Sweatt's system and notice that Craver does not include the "determine" class of experiment that Sweatt offers. Nor does Sweatt offer considerations regarding the top-down or bottom-up nature of experiments. Also notice that Craver's class of activation experiments assumes that some form of manipulation is being performed by the experimenter on the neural system. The manipulation might just be a psychological task that is to be performed while measures of neural activity are taken. Or, the manipulation might be some form of stimulating input, such as in the Hodgkin and Huxley example, so long as that input mimics the normal input to the mechanism (in this case, of the action potential). There are no purely observational forms of experiment in Craver's list.

Thursday, March 5, 2009

Taxonomies of Experiment I: David Sweatt

Experiments are of course a major source of epistemic justification for theories in the special sciences. Different kinds of experiments provide different kinds of evidence. I'll post a few examples of taxonomies of experiment: one offered by the neuroscientist David Sweatt, one developed by Carl Craver (a philosopher of neuroscience at Washington University in St Louis), and one that has been developed by Alcino Silva, myself, and John Bickle.

This first taxonomy is offered by David Sweatt (pronounced "swet"), an outstanding neuroscientist at the University of Alabama at Birmingham (UAB). David Sweatt has been a pioneer in the molecular and cellular neuroscience of cognition. The following is my transcription of his discussion from his book, so expect some outside references to occur in the passage.

quoted excerpt from David J. Sweatt (2003) The Mechanisms of Memory. Elsevier Press, Boston.

"In general there are four basic types of experiments that any scientist can perform. I refer to them as block, measure, mimic, and determine experiments. I have found this categorization a useful mnemonic device throughout my career as a scientist, and, at the risk of sounding overly pedantic, I strongly encourage any young scientist who reads this book to incorporate them into their thinking about experimental design. For example, every time I write or review a paper I ask whether the investigation has included all these different types of experiments. Especially when writing or reviewing grant applications, where multiyear projects are proposed to test a hypothesis comprehensively, I cross-check myself and others on whether all of these approaches (if technically possible) have been applied to the problem at hand. It is important because what we do as scientists is test hypotheses, and the testing of any hypothesis is much stronger if a variety of independent lines of evidence are available to support the conclusions reached.

What follows is a brief description of each of these four types of experiments.

The determine "experiment" is not really an experiment at all. The determine approach is to perform a basic characterization of the system or molecule at hand independent of any experimental manipulation whatsoever. Examples of this type of pursuit are determining the amino acid sequence of a protein, sequencing a genome, determining the crystal structure of an enzyme, or determining the structure of the DNA double helix. Determinations of this sort are not experiments in that no manipulation of the system is attempted--to do an experiment you tweak the system to see what happens. If you mutate a residue in a protein and see what effect that has on the structure, then you have done an experiment. The basic determination of the structure is not an experiment in and of itself.

Determinations are some of the most satisfying laboratory pursuits to undertake because these are the rare types of studies where definitive data can be obtained. An amino acid sequence is what it is--you get to use unambiguous words like "identical" (versus indistinguishable or similar) and "determined" (versus concluded or inferred) when describing gene and amino acid sequences. There's slightly more ambiguity in determining protein structures and anatomical structures, but in general this pales in comparison to the ambiguity of a conclusion made on the basis of an experimental manipulation. The down side of determinations is that, as a practical matter, they are viewed as boring unless they involve lots of expensive equipment. It's very difficult to get a grant review study section to recommend approval of a basic anatomical characterization, for example, because no experimental testing of a hypothesis is involved. In modern biomedical research, hypothesis testing is de rigueur. In rodent behavioral systems, which are the topic of this chapter, most of the basic behavioral characterization has already been done. However, there is a growing recognition that more sophisticated and detailed basic behavioral characterizations, and the development of new rodent behavioral models for human mental disorders, are necessary for the next stage of progress in this field.

Block, measure, and mimic are experiments, and they are all specific types of approaches to test different predictions of a hypothesis. For the following discussion we will take the simple case of testing the hypothesis "A causes C by activating B" (see Figure 13).

The mimic experiment tests the prediction that "if B causes C, then if I activate B artificially I should see C happen as a result." An example that we will return to later is: if I hypothesize that a particular protein kinase causes synaptic potentiation, then applying a drug that activates that protein kinase should elicit synaptic potentiation.

The mimic terminology arises from the fact that you are trying to mimic with a drug (etc.) an effect that occurs with some other stimulus, potentiation-inducing synaptic stimulation in this example. The principal limitation of the mimic experiment is that B may be able to cause C but that in reality A acts independently of B to cause the same effect. B causing C and A causing C may be true, true, and unrelated.

At the current state of understanding and experimental sophistication, mimic experiments are just about impossible to execute in the context of mammalian learning and memory. This is because an enormous amount of fundamental understanding of the system is necessary, along with the capacity for very subtle manipulation, in order for the experiment to work. For example, suppose I hypothesize that synaptic potentiation underlies learning. In theory, the mimic experiment is to put an electrode in the brain, cause synaptic potentiation, and then the animal will have an altered behavior identical to that caused by a training session. Of course, doing this experiment requires that I know exactly which synapses to potentiate so that I can selectively achieve the right behavioral output--this is beyond the level of understanding for essentially all mammalian behaviors at this point.

The measure experiment tests the prediction that "A should cause activation of B." Using our example of kinases in synaptic potentiation, the prediction is that the potentiating stimulus should cause an increase in the activity of the kinase. This is, of course, determined by measuring the activity of the kinase as directly as possible, hence the measure terminology. The measure experiment has been applied in a variety of different ways in the memory field, ways that we will discuss at various points throughout the book, including looking for anatomical, physiologic, and molecular changes in the nervous system in association with learning. The principal theoretical limitation of the measure experiment is that it is correlative. One can show that A causes activation of B, but that does not demonstrate that activation of B is necessary for C to occur.

Which brings us to the block experiment. The block experiment tests the prediction that "if I eliminate B, then A should not be able to cause C." In our working example, this means that a kinase inhibitor should block the ability of the potentiating stimulus to cause potentiation. At present, the vast majority of investigations into mechanisms of memory involve this approach, and we will make many references to this type of experiment throughout the book. Specific examples include anatomical lesions, drug infusion studies, and genetic manipulations. The principal theoretical limitation of the block experiment is that it does not distinguish whether activation of B is necessary for C, versus whether the activity of B is necessary for C. For example, suppose that B provides some tonic effect on C that is necessary for it to occur. Inhibiting B will block the production of effect C when in fact A never has any effect on B whatsoever. In behavioral terms for learning experiments, this is referred to as a performance deficit-- the animal is simply unable to execute the behavioral read-out necessary to exhibit the fact that they have learned.

In summary, then, the mimic experiment tests sufficiency, the block experiment tests necessity, and the measure experiment tests whether the event does in fact occur. Each type of experiment has its strengths and weaknesses. Positive outcomes in testing each of these three predictions for any hypothesis make for clear, strong support of the hypothesis." (Sweatt 2003, p. 45-46)
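Sweatt's working hypothesis "A causes C by activating B" can be turned into a toy model that makes the logic of the three experiments explicit. The sketch below is my own; the function and variable names are invented purely for illustration.

```python
# Toy model of the hypothesis "A causes C by activating B":
# B is activated by A (or forced on artificially), and C by B.
def run_system(a, force_b=False, block_b=False):
    """Return (b, c) given stimulus a, with optional interventions on B."""
    b = (a or force_b) and not block_b
    c = b
    return b, c

# Measure experiment: does A in fact activate B?
b, _ = run_system(a=True)
assert b  # A activates B

# Mimic experiment (sufficiency): activate B without A; C should follow.
_, c_mimic = run_system(a=False, force_b=True)
assert c_mimic

# Block experiment (necessity): inhibit B; A should no longer cause C.
_, c_block = run_system(a=True, block_b=True)
assert not c_block
```

Note that the mimic limitation Sweatt describes is visible in the model: if the real system contained a second path from A to C that bypassed B, the mimic experiment would still pass, which is why all three experiments together constrain the hypothesis far better than any one alone.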

Tuesday, March 3, 2009

Coming Soon: the Introduction of Transgenics into the Neuroscience of Cognition

Luigi Galvani

Here are some notes on Luigi Galvani that I've collected, mostly from Clarke and Jacyna's book Nineteenth-Century Origins of Neuroscience Concepts.

Luigi Galvani (1737-1798) was an Italian scientist interested in the role that electrical forces play in animal physiology. Toward the end of the 18th century, Galvani showed that applying electric current to the nerves of frog legs could get the legs to kick. Galvani was not the first to have shown that electric current could induce contractions in skeletal muscles. In fact, he was repeating experiments that had already been performed. But his interpretation of the experiments earned him his place in the history of science (Clarke and Jacyna 164). The word "galvanized" finds its origin in Galvani's name.

Galvani's work with frogs branched into work on other animals, which later, through the work of Emil du Bois-Reymond, spawned the field of electrophysiology.

It appears that Galvani's discovery was interpreted against the background of the hollow nerve theory, which had been espoused by such distinguished scientists as Rene Descartes. According to the hollow nerve theory, nerves are like pipes that generate muscular actions by channeling fluids around the body. The "hollow nerve" theory can be traced back to Erasistratus (c.260 BCE) and was endorsed by Galen four centuries later (Clarke and Jacyna 160). It seems that Galvani did not question the hollow nerve theory. Instead, he questioned preceding theories of what flowed through the hollows, and how that flow created muscle actions.

Based on his own experimental results, Galvani claimed to demonstrate that animals run on a special kind of electricity, so-called "animal electricity". Animal electricity was supposed to be a kind of fluid in Galvani's mind. The flow of the fluid through the nerves was what accounted for muscle flexion. Galvani's position stood in stark contrast with competing views on muscle flexion. According to these other views, the muscle actions were caused by effervescence, explosion, or ethereal oscillation (Clarke and Jacyna 161). (Personally, I find the explosion theory most compelling...joke.) Though a sound physical theory of electricity had not yet been found, Galvani's work tipped the scales in favor of the electrical theory of nerve conduction.

Neuroscience was revolutionized as a result of Galvani's work, but the revolution did not happen overnight. Not until the mid-19th century did consistent interest in electrophysiology emerge. Though he had nothing nice to say about Galvani or his work, Emil du Bois-Reymond carried Galvani's torch.

An interesting epistemological point:

Alessandro Volta (1745-1827) famously challenged Galvani's claim that nerve actions were normally driven by electricity. According to Volta, Galvani had not shown that electrical stimulation generated muscle contractions via the natural causal pathway (Clarke and Jacyna 171). Rather, he had shown only that the metals used to stimulate the nerves could themselves produce the effect. Galvani met Volta's challenge with subsequent stimulation experiments that used no metals. It is pretty standard fare in contemporary neuroscience textbooks to delimit a class of experiments that mimic normal neural processes (you can find this notion in David Sweatt's textbook and Yadin Dudai's textbook).

The basic epistemic point: If your methods don't to some extent approximate typical causal forces working on the nervous system, your capacity to generalize from your studies will be limited. I wonder if there are prior discussions of "mimic" experiments in the neuroscience literature...

Monday, March 2, 2009

The Concept

This is a history, philosophy, and sociology of neuroscience blog. These are the kinds of topics I'll discuss here:

1. Large-Scale Theories of the Brain
2. Epistemic Norms of Neuroscience
3. Historical Episodes in Neuroscience
4. Neurosemantics
5. Neuroinformatics
6. Bibliographic Analysis
7. Neuroscience Policy
8. Current Trends in Neuroscience

If you're interested, subscribe to my RSS.