By Rachel Henderson
If you are looking for a specific document on a desk strewn with papers that are all printed on white paper, it may take a while to find as you visually scan each one. But if the one you are looking for is printed on bright green paper, it will stand out immediately from the others. A new study from Robert Knight’s lab at the University of California, Berkeley, has identified areas of the human brain involved in these two types of visual search — termed “serial search” and “pop-out”, respectively — by recording electrical activity directly from people’s brains.
Knight is a professor of psychology and member of the Helen Wills Neuroscience Institute. The study, published in the September issue of the Journal of Cognitive Neuroscience, was led by Berkeley Neuroscience PhD alum Katarina Slama when she was a graduate student in Knight’s lab. Their team collaborated with clinicians who were using implanted electrodes to monitor the brain activity of patients while they awaited surgery for epilepsy.
Patient volunteers performed serial and pop-out visual search tasks while their brain activity was recorded. Using this technique, which provides a high degree of resolution and coverage across the brain, the researchers made several new discoveries about the brain areas involved in these types of visual search. Their findings increase our understanding of the neural mechanisms underlying visual search, and more broadly, human attention.
Slama graduated in 2019 and is now a data scientist at Intuit, Inc. Read our Q&A with her below to learn more about the study and its implications, as well as her current work.
Q: What motivated you to do this study?
A: When I first came to Berkeley, I was broadly interested in the concept of attention. Attentional impairments are a central feature of many neurological and psychiatric conditions, so understanding attention matters for that reason alone. Even more importantly, for all humans, how we allocate our attention determines a lot about how we live our lives. Having the ability to choose what we attend to, and to sustain focused attention, is critical to being successful at anything. Just ask an expert meditator, surgeon, chess player, or ski jumper. At the same time, we must retain the ability to flexibly notice salient objects and events around us: This can be a matter of life and death — consider a task like driving a car.
I was very lucky that my advisor, Professor Bob Knight, was supportive of my pursuing my fascination for attention. He suggested that I use a classical attention paradigm, visual search, in combination with a rare type of neural data, intracranial recordings. [To explain intracranial recordings,] some people with epilepsy elect to have surgery to control their seizures. As part of this procedure, surgeons will insert voltage recording electrodes into the brain to map out the epileptic regions for resection and identify normal brain tissue to be preserved. During the time that the electrodes are implanted in the brain (usually between a few days and a week), some patients volunteer their time to take part in neuroscience studies, like ours, involving playing short computer games designed to test some aspect of brain function.
Our study targets two components of attention: (1) deliberate, voluntary attention allocation and (2) involuntary capture of attention by a salient object.
Q: Briefly explain how you and your colleagues did the study, and the main findings.
A: Our experiment distills two fundamental components of attention: voluntary attention allocation versus involuntary attention capture. To target these processes, we used a classical visual search task, eliciting serial search and pop-out, respectively. Serial search [referred to as “search” here] is thought to represent voluntary, deliberate attention, allocated in sequence to candidate targets. Pop-out, in contrast, is thought to reflect involuntary attention allocation operating in parallel across visual space.
An example of the search condition that many people are familiar with is the famous “Where’s Waldo” cartoon. For the pop-out condition, there are lots of visual occurrences that involuntarily capture our attention all the time: Think of a fire truck with blinking lights, colorful holiday decorations, or, most common in our daily experience, ads on websites.
We wanted to know how visual search and pop-out are represented in the human brain. We set about answering this question by using intracranial recordings in humans. We collected an unusually large dataset of 23 participants, which enabled us to achieve an unprecedented combination of spatiotemporal resolution and coverage across cortical and subcortical structures.
In the experiment, patients searched for a target triangle of a given color and orientation among distractors. In the pop-out condition, the target triangle was of a different color and orientation than the distractors. The color difference makes the target triangle “pop out” among the distractors — hence the name “pop-out”. In the search condition, the target triangle was of the same color as the distractors, so the patient had to search based on orientation alone, leading to a deliberate, serial search process, where each triangle had to be inspected one by one.
It was previously known that the frontal and parietal cortices are key players in visual search and pop-out. As expected, we found strong engagement in these regions. But importantly, we also saw that the medial temporal lobe, an area traditionally thought to mainly be involved with navigation and memory, is engaged in these processes as well, on par with the frontal and parietal cortices.
We replicated previous work demonstrating nearly complete overlap in neural engagement across cortical regions in search and pop-out: For the most part, the two conditions rely on shared neural infrastructure. In addition, we found different amounts of engagement in subregions of lateral frontal cortex, with the greatest proportional involvement in ventral areas and the least in dorsal areas.
In terms of differences between the two tasks, we confirmed previous results from fMRI showing very strong engagement in the right lateral frontal cortex in search. We also found sites showing stronger activity in pop-out than in search, distributed across the brain, including the frontal cortex. This last result is at odds with the view that pop-out is implemented in the parietal cortex, or in low-level visual cortex, alone.
Q: What was the most surprising outcome of this study?
A: The medial temporal lobe (MTL) is not part of the dominant model for how top-down and bottom-up attention is implemented in the brain, and this study is part of a literature suggesting that this might need to change. I would say that that was the most surprising part. Of course, there will be scientists who are not all that surprised by this result: There is a broader literature suggesting that the MTL may be involved in search. In particular, entorhinal neuron engagement has been shown in naturalistic search tasks in animals. Also, psychologists studying visual search in humans using pure behavioral paradigms have conjectured that the MTL might be involved due to the memory requirements of the search process.
Q: What are some of the challenges and advantages of intracranial recordings over other methods to study the human brain?
A: Intracranial recordings have several advantages over other recording methods in humans, including superior spatiotemporal resolution and higher signal-to-noise ratio. Compared to fMRI, the fact that intracranial recordings measure voltages from neural populations yields improved temporal resolution. Compared to EEG, the fact that the sensors are placed intracranially at the source of the activity gives improved spatial resolution.
One of the challenges is that we of course don’t get to choose the locations of the electrodes, since they are placed solely based on clinical considerations. This means that the coverage will vary from patient to patient. This is one of the reasons why we collected such a large dataset, which enabled us to get coverage across all major cortical regions as well as the MTL.
A second challenge, which is also one of the most exciting parts of working with intracranial recordings, is that it’s a relatively recent neural recording method for humans, so analysis methodologies and software packages are very much under active development. There isn’t a single correct way to address a scientific question with intracranial recordings. You have to give your data a lot of thought and a lot of work, starting from preprocessing and artifact removal all the way through to inferential statistics and actually answering your questions. In this respect, I was very lucky to be at Cal, which has a vibrant data science community and especially to have had an outstanding collaborator from the Statistics Department, Sujayam Saha.
Finally, it can be challenging to record the data in the hospital ICU setting. I owe a lot of thanks to my co-authors and labmates who were instrumental in facilitating this data collection, and most of all to the patients for volunteering their time to make this research possible.
Q: How does this research add to our understanding of the human brain?
A: The three main findings, which contribute a novel understanding of the neural underpinnings of visual search in humans, are: 1) the medial temporal lobe is strongly engaged in visual search and pop-out, on par with frontal and parietal cortex; 2) frontal subregions show different degrees of engagement in search with ventral subregions most strongly engaged; 3) visual pop-out engages a broad set of sites across cortex, not merely parietal or visual cortex.
Q: What question did this study raise that you would particularly like to see answered?
A: I’m curious about the extent to which these effects (MTL engagement, frontal sub-regional effects, and distributed pop-out effects) might generalize to other sensory domains and even internal attention. In the field of AI, researchers are also interested in the phenomenon of “attention”, and it seems to correspond loosely to our concept of attention in neuroscience in that it is also about prioritizing some information over other information. The cool thing about attention in AI is that it’s modality-agnostic. As long as the information can be encoded as a vector, you can apply “attention” to it whether that information had originally been text or images or audio. The cool thing about attention in neuroscience is, of course, that it’s the real attention, the attention that humans do. But I hope that neuroscientists will also start to discover more modality-general mechanisms, and attention is a great candidate for that. Concretely, there’s a fairly straightforward extension of visual pop-out to the auditory domain, where you have a tone of an anomalous frequency follow a sequence of tones of the same frequency: I would love to see the relationships between auditory and visual pop-out explored using intracranial recordings.
I am even more intrigued about how our results might relate to internally directed attention. Conceptually, it seems like there might be a link between serial search in vision and the search process of your memories that you might conduct when you try to recall the name of someone that you see infrequently. A pop-out in the context of internal attention might be something like an involuntary memory or an impulse. These questions are much more challenging to study though, since it’s hard to precisely connect the timepoints when internal events occur to events in neural recordings.
Q: What are you doing in your job now, and how does it relate to your graduate work?
A: I am a data scientist on the Security, Risk and Fraud team at Intuit, the tech company behind TurboTax, QuickBooks, Mint, and Credit Karma. I apply data science and AI methods to help secure the ecosystem and protect user data.
I have been working in quantitative behavioral science in some form or other for more than a decade now. Neither my PhD work nor my current work is an exception to that. The field of data science really grew and formed its own identity while I was in graduate school, and UC Berkeley has been a leader in that process. My interest in data science grew over the course of my PhD along with the field itself, and I benefited deeply from the vibrant data science, statistics, and AI communities at Cal.
I particularly enjoyed my close collaboration with my co-author, Sujayam Saha, who was a PhD candidate in Statistics at the time and is now working as a data scientist at Google. As PhD candidates at Cal, we experimented with neural time-series data and considered different statistical approaches to answering our scientific questions. We chose a pragmatic and intuitive approach for most of our hypotheses — the permutation test — which makes minimal assumptions about the underlying data distributions. We ended up adapting (innovating on) an existing cluster-based permutation test approach to address our scientific questions.
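To illustrate the idea behind the approach described above, here is a minimal sketch of a cluster-based permutation test on two-condition neural time series. This is not the authors' actual analysis code; the statistic, threshold, and data shapes are illustrative assumptions. The logic is standard: compute a t-like statistic per timepoint, group contiguous supra-threshold timepoints into clusters, and compare each observed cluster's summed statistic against a null distribution of maximum cluster masses obtained by shuffling condition labels.

```python
import numpy as np

def cluster_masses(stat, threshold):
    """Sum the statistic over contiguous runs of supra-threshold timepoints."""
    masses, run = [], 0.0
    for s in stat:
        if s > threshold:
            run += s
        elif run:
            masses.append(run)
            run = 0.0
    if run:
        masses.append(run)
    return masses

def cluster_permutation_test(a, b, threshold=2.0, n_perm=1000, seed=0):
    """a, b: (trials, timepoints) arrays for two conditions.
    Returns observed cluster masses and a permutation p-value per cluster."""
    rng = np.random.default_rng(seed)
    data = np.vstack([a, b])
    n_a = len(a)

    def tstat(x, y):
        # Welch-style t statistic at each timepoint
        return (x.mean(0) - y.mean(0)) / np.sqrt(
            x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))

    observed = cluster_masses(np.abs(tstat(a, b)), threshold)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(data))  # shuffle condition labels
        pa, pb = data[perm[:n_a]], data[perm[n_a:]]
        m = cluster_masses(np.abs(tstat(pa, pb)), threshold)
        null_max[i] = max(m) if m else 0.0
    pvals = [(null_max >= m).mean() for m in observed]
    return observed, pvals
```

Because the null distribution is built from the maximum cluster mass across all timepoints, the resulting p-values are corrected for multiple comparisons across time, which is the key appeal of this family of tests for neural recordings.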
Q: What do you enjoy about your work?
A: I enjoy working with numbers and solving problems. That was also my favorite part of my PhD work. As in my PhD work (and my research work before that), my current work is all about applying math and data science tools to gain insight into human behavior. The difference is that now the scale of the data is much larger and the impact of the work is more immediate.
This work was funded by grants to Robert Knight from the NINDS (R37 NS21135) and the NIMH Conte Center 1P50 MH109529-01.
- Read the paper: Intracranial Recordings Demonstrate Both Cortical and Medial Temporal Lobe Engagement in Visual Search in Humans, by S. J. Katarina Slama et al., J Cogn Neurosci, 2021
- Knight lab
- Slama’s website