Letting the data speak for itself: PhD alum Alex Huth models language representation in the brain

March 6, 2019

“If you can get enough hours of data, enough data points, then we can let the data tell us what kind of features are important, instead of being forced to guess.”

Alex Huth, PhD program alum (entering class of 2008)

Alex Huth, Assistant Professor of Computer Science and Neuroscience at the University of Texas at Austin, has an ambitious goal — to scan the brains of individual people for hundreds of hours to get fMRI data sets large enough to produce an accurate and detailed model of how language is represented in the brain.

As a child, Huth’s interest in the brain was sparked by another type of data — the android character, Data, on the television series Star Trek: The Next Generation. Huth’s fascination with the artificial intelligence that would be needed to produce humanoid machines like Data led him to study neuroscience as an undergraduate at Caltech.


Huth (right) with his first graduate student Shailee Jain and undergraduate Haley Connelly, building the first computer for use in his lab at UT Austin.

Huth began doing research in the lab of Christof Koch at Caltech, which he describes as a pivotal experience, making him more confident and excited about his studies. Huth continued in the Koch lab as a Master’s student, studying visual processing and decision-making while earning an MS in Computation and Neural Systems.

While at Caltech, Huth attended a talk by Berkeley Neuroscience professor Jack Gallant, which he describes as a “complete eye-opening experience.” Excited by Gallant’s approach, he applied to Berkeley and ended up doing his PhD in Gallant’s lab. Huth began his work on language processing as a PhD student, and continued it as a postdoc in the Gallant lab.

Now in his second year as a faculty member, Huth combines approaches from neuroscience, computer science, mathematics, and linguistics to build complex models of how language and meaning are represented in the brain.

Read the following Q&A with Huth to learn more about the research questions and approaches that drive him, what it’s like to be a new faculty member, and why he loved being at Berkeley. This interview has been edited for brevity.

Rachel Henderson: How did you first become interested in neuroscience?

Alex Huth: I was really obsessed with Star Trek: The Next Generation when I was a kid. I was in love with Data [the android], I thought he was fascinating. I loved the idea of artificial intelligence and this idea that you could build a machine that would think like a human.

In early college, I was digging into this and learning more about the field of AI. I was kind of depressed by where things were. We were just nowhere near understanding how humans work well enough to build a computer that thinks like us.

So I got interested in neuroscience as sort of a stepping stone towards AI, with the thought that if we can understand how humans work, how we think, then maybe we could build a computer that does the same kind of thing.

That was sort of my intro to neuroscience, but as I got deeper into the neuroscience world, I came to care more about specific neuroscience problems.

RH: Did you do undergraduate research?

AH: I did. I went to Caltech for my undergrad. I was really lucky, they have this great program called the Summer Undergraduate Research Fellowship. I did that the summer after my junior year in a neuroscience lab. I really enjoyed it — it clicked for me.

I had a sort of hard time as an undergrad. I was not doing super well in all my classes, and I was kind of like — why am I doing all this? I’m not enjoying all the stuff I’m doing. This is kind of a common complaint for kids who go there, everyone was the smartest kid in their high school, everyone is really into science and math. Then you go to this place where you’re very far from being the smartest person, and science and math, it turns out, are really hard (laughs).

So that was just kind of depressing. But then when I started working in the lab, everything just kind of clicked. It was like, this is really fun, I’m excited by this, I’m interested in what I’m doing, and I felt like I was good at what I was doing. It made me want to learn more, because by learning more I could do more science. So that was really an eye-opening experience for me that was important.

RH: What brought you to Berkeley to do your PhD?

AH: It was kind of a funny happenstance thing. I applied to a couple grad schools, all based on professors who I wanted to work with. A few months before I applied, the professor who ended up being my PI, Jack Gallant, had come and given a talk at Caltech.

The talk was just this complete eye-opening experience. It was such a different way of thinking about science and doing science that I got insanely excited about it. I thought, this feels right. This feels like something better than the way that a lot of science is done. I just got really excited about that, so I applied.

RH: What in particular was so different for you about [Jack Gallant’s] approach to science?

AH: The key thing in his approach is that you don’t really design an experiment to answer one question. The experiments that he was talking about were visual experiments, so you just record from some neurons while showing [the subjects] a lot of different images. Then you use mathematical, computational models to figure out what it is in those images that makes the neurons fire.

It’s sort of like inductive, correlational, instead of being this very structured thing. It’s like — let’s take this very rich data and then figure out what’s going on inside it, instead of designing a sort of simple experiment.

Huth wearing a CaseForge head case (for stabilization during MRI, developed by Huth, Gallant, and James Gao at Berkeley) while being filmed for Korean Educational TV.

RH: Describe your dissertation work at Berkeley.

AH: I did a couple different experiments, but the main one revolved around language representation in the brain. I did this in really close collaboration with another grad student named Wendy de Heer in Psychology.

The main experiment was that we had subjects lying in an MRI scanner while we recorded what their brains were doing with fMRI while they listened to a podcast, The Moth. It’s people telling true autobiographical stories. They’re really good stories, fascinating and fun to listen to. It made for a really enjoyable experiment. Most MRI experiments are god-awfully boring, you’re lying in a tube for two hours staring at blinking dots or whatever. But this was just like you close your eyes and listen to a story.

So it’s very natural data. It’s not controlled, there’s no experiment built in. There’s no simple analysis, there’s no experimental manipulation. But then what we do is we build these really complex computational models that predict the response in each part of their brains — predict how their brains are going to respond based on the content of the story, the words in the story.

We could then take a new story, and knowing the words, we could predict how the brain would respond to that story. Then we would check to see if we were right. We could tell that our models worked pretty well actually.
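In rough outline, the fit-then-validate loop Huth describes can be sketched in a few lines of Python. This is only an illustration: ridge regression is a common choice for fMRI encoding models, but the feature space, array sizes, and random placeholder data below are assumptions for this sketch, not details from the study.

```python
# Hedged sketch of an fMRI encoding-model analysis: predict each voxel's
# response from word-based stimulus features, then validate on a new story.
# All sizes and data here are placeholders, not values from the experiment.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 2000, 300, 300, 500  # illustrative sizes

X_train = rng.standard_normal((n_train, n_feat))  # features of training stories
Y_train = rng.standard_normal((n_train, n_vox))   # recorded voxel responses
X_test = rng.standard_normal((n_test, n_feat))    # features of a held-out story
Y_test = rng.standard_normal((n_test, n_vox))     # responses to that story

# Fit one regularized linear model per voxel; Ridge handles all voxels at once.
model = Ridge(alpha=100.0).fit(X_train, Y_train)

# Predict responses to the new story, then score each voxel by correlating
# the predicted time course with the measured one.
Y_pred = model.predict(X_test)
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)])
print(f"median held-out correlation across voxels: {np.median(r):.3f}")
```

Voxels whose held-out correlations sit reliably above chance are, in spirit, what “our models worked pretty well” means here: the model has captured something real about how that part of the brain responds to the words of a story.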

What this experiment told us is that we could look at which parts of the brain respond to what kind of information, or represent what kind of information. We found that there are huge areas of the brain that respond to language, that represent especially meaning in language. More than that, these parts of the brain form sort of maps of where stuff is represented, and those maps end up being really consistent across people. So different people have more or less the same maps, which is really interesting.

I think this is a cool insight into how language is organized in the brain. Because this is all stuff that we’ve learned — none of this is innate. This is all about word meanings, and yet they’re always mapped in the same way in the brain.

RH: What was your experience doing your graduate work at the Helen Wills Neuroscience Institute?

AH: I loved Helen Wills. I thought it was a really fantastic program. Actually, my wife [Liberty Hamilton] and I were in the same grad cohort, we were both Helen Wills people. We were dating when we started, so we actually applied to grad schools together and then both came to Berkeley. We ended up getting married right after we graduated. That made for a tight bond with the community (laughs).

Huth with his wife, PhD program alum Liberty Hamilton.

Also, our cohort was phenomenal. I’m still close friends with people from my cohort. It was really a phenomenally friendly and accepting community of people that are also absurdly smart, and all working on really interesting stuff and sort of at the top of their field. It’s really great, and I couldn’t recommend it enough to anyone.

I feel like Helen Wills is an extremely accepting place, people embrace their differences. One thing that my wife and I talk about is that people were unabashedly weird, and interesting because they were weird. That’s something that we kind of miss here in Texas, I feel like there are fewer weird people. I don’t mean that in a pejorative way, I mean I love all my weird friends and the weird things that we would do.

RH: You mentioned that some of your graduate work was in collaboration with a grad student in Psychology. Did you feel like Berkeley was a good place to forge collaborations?

AH: For sure. The lab where I did my PhD and postdoc work, we had students from Neuroscience, Vision Science, and Bioengineering. We didn’t have any students from Psychology in that group, but there was another group that shared office space with us that had students from Psychology and Neuroscience.

So students from different programs were always together. We all worked together on things, that was always fun. Everyone had different friends, so that sort of enlarged the circle of friends that we would know.

I always felt like Neuroscience was kind of the core of the community, though. Helen Wills always had their stuff together. They had socials, and the neuroscience retreat was phenomenal, it was a really good time. So that was always kind of the core of that community — the Helen Wills people. [But] the collaborations were always there because there were always people around from other departments.

RH: You did a postdoc in the same lab, the Gallant Lab, after you graduated. What made you decide to do that?

AH: I was halfway done with this really big project that I started midway through my PhD, and it ended up taking 5-ish years from when we started collecting data to when the paper was actually published. That was a big enough thing that I didn’t want to leave that behind, and I didn’t want to give it short shrift.

I just really liked working in the lab and I was being productive, a lot of stuff was getting done. I felt like if I moved somewhere else I would have to take a step back in terms of starting over and learning new things. Learning new things is of course good, but I was on a roll and I wanted to keep going in that direction.

RH: How has the first year of being a new faculty member been?

AH: It’s exciting. It’s actually more exciting and kind of more fun than I expected. It’s a lot of work. You expect it to be hard, that there are a million things pulling on your time. The thing that I didn’t really expect was how lonely it is. Because I went from this lab and community at Berkeley, surrounded by people who were incredibly smart and working on the same kinds of problems that I was working on. So there were always people to bounce ideas off of constantly, things were kind of humming around me in that way. Even simple things — we’d go out to lunch every day somewhere in Berkeley.

But then starting as a new professor, for a while I was kind of alone. I had one student my first semester. Now things are picking up, I have a few more people. But even as you get people, they’re new, they don’t know much yet about these things, they’re still learning about all this stuff in the field. It’s a very different feeling, it’s lonely in that way. But it is really exciting too.

RH: Is your wife also at UT Austin?

AH: Yes, she also got a faculty position here at UT Austin. That was a big factor in why we came here, because they offered us both tenure-track faculty jobs. She’s doing ECoG [electrocorticography] research here, recording from people’s brains during neurosurgery. It’s really fun to go through this whole process together. We both know what stresses are on each other and can support each other through these things.

RH: Tell me about the work that’s currently going on in your lab, and the big picture goals you have for your research.

AH: One of the things we did in the Gallant lab at Berkeley was really kind of different from the rest of the field. In the standard MRI experiment, you take a bunch of people, scan each of them for maybe an hour, show them the same small set of stimuli, and average across these people’s brains to get some result.

What we did in the Gallant lab instead is take a smaller group of people, like 5-10 people, and scan each of them for many, many, many hours. In the paper that I published, each subject spent maybe 8-10 hours at least in the scanner, from which we got 2-3 hours of usable data. That’s a lot of time. Then we’d be able to do all these fantastic analyses. You could build these high-dimensional models because we’d have all this data from each person’s brain.

So one of the things I’m trying to do in my new lab is to take that idea and push it to the extreme. So ask — how much data can we get on a single subject? My goal is to have 200 hours from one person. So scan the same person over and over again, probably something like once a week for a couple years. The cool thing is that this allows us to really change how we think about the models that we would fit to this data.


Huth presenting his work at the Computer Vision and Pattern Recognition conference in 2018.

Using our old approach of getting 2-3 hours per subject, we were kind of stuck in this mode of guess and check. We’d always have to guess — maybe this kind of thing is represented in the brain, maybe this kind of feature is important. Then we’d build models using that feature, then test to see if they work well.

But if you can get enough hours of data, enough data points, then we can let the data tell us what kind of features are important, instead of being forced to guess. So we can sort of flip the equation around. I think that’s really exciting, that’s the pivot point that we’re trying to get toward.

It’s getting a really, really big data set, getting enough that we can learn directly from that data what the features are, and that will tell us something about how the brain processes language. That’s the main thrust of my lab.
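To make that pivot concrete, here is a hedged Python sketch of the two regimes. The candidate feature spaces, array sizes, and random data are invented for illustration; this is one plausible framing of “guess and check” versus letting the data propose the features, not code from Huth’s lab.

```python
# Hedged sketch: guess-and-check feature spaces vs. data-driven features.
# Everything here is a placeholder; the feature spaces are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_time, n_vox = 2000, 200
Y = rng.standard_normal((n_time, n_vox))  # stand-in for one subject's responses

# Guess-and-check: propose hand-built feature spaces and keep whichever one
# best predicts held-out responses.
candidates = {
    "word_rate": rng.standard_normal((n_time, 1)),
    "embeddings": rng.standard_normal((n_time, 300)),
}
for name, X in candidates.items():
    score = cross_val_score(Ridge(alpha=10.0), X, Y, cv=5, scoring="r2").mean()
    print(f"{name}: mean held-out R^2 = {score:.3f}")

# With enough data per subject, one could instead extract dominant response
# dimensions directly from the recordings (here via an SVD) and then ask what
# stimulus properties those data-driven "features" track.
U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
learned = U[:, :10] * s[:10]  # top ten data-driven response components
```

The second half is the direction of inference flipped around: instead of testing whether a guessed feature space explains the brain, a big enough data set lets you recover structure from the responses first and interpret it afterward.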

RH: Your work is very cross-disciplinary — what do you particularly like about that?

AH: My work combines neuroscience with linguistics and computer science. We’re at the intersection of all these things. I really enjoy that, because I like methods and I like algorithms. Those are the parts of neuroscience that really excite me — how do we analyze data, how do we solve specific problems using mathematical models? I think that leads to a natural combination with computer science — that’s all about designing algorithms to solve specific problems.

The combination with linguistics was less something that drove me, and more something that I was pushed towards, interestingly. When I started grad school, I wasn’t trying to work on language, but I started working on this project that was around language and I got really excited about it. I got really into studying how language works in the brain, I think it’s an outrageously interesting problem. It’s actually something we can make progress on, we can hope to understand things much better in a reasonable amount of time. It’s something that’s also uniquely human. I think it’s very central to what we are, how we work.

If we can figure out how humans extract meaning from language, how we represent the meaning of a sentence, a paragraph, a story, that’s a huge step toward making computers also able to do that. That will enable all kinds of really interesting technologies in the near future, I think.

by Rachel Henderson
