Andrew Delamater has spent his career asking fundamental questions about how minds—human and nonhuman alike—learn from experience and how brains and artificial neural networks encode various forms of knowledge based on those experiences. A professor of experimental psychology at Brooklyn College and of psychology and neuroscience at the CUNY Graduate Center, Delamater is widely known for his influential research on associative learning, the neurobiological mechanisms of reward processing, and the computational processes that underlie behavior across species. His work blends traditional behaviorist methods with modern neurobiological tools and computational modeling approaches, helping to clarify how animals represent, update, and contextually use information about the world.

Most recently, Delamater co-authored (with Michael Domjan) an undergraduate textbook, The Essentials of Conditioning and Learning, and he concluded his tenure as editor-in-chief of the Journal of Experimental Psychology: Animal Learning and Cognition, where he helped shape the direction of the field by guiding rigorous, theory-driven empirical research to publication. Here, Delamater reflects on his scientific journey, his perspective on the evolution of animal learning research, and the insights gained from years at the editorial helm of one of his discipline’s leading journals.

You started at Brooklyn College in 1994. What drew you to teaching psychology here?

I saw in Brooklyn College opportunities to develop a productive research program along with highly motivated students who possess the curiosity and eagerness to learn more about how the world works. In addition, I was attracted to the Psychology Department because it housed several key senior faculty members who not only expressed the types of academic values that I shared but who also had already developed inspiring careers of their own. I knew there was a great deal I could learn from each of them.

Your work has long combined traditional behaviorist methods with more recent neurobiological tools and computational modeling approaches to understanding basic learning processes. How has your thinking about what animals “know” or represent internally changed over the course of your career?

I’ve always found the question of knowledge representation a fascinating one to study scientifically. My adventure began with a simple question about how anticipations might influence perceptual experiences. If I’m thinking about something very sweet, for example, does that thought of sweetness make me perceive the beverage I happen to be consuming in the moment as being sweeter than it really is? There is plenty of evidence in nonhumans and humans alike that the answer to this question is yes. So, how does that work?

When I first came to Brooklyn College, I approached this sort of question at a purely psychological level of analysis. When something makes us “think” of sweetness, for instance, the simple answer is that we imagine something sweet and that activates in the mind’s eye some incipient perceptual representation of the thing that we previously experienced as being sweet. Thoughts can activate perceptual representations.

Over the course of my career, I have become increasingly interested in understanding the neurobiological mechanisms of basic learning processes. We now have tools that allow us to measure neural activity patterns in various brain regions when a rodent has been trained to anticipate sugar water. We can then ask whether that pattern of activity resembles what occurs when the sugar water itself is presented.
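To make that comparison concrete, here is a toy version of such an analysis. It is purely illustrative, not an analysis from Delamater's lab: the neurons, firing rates, and choice of similarity measure are all invented. The idea is simply to score how much a cue-evoked population activity pattern resembles the reward-evoked pattern:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two population firing-rate vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical mean firing rates (spikes/s) for the same five neurons,
# recorded in three conditions; all values are invented for illustration.
reward_pattern  = np.array([12.0, 3.5, 8.2, 0.9, 6.1])  # sugar water delivered
cue_pattern     = np.array([10.5, 4.0, 7.8, 1.2, 5.4])  # cue that predicts sugar water
control_pattern = np.array([2.1, 9.8, 1.0, 7.5, 0.4])   # unrelated cue

print(cosine_similarity(cue_pattern, reward_pattern))      # high: the anticipatory pattern resembles the reward pattern
print(cosine_similarity(control_pattern, reward_pattern))  # low: the unrelated cue's pattern does not
```

A high similarity between the cue-evoked and reward-evoked patterns is the kind of evidence suggesting that anticipation reinstates something like the perceptual representation of the reward itself.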

But my interest in knowledge representation goes beyond neural activity patterns with sweet rewards. We also try to devise fairly simple neural network models that simulate how a brain can learn to anticipate something and how complexities in that network might give rise to more sophisticated, context-specific forms of knowledge (e.g., the word “apple” means one thing when we think about food, but something quite different when we think about the classic rock band The Beatles and Apple Corps). How does the brain encode context-dependent forms of knowledge? We try to approach these questions at multiple levels of analysis, from neurons to behavior to perception to computational systems.
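To give a flavor of what such a model can look like, here is a minimal sketch. It is not a model from Delamater's lab; the cue, context, and outcome names are invented for illustration. A simple error-driven network given conjunctive cue-by-context input units learns different associations for the same cue in different contexts, using a delta rule of the kind long used in associative learning theory:

```python
import numpy as np

# Toy setup: the cue "apple" is paired with one outcome in a "food"
# context and a different outcome in a "music" context.  Inputs are
# cue x context conjunctions, so a simple delta (Rescorla-Wagner style)
# rule can acquire context-specific associations.
cues = {"apple": 0}
contexts = {"food": 0, "music": 1}
outcomes = {"fruit": 0, "beatles": 1}

n_in = len(cues) * len(contexts)   # one input unit per cue-context conjunction
n_out = len(outcomes)
W = np.zeros((n_out, n_in))        # association strengths, starting at zero

def encode(cue, context):
    """One-hot input vector for a particular cue-context conjunction."""
    x = np.zeros(n_in)
    x[cues[cue] * len(contexts) + contexts[context]] = 1.0
    return x

def train(cue, context, outcome, lr=0.3, epochs=20):
    """Strengthen associations in proportion to the prediction error."""
    global W
    x = encode(cue, context)
    target = np.zeros(n_out)
    target[outcomes[outcome]] = 1.0
    for _ in range(epochs):
        prediction = W @ x
        W += lr * np.outer(target - prediction, x)  # delta rule update

train("apple", "food", "fruit")
train("apple", "music", "beatles")

# The same cue now activates different outcome representations
# depending on the current context.
print(np.round(W @ encode("apple", "food"), 2))   # strong "fruit" activation
print(np.round(W @ encode("apple", "music"), 2))  # strong "beatles" activation
```

The point of the sketch is that the very same error-driven learning rule that handles simple anticipation can yield context-specific knowledge once the network's inputs encode conjunctions of cues and contexts.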

Comparative psychology asks us to look across species to understand learning and behavior. What have nonhuman animals taught you that you think is especially relevant to understanding human cognition?

Psychologists have long understood that very simple nonlinguistic processes—ones we are often unaware of—can go a long way toward explaining how our minds work. Humans use language to great effect, but research has shown time and again that humans are notoriously bad at explaining the origins of their own thoughts, memories, feelings, and emotions. Most likely, a host of underlying neurobiological and psychological processes are at work that are opaque to conscious awareness.

More concretely, someone who has experienced something extremely traumatic may partly re-experience that trauma when exposed to some triggering event in their environment. The trigger could be obvious or subtle, but in either case the underlying mechanism is very likely associative in origin, with accompanying neurobiological processes at work.

For me it’s an extremely interesting question to ask how far a simple associative neurobiological process can go in explaining seemingly complex forms of cognition. One of my current hobbies is to ask how a brain that consists of neurons that simply excite or inhibit one another can produce an ordered representation of number. Our brains do encode quantity, but it is not at all clear how. That question relates to another issue I am deeply interested in—the representation of time. It’s clear to me that language is not necessary for either of these types of cognition.

During your tenure as editor of the Journal of Experimental Psychology: Animal Learning and Cognition, what shifts or emerging trends in the field stood out to you most?

My field developed out of an interest in studying the evolution of mind—how various cognitive faculties may have emerged in different species throughout the animal kingdom. Progress has been complicated by our increasing appreciation of how difficult it is to measure underlying psychological and neurobiological mechanisms in a single species, let alone in many different ones. The field has developed increasingly sophisticated behavioral and neurobiological tools to uncover those mechanisms, and that gives me hope that significant progress will continue.

Beyond that, there is certainly more application of computational modeling to assist us in understanding how complex interacting systems like the brain explain behavioral and psychological phenomena. When the AI movement began in my field in the 1980s, I saw promise in early connectionist network approaches. After a period of enthusiasm, interest waned. But more recent successes in AI have shown scientists the power of so-called deep learning systems in explaining aspects of thought.

Some researchers are now using AI systems as new types of “participants” in experiments to see if those systems learn tasks in ways similar to humans and other animals. This research is in its infancy, but researchers are discovering that various forms of AI learn quite differently than humans. That means there needs to be more interaction among psychologists, neuroscientists, and computer scientists in devising biologically plausible systems. Currently, AI is largely produced through engineering approaches aimed at accomplishing functional tasks. Its true power may be realized when we can use these tools as reasonable models of how the mind and brain actually work. Then, many interesting and relevant applications may become possible.

For students at Brooklyn College who are interested in research careers, what questions about learning and cognition do you think are most exciting and relevant today?

I’ve always thought the field has been dominated by three basic questions: (1) What are the conditions necessary and sufficient for learning to take place? (2) What is the underlying content of that learning? and (3) How does that learning become translated into observable performance?

The first question is intensively studied in neuroscience. It attempts to identify the rules by which new connections between neurons get established—that is, what governs neuroplasticity in the brain.

The second question attempts to understand what aspects of the world become encoded in the brain as we learn. For example, sometimes behavior is automatic and habitual, and sometimes it is deliberate and goal-directed. These forms of behavioral control reflect distinct representational systems, and important questions concern how those systems interact to influence response choices. Moreover, other research increasingly points to how nonhuman animals acquire abstract representations of time, number, categorical information, and even of other individuals and of themselves. I expect these studies to continue to yield interesting discoveries.

Finally, the third question concerns how we use information encoded by the brain. This is closely related to decision-making—uncovering the rules we use in choice situations. Sometimes we “know” something is true but decide not to act on that knowledge. There are many interesting questions that arise from problems like that.

Overall, my advice to a student interested in research would be to learn enough about a discipline to identify a basic question that excites them, and then to learn about the tools available to study that question. Pursuing that question as they become more familiar with how the scientific process works, whether or not they answer it definitively, can lead to real insights, enthusiasm, and possibly a research career.