EXPERIENCE DEPENDENT CHANGES
IN THE AUDITORY CORTICAL REPRESENTATION
OF NATURAL SOUNDS

A Dissertation
Presented to
The Academic Faculty

By

Frank Lin

In Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy in the
School of Biomedical Engineering

Georgia Institute of Technology
Dr. Robert C. Liu, Advisor
Department of Biology
Emory University

Dr. Joseph R. Manns
Department of Psychology
Emory University

Dr. Christopher J. Rozell
School of Electrical and Computer Engineering
Georgia Institute of Technology

Dr. Garrett B. Stanley
School of Biomedical Engineering
Georgia Institute of Technology

Dr. Robert J. Butera
School of Electrical and Computer Engineering
Georgia Institute of Technology

Date Approved: 04/12/2012
Others in the lab whom I also want to thank for their positive influence on my life, listed in chronological order, are Brian Kocher, Sara Freeman, Jason Miranda, Tatsuya Oishi, Katy Shepard, Liz Anne Amadei, Alex Dunlap, and Nina Banerjee. I would also like to thank the members of my thesis committee, Dr. Christopher Rozell, Dr. Garrett Stanley, Dr. Joseph Manns, and Dr. Robert Butera, as each one of them has provided valuable insight throughout this process.
I also want to thank all my friends and family for their continual support and their willingness to listen. Most of all, I must thank my fiancée Lily Chan, my parents ChiunWen Lin and Su-Ching Lin, my brother Kenny Lin and his wife Yingli Zhu, my fiancée’s parents Maria and Julian Chan, and my dog Patches, for without them none of this would have been possible. Lily and Patches have always sat patiently listening to my constant talk of science and my work, yet they have always stuck by my side. Through all the turbulent times, they have kept me on the side of sanity. They have always lifted my spirits and shown such belief in me that I would not have been able to achieve this without their support. I will always be indebted to them for the support they showed me during this period.
3.3.3 Physiological state influences maintenance of cortical plasticity
3.3.4 Physiological state selectively reduces spontaneous activity
3.3.5 Physiological state influences long-term salience of pup calls
3.4.2 Role of maternal physiological state in plasticity maintenance
3.4.3 Decreased spontaneous activity involved in plasticity retention
Table 3.1: Proportion of call driven approaches dependent on test day and estrus
Table 4.1: Relative or absolute error segregates similar population of SUs
Table 4.2: Thin and thick spiking SUs have different characteristics
Figure 3.9: Physiological state affects the long-term retention of plasticity
Figure 3.10: Mothers’ call-responsive SUs show decreased spontaneous activity
Figure 3.11: Proportion of approaches towards the pup call speaker per animal
Figure 3.12: Proportion of approaches towards the pup call speaker per group
Figure 3.13: Differences in cortical plasticity reflected in call motivated exploration
Figure 4.12: QBest SUs do not differ in their SU call responses to pup and adult calls
Figure 4.13: Animal groups differ in their QPoor, but not QBest SU call spike patterns
Vocal communication sounds are an important class of signals due to their role in social interaction, reproduction, and survival. However, it is still unclear how our auditory system detects and discriminates these sounds. The auditory cortex is thought to play a role in this process, because loss of this area can cause deficits in vocalization discrimination in primates and in speech comprehension in humans. In addition, the auditory cortex can undergo both rapid and long-term changes under classical and operant conditioning. But unlike in these conditioning paradigms, the behavioral relevance of communication sounds is acquired through social interaction.
Thus, the question remains as to how the auditory cortex changes its neural representation of sounds that are socially acquired.
To address this question, we used a neuroethological model system, which allowed us to study the neural mechanisms underlying natural behavior. This model system consists of ultrasonic whistles emitted by mouse pups, which are thought to be communicative in nature. These calls can elicit search and retrieval behavior in mothers, and are recognized as behaviorally relevant by mothers, but not by pup-naïve virgins.
Therefore, this mouse ultrasonic communication system provides the opportunity to understand how the brain encodes natural sounds, and how the neural representation changes for vocalizations learned through social interaction.
In this dissertation, we recorded single neurons from the auditory cortex of fully awake, head-restrained mice, and began by assessing the changes in the cortical representation of pup calls between animals that do (mothers) and do not (pup-naïve virgins) recognize pup ultrasounds as behaviorally relevant. We then evaluated the role that pup experience and the maternal physiological state played in this cortical plasticity. Following these results, we explored the behavioral relevance of these neural changes using a two-alternative choice task. Finally, we developed a model to predict the response latency to natural sounds with the intent to define cortical neurons and their roles in processing acoustic features.
Our results show that the auditory cortex of animals that have had pup experience differs from that of pup-naïve animals in its pup call-evoked inhibition, that the physiological changes associated with motherhood act to affect the long-term retention of this plasticity, and that these changes are correlated with call recognition behavior. In addition, using a model to predict the response to these vocalizations, we find a distinct subset of cortical neurons that preserves the peripheral mechanism for onset encoding, and another subset that represents a sound’s behavioral meaning. Taken together, this research emphasizes the importance of the primary auditory cortex in processing natural vocalizations, demonstrates how it changes to represent behavioral relevance, and creates a framework for studying functionally how these changes contribute to behavior.
Sitting at my desk, pondering this dissertation, I turn to my dog and call out his name, “Patches”, I say. He is sleeping, but his ears flicker, and he slowly moves his head to look at me. Just a few months prior when we first adopted him from the shelter, we had given him a new name, and at this time, he showed no recognition or behavioral response to the sound, “Patches”. Clearly then, the meaning of this sound has changed, but the question is, how does our brain account for this change?
Our sensory systems govern our perception of the world, and within our acoustic environment, sounds are continuous, resulting from physical vibrations. Yet these vibrations can possess meaning, and what might be familiar to some can be unfamiliar to others. How our auditory system transforms these acoustic features into behaviorally relevant sounds, and how we utilize these signals for localization, detection, and discrimination, is not completely understood. It is amazing to think that our auditory system performs one, if not all, of these operations in parallel, and does so with a speed and accuracy unmatched by any current computational approach.
In an attempt to understand how acoustic information gives rise to perception and behavior, there has been a strong push towards investigating the auditory cortex as a central component to this complex operation. It has numerous connections to higher cognitive areas, such as the prefrontal cortex (Fritz et al., 2010), changes its representation to facilitate learning and memory (Bieszczad and Weinberger, 2010;
on the expectation of sound (Jaramillo and Zador, 2011). While these studies have been instrumental in our understanding of auditory cortical function, they do not entirely explain how the auditory cortex encodes communication sounds, which are learned through social interaction. This difference in how we learn sounds is important because it can affect the resultant neural representation. In fact, a recent study demonstrated that the direction of plasticity depended on whether the training task used positive or negative reinforcement (David et al., 2012). Thus, the goal of this thesis is to understand the encoding of socially acquired vocalizations, and how this might subserve perception and behavior.
The subject matter below is intended to build a basic understanding of the methods and approaches used in this dissertation. Section 1.1 will briefly describe how a sound signal travels from the periphery to the auditory cortex. Section 1.2 will then introduce communication sound processing and motivate neuroethology as a way to pursue the goal of this thesis, and Section 1.3 will detail the approach we have chosen. In Section 1.4, we will discuss what we currently know about experience dependent plasticity, and in Section 1.5, explain the importance of studying this in the awake animal. Finally, Section 1.6 will review the aims and goals of this dissertation.
1.1: The auditory system

Within our environment, sounds are dynamic in nature, constantly changing as a function of time. As these mechanical waves arrive at our ears, our brain transforms them into neural signals that allow us to perceive and react to these cues. Yet the understanding of how our auditory system represents a sound’s behavioral relevance and facilitates perception remains unsolved.
Below, we will briefly discuss how a sound is transformed into neural signals, how the physical characteristics of sounds are transmitted, and motivate the auditory cortex as a starting point to understand how the brain changes to encode a sound’s behavioral relevance.
1.1.1: Peripheral processing and the transduction of sounds

From a physical perspective, sound is a mechanical wave created by a vibrating object; it moves outward as particles in the medium (typically air) are compressed and spread apart. The characteristics of these vibrations in air, and the way in which they physically affect our eardrum, underlie how we describe a sound’s acoustic features. These consist of attributes such as its frequency (the number of vibrations of a particle in a fixed time), amplitude (the energy the vibrating object imparts to the medium), and place in time (the onset and duration of the event).
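These three attributes can be made concrete by synthesizing a simple test stimulus. The following sketch (not part of the original methods; the function name and parameters are illustrative) generates a pure tone in which frequency, amplitude, and place in time are each an explicit parameter:

```python
import numpy as np

def pure_tone(freq_hz, amplitude, duration_s, onset_s=0.0, fs=44100):
    """Synthesize a pure tone illustrating the three acoustic attributes:
    frequency (vibrations per second), amplitude (energy imparted to the
    medium), and place in time (onset and duration of the event)."""
    n_total = int(fs * (onset_s + duration_s))
    t = np.arange(n_total) / fs
    wave = np.zeros(n_total)
    active = t >= onset_s  # silence before the onset
    wave[active] = amplitude * np.sin(2 * np.pi * freq_hz * (t[active] - onset_s))
    return t, wave

# A 440 Hz tone at half amplitude, beginning 0.1 s in and lasting 0.5 s
t, w = pure_tone(440.0, 0.5, 0.5, onset_s=0.1)
```

Natural vocalizations are, of course, far richer than a pure tone, but stimuli of this form are the building blocks of the frequency tuning experiments discussed later.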
As this sound wave travels through our auditory canal, it causes the eardrum to vibrate. The eardrum is connected to three bones in the middle ear called the ossicles, which act to transfer this signal to the oval window. At this stage, these signals enter the cochlea, a fundamental first step in the transfer of mechanical to electrical energy (Moore, 1997). Here, as the oval window vibrates, the fluid within the cochlea moves, resulting in the movement of the basilar membrane. A key feature of this process is that the basilar membrane varies along its length in its intrinsic mechanical properties. At one end it is relatively narrow and stiff (responding to high frequencies), and at the other it is wider and less rigid (responding to low frequencies). This frequency-place representation of neural activity represents our auditory system’s transformation of spectral into spatial information and is called tonotopy. Downstream auditory nerve fibers reflect this tonotopic representation, and additionally encode the sound’s intensity, timing, and amplitude envelope using action potentials. They do this through both the rate of spiking discharges (Ruggero, 1992) and their timing (Frisina et al., 1985; Heil and Irvine, 1997). However, while the auditory periphery encodes acoustic features, it is unknown how these signals give rise to perceptual information (Rauschecker, 1998).
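The frequency-to-place mapping of tonotopy is commonly approximated by Greenwood’s frequency-position function. A minimal sketch follows; note that the fitted parameters below are the standard values for the human cochlea, used here purely for illustration (the dissertation’s experiments concern the mouse, whose cochlear map has different parameters):

```python
def greenwood_frequency(x):
    """Greenwood frequency-position function for the human cochlea.
    x: fractional distance along the basilar membrane, from the apex
    (0.0, wide and flexible) to the base (1.0, narrow and stiff).
    Returns the characteristic frequency (Hz) at that place."""
    A, a, k = 165.4, 2.1, 0.88  # standard human fit parameters
    return A * (10 ** (a * x) - k)

# Characteristic frequency rises monotonically from apex (~20 Hz)
# to base (~20 kHz), capturing the tonotopic gradient in one formula.
freqs = [greenwood_frequency(i / 10) for i in range(11)]
```

The monotone, roughly exponential shape of this map is what allows downstream stations to read frequency content directly from the spatial pattern of activity.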
As this neural representation ascends the auditory system, signals project through a number of parallel pathways: the cochlear nuclei, the superior olivary nuclei, the inferior colliculus, and the medial geniculate body of the thalamus (Kandel, 2000).
One might expect that, with the many levels of processing between the periphery and cortex, neurons at higher levels perform increasingly complex operations to transform simple features into how we perceive sounds. Indeed, we know that such an increase in complexity holds when comparing the computations of the auditory nerve fiber to those of the inferior colliculus. In addition to the preservation of tonotopy (Merzenich and Reid, 1974), the inferior colliculus also plays roles in sound localization (Benevento and Coleman, 1970;