Program

Abstract Submission

Deadline: March 31, 2024

Day 1

7:30 AM - 7:45 AM  Registration/Check-in
7:45 AM - 8:15 AM  Continental Breakfast
8:15 AM - 8:30 AM  Opening Remarks
9:30 AM - 10:00 AM  Does size matter? (Erika Skoe)
10:00 AM - 10:30 AM  Coffee Break, vendors + posters
12:30 PM - 1:30 PM  Lunch, vendors + posters
1:30 PM - 2:30 PM  Neonatal frequency-following responses (Carles Escera)
5:00 PM - 7:00 PM  Welcome Reception

Day 2

7:30 AM - 7:45 AM  Registration/Check-in
7:45 AM - 8:30 AM  Continental Breakfast
10:00 AM - 10:30 AM  Coffee Break, vendors + posters
12:30 PM - 1:30 PM  Lunch, vendors + posters
4:00 PM - 5:00 PM  Clinical Applications of the FFR for Older Listeners (Samira Anderson)
5:00 PM - 5:30 PM  Creating an FFR Data Repository (Jennifer Krizman)
5:30 PM - 6:15 PM  Roundtable Discussion
6:15 PM - 7:00 PM  Posters

Day 3

7:30 AM - 8:00 AM  Registration/Check-in
10:45 AM - 11:00 AM  Walk to lunch

Featured Speakers

Dr. Samira Anderson

University of Maryland

Samira Anderson is a Professor in the Department of Hearing and Speech Sciences at the University of Maryland. After practicing as a clinical audiologist for 26 years, she decided to pursue a Ph.D. in Auditory Neuroscience at Northwestern University to better understand the hearing difficulties experienced by her patients. She obtained her Ph.D. in December 2012 and joined the faculty at the University of Maryland in 2013. Dr. Anderson’s work comprises three approaches to improving speech understanding in older adults: 1) investigating the neural basis of speech perception deficits in older adults, 2) investigating approaches that may induce neuroplasticity to reverse age-related deficits in auditory processing, and 3) investigating methods of enhancing communication outcomes in individuals who use hearing aids or cochlear implants. Her research program is supported by the NIDCD and the NIA.

Clinical Applications of the FFR for Older Listeners

 

For decades, audiologists have relied on the audiogram as the gold standard for hearing loss diagnosis. The audiogram is most effective for differentiating between sensorineural and conductive hearing loss, but it falls short in providing the information needed to predict an individual’s ability to understand speech in complex listening environments. Growing awareness of pathologies or deficits beyond the cochlea has led to efforts to incorporate new clinical measures that are sensitive to these deficits. The frequency-following response (FFR) provides a measure of the auditory system’s temporal precision and can therefore be used to evaluate neural speech processing. This presentation will discuss clinical applications of the FFR in the older listener with respect to identifying sources of listening difficulties, assessing hearing aid benefits, and monitoring outcomes of intervention.

Dr. Fuh-Cherng Jeng

Ohio University

Dr. Fuh-Cherng Jeng is a professor and director of the Auditory Electrophysiology Laboratory at Ohio University. His research focuses on how speech and non-speech sounds are processed in neonates and adults, studied through auditory evoked potentials together with participants' behavioral responses when applicable, and more recently complemented by machine learning.

Machine Learning and Frequency Following Responses: A Tutorial

The human frequency-following response (FFR) offers a valuable lens into the intricacies of auditory stimulus processing within the brain. This presentation serves as a guide, extending from basic principles to practical implementations of machine learning techniques applicable to FFR analysis. Covering supervised models such as linear regression, logistic regression, k-nearest neighbors, and support vector machines, alongside unsupervised k-means clustering, it will illustrate their applications and discuss the nuances of their use.

Beyond these, we will navigate through an array of machine learning tools, encompassing Markov chains, dimensionality reduction, principal components analysis, non-negative matrix factorization, and neural networks. The talk emphasizes a nuanced understanding of each model’s applicability, pros, and cons, recognizing the pivotal role of factors like research questions, FFR recordings, target variables, and extracted features.

To enhance comprehension, a Python-based example project will be presented, showcasing the practical application of several discussed models. Drawing insights from a sample dataset featuring six FFR features and a target response label, this tutorial equips researchers with the knowledge needed to judiciously choose and apply machine learning methodologies in unraveling the mysteries embedded in human auditory processing.
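
As a rough illustration of the kind of workflow described above, the sketch below applies a few of the named models (support vector machines, k-nearest neighbors, principal components analysis, and k-means clustering) with scikit-learn to a synthetic table of six features and a binary target label. The data, feature descriptions, and model settings are illustrative assumptions only, not the presenter's example project or dataset.

# Minimal sketch, assuming synthetic data: supervised and unsupervised
# scikit-learn models applied to a table of six FFR-style features
# (e.g., F0 amplitude, harmonic amplitudes, phase coherence, latency)
# with a binary response label. Not the presenter's actual project.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 recordings x 6 features, binary target label
# (e.g., group membership or stimulus condition).
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Supervised models: standardize the features, then classify.
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, clf in models.items():
    cv_acc = cross_val_score(clf, X_train, y_train, cv=5).mean()
    test_acc = clf.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: cross-validated accuracy {cv_acc:.2f}, test accuracy {test_acc:.2f}")

# Unsupervised models: project onto two principal components, then cluster.
X_pca = make_pipeline(StandardScaler(), PCA(n_components=2)).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_pca)
print("k-means cluster sizes:", np.bincount(labels))

In practice, the choice among such models would follow the considerations raised in the talk: the research question, the FFR recordings available, the target variable, and the features extracted from the responses.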

Dr. Rachel Reetzke

Kennedy Krieger Institute

The Johns Hopkins University School of Medicine

Dr. Rachel Reetzke is an assistant professor and a certified and licensed speech-language pathologist at the Center for Autism Services, Science and Innovation at Kennedy Krieger Institute. She is also an assistant professor of Psychiatry and Behavioral Sciences at the Johns Hopkins University School of Medicine. Dr. Reetzke earned her Ph.D. in Communication Sciences and Disorders from the University of Texas at Austin. She then completed a postdoctoral fellowship at the University of California, Davis MIND Institute. Her current research program leverages behavioral, electrophysiological, and novel machine learning approaches to: (a) characterize early behavioral phenotypes and developmental trajectories and (b) identify early predictors of neurotypical and neurodivergent development in infants at elevated likelihood for autism and in toddlers with autism. The long-term goals of this work are to: (a) inform the development of cost-effective, scalable, objective screening and outcome measures, and (b) elucidate optimal mechanistic targets and timing for early intervention. Dr. Reetzke is a 2023 recipient of the American Speech-Language-Hearing Association’s Early Career Contributions in Research Award. Dr. Reetzke’s research program has been funded by the U.S. Department of State Fulbright Program, the American Speech-Language-Hearing Foundation, the United States Department of Defense, the Brain and Behavior Research Foundation, the Simons Foundation Autism Research Initiative, and the National Institutes of Health.

What does the FFR reveal about speech processing in autism? New insights from a prospective longitudinal study of infants at elevated familial likelihood

Autism spectrum disorder (ASD) is associated with atypical speech processing early in life, which can have negative cascading effects on language development, one of the strongest predictors of long-term outcomes. While the frequency-following response (FFR)—a sound-evoked potential that reflects synchronous neural activity along the auditory pathway—has revealed speech processing differences in children and adolescents on the autism spectrum, what has yet to be established is the extent to which such differences are present during the ASD prodromal period. Investigating speech-evoked FFRs in infant siblings of children with ASD (infants at elevated familial likelihood for autism [EL infants]) has the potential to reveal the earliest time window when neural function related to speech processing may begin to diverge. In this talk, I will present results from a prospective, longitudinal study characterizing the developmental time course and the predictive value of the FFR to speech sounds in EL infants compared to infants with a typical likelihood for developing ASD. Findings reveal that the FFR may be sensitive to subtle speech processing differences in EL infants within the first year of life, well before the emergence of behavioral precursors. This study will set the stage for a broader discussion of the utility of the FFR for characterizing speech processing differences and in predicting distal language and social communication outcomes across the range of heterogeneity associated with the autism phenotype.

Dr. Srivatsun Sadagopan

University of Pittsburgh

Dr. Sadagopan obtained an undergraduate degree in engineering from the Indian Institute of Technology, Kharagpur, India. He then earned his Ph.D. in Neuroscience from Johns Hopkins University, where he started studying the auditory system. After postdoctoral work in the visual system, he returned to the auditory system when he joined the faculty of the University of Pittsburgh in 2015, where he is currently an Associate Professor of Neurobiology. In 2019, he was awarded the Geraldine Dietz Fox Young Investigator Award by the Association for Research in Otolaryngology. Dr. Sadagopan's research program investigates the computational principles and cortical neural circuits underlying the perception of communication sounds such as animal calls and human speech. Combining theoretical, behavioral, electrophysiological, and imaging approaches, and using guinea pigs as an animal model, his lab studies the mechanisms by which the auditory system extracts behaviorally relevant information from communication sounds in realistic listening environments and uses this information to guide behavior.

What we can learn about auditory circuits from responses that follow stimulus frequency – and those that do not.

 

Frequency-following responses (FFRs) to speech stimuli are non-invasive and easily deployable measurements of speech encoding integrity in the auditory system that show great potential as a biomarker for many pathological conditions. But because fundamental questions remain with respect to how the activity and modulation of underlying neural circuits affect speech FFRs, interpreting and attributing changes in FFRs to particular circuit elements is challenging. As a first step towards gaining more biological insight into these issues, our lab has been involved in a collaborative effort to determine the extent to which speech FFRs and their response properties are conserved in an animal model, setting the stage for future mechanistic experiments. In this talk, I will present results from the guinea pig, a rodent with excellent low-frequency hearing, that demonstrate that FFRs to speech sounds show remarkable similarities with those recorded in humans and monkeys. I will discuss how stimulus fundamental frequency, stimulus statistics, arousal state, and manipulation of auditory cortical activity affect FFR amplitudes and fidelity in this species. Using intracranial translaminar recordings, I will discuss our initial efforts to map scalp recordings to neural ensemble activity. Finally, I will present early studies on non-stimulus-following intrinsic oscillatory activity in the auditory cortex, and discuss what these may reveal about underlying circuits.