
Cognitive Science Colloquium

Thursdays, 15:30 to 17:00 - Building 57, Room 508

Winter Semester 2017-2018

November 16, 2017

Speaker: Dr. Garvin Brod (Center for Individual Development and Adaptive Education of Children at Risk (IDeA) & German Institute for International Educational Research (DIPF), Frankfurt; invited by Daniela Czernochowski)

Topic: Is asking students to make predictions an effective technique to activate prior knowledge and improve learning?

Abstract: It is well known that activating students' prior knowledge of a subject improves their learning performance. But what are simple techniques to reliably activate knowledge? And are these techniques equally effective in university students and school children? In two experiments, we tested whether asking university students and school children (grades 4–5) to make a prediction about a specific outcome is a viable technique to activate their knowledge and improve memory performance. We hypothesized that making a prediction would particularly benefit memory for events that violate expectancies, because making a wrong prediction should yield a surprise reaction, which in turn should boost memory for an event. The surprise reaction was measured using pupillometry. In short, findings were in line with our hypothesis but additionally pointed to age-related differences in the way that surprise is leveraged for learning. Implications for theory and educational practice will be discussed.


November 23, 2017

Speaker: Dr. Jana Jarecki (University of Basel - Switzerland, invited by Tandra Ghose)

Topic: Class-conditional independence in human classification learning

Abstract: Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference problem, allows for informed inferences about novel feature combinations, and performs robustly across different statistical environments. We designed a new Bayesian classification learning model (the dependence-independence structure and category learning model, DISC-LM) that incorporates varying degrees of prior belief in class-conditional independence, learns whether or not independence holds, and adapts its behavior accordingly. Theoretical results from two simulation studies demonstrate that classification behavior can appear to start simple, yet adapt effectively to unexpected task structures. Two experiments – designed using optimal experimental design principles – were conducted with human learners. Classification decisions of the majority of participants were best accounted for by a version of the model with very high initial prior belief in class-conditional independence, before adapting to the true environmental structure. Class-conditional independence may be a strong and useful default assumption in category learning tasks.
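The class-conditional independence assumption described above can be illustrated with a short sketch. This is not DISC-LM itself, only a hypothetical naive-Bayes-style toy example with made-up probabilities, showing why assuming independence of features given the class licenses informed inferences about feature combinations a learner has never encountered.

```python
# Toy illustration of class-conditional independence (naive Bayes), not the DISC-LM model.
# All probabilities below are invented for demonstration purposes only.
import numpy as np

# P(feature_i = 1 | class) for two classes and three binary features
p_feature_given_class = {
    "A": np.array([0.9, 0.8, 0.2]),
    "B": np.array([0.1, 0.3, 0.7]),
}
prior = {"A": 0.5, "B": 0.5}

def posterior(features):
    """Posterior over classes for a (possibly novel) binary feature vector,
    assuming the features are independent given the class."""
    f = np.asarray(features)
    unnormalised = {}
    for c, p in p_feature_given_class.items():
        # Likelihood factorises over features under class-conditional independence.
        likelihood = np.prod(np.where(f == 1, p, 1 - p))
        unnormalised[c] = prior[c] * likelihood
    z = sum(unnormalised.values())
    return {c: v / z for c, v in unnormalised.items()}

# Even a feature combination never observed during learning can be classified:
print(posterior([1, 0, 0]))
```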


Reading (optional): Jarecki, J. B., Meder, B., & Nelson, J. D. (2017). Naive and robust: class-conditional independence in human classification learning. Cognitive Science, 1–39. doi:10.1111/cogs.12496


December 07, 2017

Speaker: Dr. Rebecca Förster (Bielefeld University - Germany, invited by Tandra Ghose)

Topic: Innovative techniques for measuring visual attention

Abstract: Visual selective attention – the ability to preferentially process task-relevant visual input reaching our eyes – is indispensable for purposeful adaptive behavior in our crowded visual world. Unsurprisingly, visual selective attention is a highly studied and influential topic across research fields ranging from basic research through clinical research to robotics. I will present how interdisciplinary approaches and innovative techniques such as static and mobile eye tracking, head-mounted displays, and G-Sync technology can be used to reveal interesting new insights into the mechanisms of visual attention and may foster the development of efficient and convenient applications.


December 11, 2017 

SPECIAL TALK - (Monday, 13:00, Building 57, Room 215)

Speaker: Prof. Dr. Koichi Kise (Dept. of Computer Science and Intelligent Systems – Osaka, Japan; invited by Andreas Dengel and Thomas Lachmann)

Topic: Quantified Reading and Learning for Sharing Experiences

Abstract: In my talk, I will present two topics. The first is an overview of our recently started project called "experiential supplement", which aims to transfer human experiences by recording and processing them so that they are acceptable to others. The second is sensing technologies for producing experiential supplements in the context of learning. Because a basic activity of learning is reading, we also deal with the sensing of reading. Methods for quantifying reading – the number of words read, the period of reading, the type of documents read, and the identification of read words – are shown with experimental results. As for learning, we propose methods for estimating English ability, confidence in answers to English questions, and unknown words. The above are sensed by various sensors, including eye trackers, EOG, EEG, and first-person vision.


December 11, 2017 

SPECIAL TALK - (Monday, 17:15, Building 42, Lecture Hall 110 - Biologisches Kolloquium)

Speaker: Prof. Dr. Dr. h.c. Erwin Neher (MPI für biophysikalische Chemie, Göttingen; Nobel Prize in Physiology or Medicine, 1991; invited by Eckhard Friauf)

Topic: Modulation of Short-term Plasticity at a Glutamatergic Synapse

Abstract: Synaptic plasticity is held to be at the basis of most signal processing capabilities of the central nervous system. Long-term plasticity receives the most attention from neuroscientists, since it underlies learning and memory. Short-term plasticity (STP), on the other hand, is no less important, since it mediates basic signal processing tasks such as filtering, gain control, adaptation, and many more. My laboratory has studied STP at the Calyx of Held, a glutamatergic nerve terminal in the auditory pathway, which is large enough to be voltage-clamped in the ‘whole-cell mode’ using patch pipettes. STP is highly modulated by second messengers, such as Ca2+ and diacylglycerol. In particular, it was shown that such modulators accelerate a process called ‘superpriming’, a slow transition of release-ready vesicles from a ‘normally primed’ state to a faster, ‘superprimed’ one (Lee et al. 2013; PNAS 110, 15079). Recently, we could demonstrate that this same process also mediates Post-Tetanic Potentiation, a medium-term form of synaptic plasticity (Taschenberger et al., 2016; PNAS 113, E4548-57). These findings will be discussed in the framework of literature data on various forms of short-term plasticity.


December 14, 2017

Speaker: Dr. Pieter Moors (Leuven University - Belgium, invited by Sven Panis)

Topic: Processing invisible stimuli during continuous flash suppression: stimulus fractionation vs. stimulus integration

Abstract: Continuous flash suppression (CFS) is a perceptual suppression technique introduced by Tsuchiya and Koch (2005) about ten years ago. It relies on the phenomenon of binocular rivalry, in which dissimilar visual input presented to the two eyes leads to perceptual alternations between the stimuli presented to each eye. In CFS, an ensemble of rapidly flickering geometrical patterns is presented to one eye, whilst another stimulus is presented to the other eye, yielding prolonged perceptual suppression of that stimulus. Because CFS was presented as a highly effective suppression technique, researchers readily started searching for the boundaries of unconscious processing under such deep perceptual suppression. A particularly promising paradigm proved to be breaking CFS, in which the time it takes for the initially suppressed stimulus to enter awareness is measured. A series of studies was published from which the converging conclusion seemed to be that the perceptually suppressed stimulus was processed in a fully integrated manner, up to the semantic level. For example, semantic congruency violations in sentences could be detected or arithmetic operations could be performed on invisible stimuli. In this talk, I will present three different studies, all challenging the stimulus integration account. Based on our findings, as well as those of other authors, we propose that stimuli suppressed through CFS are represented in a fractionated way, and that processing is limited to elementary parts of the stimulus. We outline some predictions such a model makes, and evidence consistent with it.



January 18, 2018

Speaker: Dr. Christoph Scheepers (Glasgow University - UK, invited by Leigh Fernandez)

Topic: Pupillometric work on emotional resonance in L1 vs. L2 - Pupil dilation as an indicator of reduced emotional resonance in one’s second language

Abstract: A number of behavioural and physiological studies suggest that late bilinguals ‘feel less’ in their second (L2) as opposed to their first language (L1) – a phenomenon dubbed reduced emotional resonance in L2 (e.g., Pavlenko, 2006; Dewaele, 2010). However, few studies to date have carefully controlled for participants’ proficiency in L2 or for variables affecting word recognition (e.g., length, frequency, etc.). The present pupillometry experiments were designed to overcome these shortcomings.

In Experiment 1, 32 Finnish-English and 32 German-English late bilinguals (all highly proficient in English) were tested both in their first language (L1) and in English (L2). An additional control group (32 English monolinguals) was tested only in English. In each language version of the experiment, we presented 30 high-arousal (e.g., “alarm”) and 30 low-arousal (e.g., “swamp”) words, alongside 30 emotionally neutral distractors. Word length, frequency, valence, and abstractness were controlled for both by design and analytically. Participants were shown the stimuli while their pupillary responses were continuously monitored (eye-tracking). The experiment confirmed reliably enhanced pupil dilation in response to high- vs. low-arousal words, but only when participants were tested in their respective L1, even though they were able to recognise the words in L2.

In Experiment 2, 240 English words (80 high-arousal, 80 low-arousal, and 80 distractors, carefully matched on a number of lexical variables) were presented to 116 participants from various language backgrounds (92 bilinguals with English as L2, and 24 monolingual English speakers). All participants were pre-assessed in terms of English proficiency (LexTALE). Again, participants’ pupillary responses were continuously monitored during the main task. There was no difference in pupillary responses to high- vs. low-arousal words in bilinguals (English L2), but there were clear pupillary effects in English monolinguals (English L1). Importantly, this word type * group interaction remained significant even when differences in English proficiency were analytically controlled for. We conclude that reduced emotional resonance in L2 is real, and that it is not due to word recognition difficulties or differences in language proficiency.
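As a rough illustration of what it means to test a word type by group interaction while analytically controlling for proficiency, the sketch below fits a mixed-effects model to synthetic data. It is not the authors' analysis code; the variable names (pupil, word_type, group, lextale, subject) and all data values are illustrative assumptions only.

```python
# Sketch of a word_type x group interaction test with proficiency as a covariate,
# using randomly generated toy data (not the study's data or analysis pipeline).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_items = 20, 30

rows = []
for subj in range(n_subjects):
    group = "monolingual" if subj < 10 else "bilingual"
    lextale = rng.normal(85 if group == "monolingual" else 75, 8)  # hypothetical proficiency score
    for item in range(n_items):
        word_type = "high_arousal" if item % 2 == 0 else "low_arousal"
        # Hypothetical pattern: arousal increases pupil size only in the L1 (monolingual) group.
        effect = 0.05 if (group == "monolingual" and word_type == "high_arousal") else 0.0
        rows.append(dict(subject=subj, group=group, word_type=word_type,
                         lextale=lextale,
                         pupil=0.5 + effect + rng.normal(0, 0.02)))
df = pd.DataFrame(rows)

# Mixed-effects model: fixed effects for word type, group, their interaction,
# and proficiency (lextale) as a covariate; random intercepts per participant.
model = smf.mixedlm("pupil ~ word_type * group + lextale", df, groups=df["subject"])
print(model.fit().summary())
```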




January 25, 2018

Speaker: Prof. Dr. Rosana Tristão (University of Brasília - Brazil, invited by Thomas Lachmann)

Topic: Auditory Processing Disorder and Cognitive Profile in Children with Specific Learning Disorder

Abstract: We have been studying the development of auditory processing (AP) in infants and children with neurodevelopmental disorders due to different causes, such as Down syndrome and prematurity. In this presentation I will focus on AP disorder, its relation to specific learning disorder (SLD) in children, and its impact on cognitive profile and visuomotor skills. We investigated 25 children (7–14 years old) through intelligence and visuomotor tests and an audiologic evaluation that encompassed auditory thresholds; brainstem auditory evoked responses (BERA); event-related potentials (ERP) P3/N2; and behavioral auditory processing tests (APE): dichotic digits (DD), speech in noise (SN), sound localization (LOC), and staggered spondaic words (SSW). Multiple linear regressions were used, and effects were found between AP disorder and WISC tests, IQs and indexes, and visuomotor performance. We concluded that children with altered auditory processing present a specific cognitive profile, including lower verbal and spatial reasoning performance, that is sensitive to parental education level, and that they should undergo a complete multimodal examination to better investigate their specificities.



February 01, 2018

Speaker: Dr. Evan Kidd (MPI for Psycholinguistics - Australian National University, invited by Shanley Allen)

Topic: Individual differences in language acquisition

Abstract: Language acquisition is a developmental process characterised by significant yet stable individual differences. While we should expect individual differences to predict growth within domains (e.g., vocabulary at 12 months predicting vocabulary at 24 months), cross-domain predictive relations are particularly insightful because they can reveal important insights into the process of acquisition, serving to constrain our theoretical models by revealing patterns of representation and drivers of developmental change across time. In this talk I will discuss an ongoing individual differences project being conducted in the ANU Language Lab (https://anulanguagelab.wordpress.com/). The Canberra Longitudinal Child Language Project is a large-scale longitudinal individual differences study of children’s language processing. The study is tracking children’s language processing skills across time and linking them to their subsequent language acquisition, with the aim of moving towards more dynamic mechanistic explanations of the acquisition process. In this talk I will discuss the initial phase of the project, which investigated how children’s segmentation skills relate to later vocabulary development. Using ERPs, we found robust individual differences in 9-month-old children’s ability to extract words from running speech, which subsequently predicted vocabulary development and the children’s ability to learn novel labels.



February 08, 2018

Speaker: Prof. Dr. Sonja A. Kotz (Maastricht University - The Netherlands & Max Planck Institute for Human Cognitive and Brain Sciences - Germany, invited by Patricia Wesseling)

Topic: Multimodal emotional speech perception 

Abstract: Social interactions rely on multiple verbal and non-verbal information sources and their interaction. Crucially, in such communicative interactions we can obtain information not only about the current emotional state of others (‘what’) but also about the timing of these information sources (‘when’). However, the perception and integration of multiple emotion expressions is prone to environmental noise and may be influenced by a specific situational context or learned knowledge. In our work on the temporal and neural correlates of multimodal emotion expressions we address a number of questions by means of ERPs and fMRI within a predictive coding framework. In my talk I will focus on the following questions: (1) How do we integrate verbal and non-verbal emotion expressions? (2) How does noise affect the integration of multiple emotion expressions? (3) How do cognitive demands impact the processing of multimodal emotion expressions? (4) How do we resolve interferences between verbal and non-verbal emotion expressions?




Summer Semester 2017

April 27, 2017

Speaker: Radha Nila Meghanathan (Leuven University, invited by Thomas Lachmann)

Topic: Memory accumulation across sequential eye movements and related electrical brain activity

Abstract: Visual short-term memory for items presented at fixation has been studied extensively, informing us about memory capacity for features and objects, the fidelity of accumulated memory, and the neural correlates of memory load. However, during free viewing, information is accumulated in working memory across sequences of fixations and saccades. We attempted to understand the accumulation of memory across sequential eye movements. We proposed that memory load would be reflected in electrical brain activity (EEG) during fixation intervals. To find this EEG correlate of memory accumulation, we conducted a combined multiple-target visual search and change detection experiment with simultaneous eye movement and EEG recording. Participants were asked to search for and memorize the orientations of 3, 4 or 5 targets in a visual search display in order to perform a subsequent change detection task, in which one of the targets changed orientation in half of the cases. We studied eye movement properties, pupil size, fixation-related brain potentials and EEG of participants during the task. In my talk, I will present the analyses we performed, the obstacles we faced and the solutions we found, and the results that followed, and also discuss our new understanding of working memory and information accumulation.


May 11, 2017

Speaker: Katherine Messenger (Warwick University, invited by Shanley Allen)

Topic: The persistence of priming: Exploring long-lasting syntactic priming effects in children and adults

Abstract: Syntactic priming, the unconscious repetition of syntactic structure across speakers and utterances, has been a key method in demonstrating the psychological reality of abstract syntactic representations that adults recruit in their language processing. Syntactic priming has therefore been applied to test children's knowledge of syntactic structures but more recently it has been framed as a mechanism that can also explain how these structures are acquired. This theory has been instantiated in computational models but support from behavioural evidence is still needed. In this talk I will present research investigating whether syntactic priming effects in children (and adults) are indicative of language learning.


May 18, 2017

Speaker: Bertram Opitz (Surrey University - UK, invited by Thomas Lachmann)

Topic: The mysteries of second language acquisition: a neuroscience perspective

Abstract: One of the most intense debates in second language acquisition regards the critical period hypothesis. This hypothesis claims that there is a critical period during someone's development that enables the acquisition of any human language. Based on research utilising an artificial language learning paradigm I will demonstrate that the same cognitive processes and highly similar brain regions are involved in first and second language acquisition, at least in the areas of the acquisition of syntax, orthography and emotional semantics. I will also demonstrate that cognitive and environmental constraints of the learning process determine individual differences in second language learning.


June 08, 2017

Speaker: Yaïr Pinto (Amsterdam University, invited by Thomas Schmidt)

Topic: What can permanent and temporary split-brain teach us about conscious unity?

Abstract: A healthy human brain only creates one conscious agent. In other words, under normal circumstances consciousness is unified. However, the brain is made up of many, semi-independent modules. So how is this conscious unity possible? Current leading consciousness theories differ on the answer to this question, but intuitively, it seems that informational integration between modules is the key. In the current talk, I will present data that challenges this intuition, and suggests that even without massive communication between modules conscious unity can persist. I will discuss why our intuition may be mistaken, and I will present alternative explanations of conscious unity.


June 22, 2017

Speaker: Norbert Jaušovec (Maribor University - Slovenia, invited by Saskia Jaarsveld)

Topic: Increasing intelligence

Abstract: The “Nürnberger Trichter” – a magic funnel used to pour knowledge, expertise and wisdom into students – demonstrates that the idea of effortless learning and the power of intelligence was “cool” even 500 years ago. Today, noninvasive brain stimulation (NIBS), which involves transcranial direct and alternating current stimulation (tDCS and tACS), as well as random noise (tRNS) and transcranial magnetic stimulation (TMS), could be regarded as a contemporary replacement for the magic funnel. These techniques represent an extension of the more classical methods for cognitive enhancement, such as behavioral training and computer games. On the other hand, there are still a number of alternative approaches that can affect cognitive function. Among the most prominent are nutrition, drugs, exercise, meditation-related reduction in psychological stress, and neurofeedback. The presentation will provide a concise overview of methods claiming to improve cognitive functioning – psychological constructs such as intelligence and working memory. Discussed will be changes in behavior and brain activation patterns observed with the electroencephalogram (EEG), functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI). Examined will be the usefulness of brain training for the man or woman in the street, as well as an additional device that can verify and bring causation into the relations between brain activity and cognition. Modulating brain plasticity and thereby changing network dynamics crucial for intelligent behavior can be a powerful research tool that can elucidate the neurobiological background of intelligence, working memory and other psychological constructs.


June 29, 2017

Speaker: Grégory Simon (Caen Basse-Normandie University - France, invited by Thomas Lachmann)

Topic: Inhibition: a key factor for cognitive development

Abstract: In the Laboratory for the Psychology of Child Development and Education (LaPsyDÉ - https://www.lapsyde.com/), our research projects mostly focus on the key role of inhibition processing during cognitive development. After presenting our major works in this field, I will focus more precisely on applied results from both behavioral and imaging (MRI, ERPs) assessments that deal with reading acquisition.