Melissa L.‐H. Võ

ORCID: 0000-0003-1145-4473
About
Research Areas
  • Visual Attention and Saliency Detection
  • Visual perception and processing mechanisms
  • Gaze Tracking and Assistive Technology
  • Face Recognition and Perception
  • Neural and Behavioral Psychology Studies
  • Advanced Image and Video Retrieval Techniques
  • Categorization, perception, and language
  • Child and Animal Learning Development
  • Image Retrieval and Classification Techniques
  • Language, Metaphor, and Cognition
  • Multisensory perception and integration
  • Memory Processes and Influences
  • Virtual Reality Applications and Impacts
  • Olfactory and Sensory Function Studies
  • Neurobiology of Language and Bilingualism
  • Reading and Literacy Development
  • Neural dynamics and brain function
  • Visual and Cognitive Learning Processes
  • Spatial Cognition and Navigation
  • Neural Networks and Applications
  • Speech and dialogue systems
  • Infrared Target Detection Methodologies
  • Action Observation and Synchronization
  • Augmented Reality Applications
  • Radiology practices and education

Goethe University Frankfurt
2016-2025

Ludwig-Maximilians-Universität München
2007-2025

Goethe Institute
2017-2025

Hôpital Bretonneau
2024

Centre Hospitalier Universitaire de Tours
2024

Individual Development and Adaptive Education
2017-2020

Brigham and Women's Hospital
2010-2014

Harvard University
2010-2014

University of Edinburgh
2008-2010

Freie Universität Berlin
2005-2007

Researchers have shown that people often miss the occurrence of an unexpected yet salient event if they are engaged in a different task, a phenomenon known as inattentional blindness. However, demonstrations of inattentional blindness have typically involved naive observers engaged in an unfamiliar task. What about expert searchers who have spent years honing their ability to detect small abnormalities in specific types of images? We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was...

10.1177/0956797613479386 article EN Psychological Science 2013-07-17

Abstract Mixed-effects models are a powerful tool for modeling fixed and random effects simultaneously, but do not offer a feasible analytic solution for estimating the probability that a test correctly rejects the null hypothesis. Being able to estimate this probability, however, is critical for sample size planning, as power is closely linked to the reliability and replicability of empirical findings. A flexible and very intuitive alternative to analytic solutions are simulation-based power analyses. Although various tools for conducting such analyses...

10.3758/s13428-021-01546-0 article EN cc-by Behavior Research Methods 2021-05-05
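The simulation-based logic summarized in the abstract above can be sketched briefly: simulate many datasets under an assumed effect size and random-effect structure, fit the mixed model to each, and take the proportion of significant tests as the power estimate. The Python/statsmodels snippet below is only a minimal illustration of that idea, not the paper's own tooling; the function names and all parameter values (effect size, variances, numbers of subjects and trials) are illustrative assumptions.

# Minimal sketch of a simulation-based power analysis for a linear
# mixed-effects model with random intercepts per subject.
# All parameter values are assumptions chosen for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_dataset(n_subjects=30, n_trials=40, beta=0.3,
                     sd_intercept=0.5, sd_residual=1.0, rng=None):
    """Simulate one dataset: y = beta * x + subject intercept + noise."""
    rng = rng or np.random.default_rng()
    subj = np.repeat(np.arange(n_subjects), n_trials)
    x = rng.normal(size=n_subjects * n_trials)           # continuous predictor
    u = rng.normal(0, sd_intercept, n_subjects)[subj]    # random intercepts
    y = beta * x + u + rng.normal(0, sd_residual, n_subjects * n_trials)
    return pd.DataFrame({"y": y, "x": x, "subject": subj})

def estimate_power(n_sims=200, alpha=0.05, **sim_kwargs):
    """Fit the mixed model to each simulated dataset; count rejections."""
    rng = np.random.default_rng(1)
    hits = 0
    for _ in range(n_sims):
        data = simulate_dataset(rng=rng, **sim_kwargs)
        fit = smf.mixedlm("y ~ x", data, groups=data["subject"]).fit()
        hits += fit.pvalues["x"] < alpha                  # significant fixed effect?
    return hits / n_sims

if __name__ == "__main__":
    print(f"Estimated power: {estimate_power(n_subjects=30, n_trials=40):.2f}")

Repeating the same loop over a grid of subject and trial counts would, under these assumptions, indicate the smallest design that reaches the desired power.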

In sentence processing, semantic and syntactic violations elicit differential brain responses observable in event-related potentials: An N400 signals semantic violations, whereas a P600 marks inconsistent syntactic structure. Does the brain register similar distinctions in scene perception? To address this question, we presented participants with inconsistencies in which an object was either incongruent with the scene's meaning or violated the scene's structural rules. We found a clear dissociation between semantic and syntactic processing: Semantic inconsistencies produced...

10.1177/0956797613476955 article EN Psychological Science 2013-07-10

Abstract The study presented here investigated the effects of emotional valence on memory for words by assessing both memory performance and pupillary responses during a recognition task. Participants had to make speeded judgments on whether a word in the test phase of the experiment had already been presented (“old”) or not (“new”). An emotion‐induced bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect...

10.1111/j.1469-8986.2007.00606.x article EN Psychophysiology 2007-10-02

It has been shown that attention and eye movements during scene perception are preferentially allocated to semantically inconsistent objects compared to their consistent controls. However, there is a dispute over how early during viewing such inconsistencies are detected. In the study presented here, we introduced syntactic object–scene inconsistencies (i.e., floating objects) in addition to semantic inconsistencies to investigate the degree to which they attract viewing. In Experiment 1, participants viewed scenes in preparation for a subsequent memory task,...

10.1167/9.3.24 article EN cc-by-nc-nd Journal of Vision 2009-03-01

Abstract What controls gaze allocation during dynamic face perception? We monitored participants' eye movements while they watched videos featuring close-ups of pedestrians engaged in interviews. Contrary to previous findings using static displays, we observed no general preference to fixate the eyes. Instead, gaze was dynamically directed to the eyes, nose, or mouth in response to the currently depicted event. Fixations to the eyes increased when a face made eye contact with the camera, and fixations to the mouth increased when the person was speaking. When the face moved quickly,...

10.1167/12.13.3 article EN cc-by-nc-nd Journal of Vision 2012-12-03

Abstract Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain search strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology...

10.1167/13.10.3 article EN cc-by-nc-nd Journal of Vision 2013-08-06

Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in the living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar and its effects on behavior and memory have received little...

10.1038/s41598-017-16739-x article EN cc-by Scientific Reports 2017-11-22

A brief glimpse of a scene is sufficient to comprehend its gist. Does the information available from such a glimpse also support further exploration? In five experiments, we investigated the role of initial scene processing on eye movement guidance for visual search in scenes. We used the flash-preview moving-window paradigm to separate the preview duration from the subsequent search. By varying preview durations, we found that a 75-ms preview was sufficient to lead to increased search benefits compared to a no-preview control. Search efficiency ... by inserting additional scene-target...

10.1167/10.3.14 article EN cc-by-nc-nd Journal of Vision 2010-01-01

One might assume that familiarity with a scene or previous encounters with objects embedded in it would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing familiarity. Similarly, looking at the target during previews, which included letter search, 30 seconds of free viewing, or even memorizing the scene, also did not benefit later...

10.1037/a0024147 article EN Journal of Experimental Psychology Human Perception & Performance 2011-06-20

The arrangement of the contents of real-world scenes follows certain spatial rules that allow for extremely efficient visual exploration. What remains underexplored is the role that different types of objects hold in a scene. In the current work, we seek to unveil an important building block of scenes—anchor objects. Anchors hold specific predictions regarding the likely position of other objects in the environment. In a series of three eye tracking experiments we tested what role anchor objects occupy during visual search. In all experiments, participants searched through...

10.1167/18.13.11 article EN cc-by-nc-nd Journal of Vision 2018-12-18

Abstract Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as real-world, semantically meaningful images. Surprisingly, when participants were asked to recall the scenes, memory performance was markedly better for...

10.1167/14.8.10 article EN cc-by-nc-nd Journal of Vision 2014-07-11

People are surprisingly bad at knowing where they have looked in a scene. We tested participants' ability to recall their own eye movements in 2 experiments using natural or artificial scenes. In each experiment, participants performed a change-detection (Exp. 1) or search (Exp. 2) task. On 25% of trials, after 3 seconds of viewing the scene, they were asked to indicate where they thought they had just fixated. They responded by making mouse clicks on 12 locations in the unchanged scene. After ..., 135 observers saw 10 new scenes and put someone...

10.1037/xhp0000264 article EN other-oa Journal of Experimental Psychology Human Perception & Performance 2016-09-26