Teresa Yeo

ORCID: 0000-0003-4971-9246
Research Areas
  • Domain Adaptation and Few-Shot Learning
  • Adversarial Robustness in Machine Learning
  • Anomaly Detection Techniques and Applications
  • Machine Learning and Data Classification
  • Advanced Neural Network Applications
  • Machine Learning and Algorithms
  • Visual Attention and Saliency Detection
  • Topic Modeling
  • COVID-19 diagnosis using AI
  • Image Enhancement Techniques
  • Generative Adversarial Networks and Image Synthesis
  • Control Systems and Identification
  • Multimodal Machine Learning Applications
  • Educational Technology and Assessment
  • Natural Language Processing Techniques
  • Explainable Artificial Intelligence (XAI)
  • Data Visualization and Analytics
  • Gaussian Processes and Bayesian Inference
  • Advanced Data Processing Techniques
  • Neural Networks and Applications
  • Human Pose and Action Recognition
  • Bayesian Methods and Mixture Models
  • Video Analysis and Summarization
  • Intelligent Tutoring Systems and Adaptive Learning

École Polytechnique Fédérale de Lausanne
2021-2023

Swiss Epilepsy Center
2018-2019

We introduce a set of image transformations that can be used as corruptions to evaluate the robustness of models as well as data augmentation mechanisms for training neural networks. The primary distinction of the proposed transformations is that, unlike existing approaches such as Common Corruptions [27], the geometry of the scene is incorporated in the corruptions - thus leading to corruptions that are more likely to occur in the real world. We also introduce a set of semantic corruptions (e.g. natural object occlusions. See Fig. 1). We show these corruptions are 'efficient' (can be computed on-the-fly), 'extendable' (can be applied on most...

10.1109/cvpr52688.2022.01839 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01
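
As a rough illustration of the depth-aware corruption idea described above (not the paper's implementation), the sketch below applies a defocus-style blur whose strength grows with distance from a chosen focal plane; the function name, the band quantization, and the parameter values are assumptions made for this example.

```python
# Minimal sketch (not the paper's code): a depth-aware defocus corruption
# where blur strength increases with distance from a chosen focal plane.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_aware_blur(image, depth, focal_depth=0.3, max_sigma=4.0, n_bands=8):
    """image: HxWx3 float array in [0, 1]; depth: HxW float array in [0, 1]."""
    # Blur strength grows with distance from the focal plane.
    sigma_map = max_sigma * np.abs(depth - focal_depth)
    out = np.zeros_like(image)
    # Quantize the sigma map into bands and blur each band separately:
    # a cheap stand-in for a true per-pixel (layered) defocus model.
    edges = np.linspace(0.0, max_sigma, n_bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (sigma_map >= lo) & (sigma_map < hi) if hi < max_sigma else (sigma_map >= lo)
        if not mask.any():
            continue
        sigma = 0.5 * (lo + hi)
        blurred = image if sigma < 1e-3 else np.stack(
            [gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1)
        out[mask] = blurred[mask]
    return out
```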

Machine learning models, meticulously optimized for source data, often fail to predict target data when faced with distribution shifts (DSs). Previous benchmarking studies, though extensive, have mainly focused on simple DSs. Recognizing that DSs occur in more complex forms in real-world scenarios, we broadened our study to include multiple concurrent shifts, such as unseen domain shifts combined with spurious correlations. We evaluated 26 algorithms that range from heuristic augmentations to zero-shot inference...

10.48550/arxiv.2501.04288 preprint EN arXiv (Cornell University) 2025-01-08

Current machine learning models for vision are often highly specialized and limited to a single modality and task. In contrast, recent large language models exhibit a wide range of capabilities, hinting at a possibility for similarly versatile models in computer vision. In this paper, we take a step in this direction and propose a multimodal training scheme called 4M. It consists of training a single unified Transformer encoder-decoder using a masked modeling objective across a wide range of input/output modalities - including text, images, geometric, and semantic modalities,...

10.48550/arxiv.2312.06647 preprint EN other-oa arXiv (Cornell University) 2023-01-01
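
A minimal sketch of the masked multimodal modeling setup as described in the abstract, assuming tokenized modalities; the function and data layout below are hypothetical, not 4M's actual code. It only shows how visible input tokens and masked target tokens might be sampled across modalities.

```python
# Minimal sketch (an assumption, not the 4M code): masked modeling across
# tokenized modalities -- sample a small set of visible input tokens and a
# small set of masked target tokens from the pooled modalities.
import random

def sample_masked_multimodal_batch(token_seqs, n_inputs=96, n_targets=96):
    """token_seqs: dict mapping modality name -> list of (position, token_id)."""
    # Pool tokens from all modalities, tagging each with its modality.
    pool = [(mod, pos, tok) for mod, seq in token_seqs.items() for pos, tok in seq]
    random.shuffle(pool)
    inputs = pool[:n_inputs]                        # visible tokens fed to the encoder
    targets = pool[n_inputs:n_inputs + n_targets]   # masked tokens the decoder must predict
    return inputs, targets
```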

We present a method for making neural network predictions robust to shifts from the training data distribution. The proposed method is based on making predictions via a diverse set of cues (called 'middle domains') and ensembling them into one strong prediction. The premise of the idea is that predictions made via different cues respond differently to a distribution shift, hence one should be able to merge them into a robust final prediction. We perform the merging in a straightforward but principled manner based on the uncertainty associated with each prediction. The evaluations are performed using multiple tasks and datasets...

10.1109/iccv48922.2021.01197 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01
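
A minimal sketch of how an uncertainty-aware merge over several 'middle domain' paths could look, assuming each path yields a prediction and a variance map; this illustrates the stated premise and is not the paper's implementation.

```python
# Minimal sketch (an illustration, not the paper's code): merging predictions
# from several "middle domain" paths by inverse-variance weighting, so less
# certain paths contribute less to the final prediction.
import numpy as np

def merge_predictions(preds, variances, eps=1e-8):
    """preds, variances: lists of HxW arrays, one pair per middle-domain path."""
    preds = np.stack(preds)                         # (K, H, W)
    weights = 1.0 / (np.stack(variances) + eps)     # higher uncertainty -> lower weight
    weights /= weights.sum(axis=0, keepdims=True)   # normalize over the K paths
    return (weights * preds).sum(axis=0)            # uncertainty-weighted merge
```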

We consider the machine teaching problem in a classroom-like setting wherein the teacher has to deliver the same examples to a diverse group of students. Their diversity stems from differences in their initial internal states as well as their learning rates. We prove that a teacher with full knowledge about the learning dynamics of the students can teach a target concept to the entire classroom using O(min{d,N} log(1/ɛ)) examples, where d is the ambient dimension of the problem, N is the number of learners, and ɛ is the accuracy parameter. We show the robustness of our teaching strategy when the teacher has limited...

10.1609/aaai.v33i01.33015684 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2019-07-17
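
A tiny numeric illustration of the stated O(min{d,N} log(1/ɛ)) bound for example values of the dimension, classroom size, and accuracy (the hidden constant factor is ignored):

```python
# Numeric illustration of the teaching bound O(min{d, N} log(1/eps)).
import math

d, N, eps = 100, 30, 1e-3                    # ambient dimension, number of learners, accuracy
bound = min(d, N) * math.log(1 / eps)
print(f"examples needed (up to a constant): ~{bound:.0f}")   # ~207
```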

In this work, we present a method to control a text-to-image generative model to produce training data specifically "useful" for supervised learning. Unlike previous works that employ an open-loop approach and pre-define prompts to generate new data using either a language model or human expertise, we develop an automated closed-loop system which involves two feedback mechanisms. The first mechanism uses feedback from a given supervised model and finds adversarial prompts that result in image generations that maximize the model loss. While these adversarial prompts result in diverse data informed by the model,...

10.48550/arxiv.2403.15309 preprint EN arXiv (Cornell University) 2024-03-22
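
A minimal sketch of the closed-loop idea from the abstract, with all function names (generator, propose_prompts, etc.) hypothetical: candidate prompts are scored by the supervised model's loss on their generated images, and the most adversarial ones are kept as new training data.

```python
# Minimal sketch of a closed-loop data generation step (hypothetical names):
# search for prompts whose generated images maximize the supervised model's
# loss, then add those images to the training set.
def adversarial_prompt_loop(model, generator, loss_fn, propose_prompts, rounds=5, top_k=4):
    new_data = []
    for _ in range(rounds):
        candidates = propose_prompts()                  # e.g. from a language model
        scored = []
        for prompt in candidates:
            image, label = generator(prompt)            # text-to-image sample + its label
            scored.append((loss_fn(model(image), label), prompt, image, label))
        scored.sort(key=lambda s: s[0], reverse=True)   # keep the most adversarial prompts
        new_data.extend((img, lbl) for _, _, img, lbl in scored[:top_k])
    return new_data
```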

We introduce a set of image transformations that can be used as corruptions to evaluate the robustness of models as well as data augmentation mechanisms for training neural networks. The primary distinction of the proposed transformations is that, unlike existing approaches such as Common Corruptions, the geometry of the scene is incorporated in the corruptions -- thus leading to corruptions that are more likely to occur in the real world. We also introduce a set of semantic corruptions (e.g. natural object occlusions). We show these corruptions are `efficient' (can be computed on-the-fly), `extendable' (can be applied on most datasets), expose...

10.48550/arxiv.2203.01441 preprint EN other-oa arXiv (Cornell University) 2022-01-01

When developing deep learning models, we usually decide what task we want to solve then search for a model that generalizes well on the task. An intriguing question would be: what if, instead of fixing the task and searching in the model space, we fix the model and search in the task space? Can we find tasks that the model generalizes on? How do they look, or do they indicate anything? These are the questions we address in this paper. We propose a task discovery framework that automatically finds examples of such tasks via optimizing a generalization-based quantity called agreement score. We demonstrate that one set of images can...

10.48550/arxiv.2212.00261 preprint EN other-oa arXiv (Cornell University) 2022-01-01
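
A minimal sketch of an agreement-score computation consistent with the abstract's description, assuming a hypothetical train_fn: two networks trained with different seeds on the same candidate labeling are compared on held-out images.

```python
# Minimal sketch (an assumption about the setup, not the paper's code): the
# agreement score of a candidate task -- train two networks with different
# seeds on the same labeling and measure how often they agree on held-out data.
import numpy as np

def agreement_score(train_fn, X_train, y_task, X_heldout, seeds=(0, 1)):
    """train_fn(X, y, seed) -> a fitted model with a .predict method (hypothetical)."""
    model_a = train_fn(X_train, y_task, seed=seeds[0])
    model_b = train_fn(X_train, y_task, seed=seeds[1])
    return float(np.mean(model_a.predict(X_heldout) == model_b.predict(X_heldout)))
```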

We present a method for making neural network predictions robust to shifts from the training data distribution. The proposed method is based on making predictions via a diverse set of cues (called 'middle domains') and ensembling them into one strong prediction. The premise of the idea is that predictions made via different cues respond differently to a distribution shift, hence one should be able to merge them into a robust final prediction. We perform the merging in a straightforward but principled manner based on the uncertainty associated with each prediction. The evaluations are performed using multiple tasks and datasets...

10.48550/arxiv.2103.10919 preprint EN other-oa arXiv (Cornell University) 2021-01-01

We propose a method for adapting neural networks to distribution shifts at test-time. In contrast to training-time robustness mechanisms that attempt to anticipate and counter the shift, we create a closed-loop system and make use of a test-time feedback signal to adapt the network on the fly. We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network. This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders...

10.48550/arxiv.2309.15762 preprint EN other-oa arXiv (Cornell University) 2023-01-01
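
A minimal sketch of a test-time feedback loop in the spirit of the abstract, with hypothetical controller and feedback_fn callables standing in for the learned amortized optimizer and the feedback signal; this is not the RNA implementation.

```python
# Minimal sketch of test-time adaptation via a feedback loop (hypothetical
# names): a small controller consumes a feedback signal and adjusts the main
# network's prediction on the fly, instead of running gradient-descent steps.
def adapt_at_test_time(network, controller, x, feedback_fn, steps=3):
    state = None
    pred = network(x)
    for _ in range(steps):
        error = feedback_fn(x, pred)                    # test-time feedback signal
        pred, state = controller(pred, error, state)    # amortized (learned) update
    return pred
```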

We propose a method for adapting neural networks to distribution shifts at test-time. In contrast to training-time robustness mechanisms that attempt to anticipate and counter the shift, we create a closed-loop system and make use of a test-time feedback signal to adapt the network on the fly. We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network. This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders...

10.1109/iccv51070.2023.00431 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01

We consider the machine teaching problem in a classroom-like setting wherein the teacher has to deliver the same examples to a diverse group of students. Their diversity stems from differences in their initial internal states as well as their learning rates. We prove that a teacher with full knowledge about the learning dynamics of the students can teach a target concept to the entire classroom using O(min{d,N} log(1/eps)) examples, where d is the ambient dimension of the problem, N is the number of learners, and eps is the accuracy parameter. We show the robustness of our teaching strategy when the teacher has limited...

10.48550/arxiv.1811.03537 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Visual perception entails solving a wide set of tasks, e.g., object detection, depth estimation, etc. The predictions made for multiple tasks from the same image are not independent, and therefore, are expected to be consistent. We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency. The proposed formulation is based on inference-path invariance over a graph of arbitrary tasks. We observe that learning with cross-task consistency leads to more accurate predictions and better generalization...

10.48550/arxiv.2006.04096 preprint EN other-oa arXiv (Cornell University) 2020-01-01
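
A minimal sketch of an inference-path invariance term consistent with the abstract, using hypothetical task networks (e.g. image to depth, image to normals, depth to normals); the method described above applies to a graph of arbitrary tasks, of which this single triangle is the simplest case.

```python
# Minimal sketch (illustrative, not the released code): predicting task y2
# directly from the image should agree with predicting it via an intermediate
# task y1, i.e. the two inference paths should be invariant.
import torch.nn.functional as F

def cross_task_consistency_loss(x, f_x_to_y1, f_x_to_y2, f_y1_to_y2):
    y1 = f_x_to_y1(x)                 # e.g. image -> depth
    y2_direct = f_x_to_y2(x)          # e.g. image -> surface normals
    y2_via_y1 = f_y1_to_y2(y1)        # e.g. depth -> surface normals
    return F.l1_loss(y2_via_y1, y2_direct)
```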