- Humor Studies and Applications
- Video Analysis and Summarization
- Multimodal Machine Learning Applications
- Explainable Artificial Intelligence (XAI)
- Artificial Intelligence in Healthcare and Education
- Topic Modeling
- Online Learning and Analytics
École Polytechnique Fédérale de Lausanne
2023-2024
Language models (LMs) have recently shown remarkable performance on reasoning tasks by explicitly generating intermediate inferences, e.g., chain-of-thought prompting. However, these intermediate inference steps may be inappropriate deductions from the initial context and lead to incorrect final predictions. Here we introduce REFINER, a framework for finetuning LMs to explicitly generate intermediate reasoning steps while interacting with a critic model that provides automated feedback on the reasoning. Specifically, the critic provides structured feedback that the reasoning LM uses to iteratively improve...
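To make the interaction pattern concrete, here is a minimal sketch of a generate-critique-refine loop of the kind the abstract describes. The `generator` and `critic` callables are hypothetical stand-ins for the finetuned LM and the critic model; only the control flow is taken from the description above, not from the paper's implementation.

```python
from typing import Callable, Optional

def refine(
    context: str,
    generator: Callable[[str], str],              # LM: prompt -> intermediate reasoning
    critic: Callable[[str, str], Optional[str]],  # (context, reasoning) -> feedback, or None if acceptable
    max_rounds: int = 3,
) -> str:
    """Iteratively improve intermediate reasoning using critic feedback."""
    reasoning = generator(context)
    for _ in range(max_rounds):
        feedback = critic(context, reasoning)
        if feedback is None:  # critic raises no objection: stop refining
            break
        # Condition the next generation on the structured feedback.
        prompt = f"{context}\nFeedback: {feedback}\nRevise the reasoning."
        reasoning = generator(prompt)
    return reasoning
```

The loop terminates either when the critic accepts the reasoning or after a fixed budget of refinement rounds, mirroring the iterative improvement the abstract attributes to the critic's structured feedback.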
AI assistants, such as ChatGPT, are being increasingly used by students in higher education institutions. While these tools provide opportunities for improved teaching and education, they also pose significant challenges for assessment and learning outcomes. We conceptualize these challenges through the lens of vulnerability, the potential for university assessments and learning outcomes to be impacted by student use of generative AI. We investigate the scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard...
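As a rough illustration of the kind of measurement the abstract describes, the sketch below computes the share of assessment questions an assistant answers correctly. `ask_assistant` and `is_correct` are hypothetical placeholders (e.g., an API wrapper and a grading rule), not the paper's actual evaluation pipeline.

```python
from typing import Callable

def vulnerability_score(
    questions: list[tuple[str, str]],        # (question, reference answer) pairs
    ask_assistant: Callable[[str], str],     # e.g., a wrapper around a ChatGPT API call
    is_correct: Callable[[str, str], bool],  # grader: (assistant answer, reference) -> bool
) -> float:
    """Fraction of assessment questions the assistant completes correctly."""
    if not questions:
        return 0.0
    correct = sum(is_correct(ask_assistant(q), ref) for q, ref in questions)
    return correct / len(questions)
```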
The automatic detection of humor poses a grand challenge for natural language processing. Transformer-based systems have recently achieved remarkable results on this task, but they usually (1) were evaluated in setups where serious vs. humorous texts came from entirely different sources, and (2) focused on benchmarking performance without providing insights into how the models work. We make progress in both respects by training and analyzing transformer-based humor recognition models on a recently introduced dataset consisting...
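A dataset of aligned minimal pairs (one serious, one humorous version of a sentence) enables a controlled pairwise evaluation rather than classification across mismatched sources. The sketch below is a hypothetical illustration of that idea, not the paper's evaluation code; the `humor_score` callable stands in for any transformer-based humor classifier's probability of the "humorous" label.

```python
from typing import Callable

def pairwise_accuracy(
    pairs: list[tuple[str, str]],        # aligned (serious, humorous) minimal pairs
    humor_score: Callable[[str], float], # model's estimated P(humorous | text)
) -> float:
    """Fraction of minimal pairs where the humorous member scores higher."""
    if not pairs:
        return 0.0
    hits = sum(humor_score(humorous) > humor_score(serious)
               for serious, humorous in pairs)
    return hits / len(pairs)
```

Because the two members of each pair differ only minimally, this metric isolates the model's sensitivity to humor itself rather than to source-specific style cues.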