- Topic Modeling
- Online Learning and Analytics
- Intelligent Tutoring Systems and Adaptive Learning
- Natural Language Processing Techniques
- Machine Learning and Data Classification
- Text Readability and Simplification
- Advanced Neural Network Applications
- Artificial Intelligence in Healthcare and Education
- Explainable Artificial Intelligence (XAI)
- Educational Games and Gamification
- Multimodal Machine Learning Applications
- Educational Technology and Assessment
- Innovative Teaching and Learning Methods
- Software Engineering Research
- Visual and Cognitive Learning Processes
- Educational Assessment and Pedagogy
Technical University of Munich
2023-2025
TH Bingen University of Applied Sciences
2022
Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous data sets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, their adaptation to tabular data for inference or data generation tasks remains challenging. To facilitate further progress in the field, this work provides an overview of state-of-the-art deep learning methods for tabular data. We categorize these methods into three groups:...
Large language models represent a significant advancement in the field of AI. The underlying technology is key to further innovations and, despite critical views and even bans within communities and regions, large language models are here to stay. This position paper presents the potential benefits and challenges of educational applications of large language models, from student and teacher perspectives. We briefly discuss the current state of large language models and their applications. We then highlight how these models can be used to create educational content, improve student engagement and interaction,...
Identifying logical errors in complex, incomplete or even contradictory and overall heterogeneous data like students' experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 protocols, an Artificial Intelligence (AI) system based on GPT-3.5...
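As a rough illustration of how such an LLM-based check might be wired up (the prompt wording, the error categories, and the helper name below are hypothetical and not the study's actual pipeline), a minimal sketch using the OpenAI chat API could look like this:

```python
# Hypothetical sketch: asking GPT-3.5 to flag logical errors in a student protocol.
# The system prompt and error examples are illustrative assumptions, not the study's instrument.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def identify_logical_errors(protocol_text: str) -> str:
    """Return the model's free-text list of suspected logical errors."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You review students' experimentation protocols and list logical "
                        "errors (e.g., missing control condition, conclusion not supported "
                        "by the recorded data)."},
            {"role": "user", "content": protocol_text},
        ],
        temperature=0,  # keep the output as stable as possible for assessment use
    )
    return response.choices[0].message.content

print(identify_logical_errors("Hypothesis: plants grow faster with salt water. ..."))
```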
The integration of Artificial Intelligence (AI), particularly Large Language Model (LLM)-based systems, in education has shown promise in enhancing teaching and learning experiences. However, the advent of Multimodal Large Language Models (MLLMs) like GPT-4 with vision (GPT-4V), capable of processing multimodal data including text, sound, and visual inputs, opens a new era of enriched, personalized, and interactive learning landscapes in education. Grounded in the theory of multimedia learning, this paper explores the transformative role of MLLMs in central...
Tabular data is among the oldest and most ubiquitous forms of data. However, the generation of synthetic samples with the original data's characteristics remains a significant challenge for tabular data. While many generative models from the computer vision domain, such as variational autoencoders or generative adversarial networks, have been adapted for tabular data generation, less research has been directed towards recent transformer-based large language models (LLMs), which are also generative in nature. To this end, we propose GReaT (Generation of Realistic...
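To illustrate the general idea of treating table rows as text so that a language model can work with them (a simplified sketch in the spirit of LLM-based tabular generation, not the GReaT implementation; the column names, values, and serialization format are assumptions), one could encode and decode rows like this:

```python
# Illustrative sketch: textual encoding of tabular rows for use with a language model.
# The toy table and the "column is value" format are made up for demonstration.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 51],
    "occupation": ["teacher", "engineer"],
    "income": [42000, 67000],
})

def row_to_text(row: pd.Series) -> str:
    """Serialize one table row as comma-separated 'column is value' clauses."""
    return ", ".join(f"{col} is {val}" for col, val in row.items())

def text_to_row(text: str) -> dict:
    """Parse the textual encoding back into a column -> value mapping."""
    parts = [clause.strip().split(" is ", 1) for clause in text.split(",")]
    return {col: val for col, val in parts}

encoded = [row_to_text(row) for _, row in df.iterrows()]
print(encoded[0])               # "age is 34, occupation is teacher, income is 42000"
print(text_to_row(encoded[0]))  # {'age': '34', 'occupation': 'teacher', 'income': '42000'}
```

In the LLM-based setting, such textual encodings would be used to fine-tune or prompt a generative model, and sampled text would be parsed back into rows; the sketch only shows the round-trip encoding step.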
Effective feedback is essential for fostering students' success in scientific inquiry. With advancements in artificial intelligence, large language models (LLMs) offer new possibilities for delivering instant and adaptive feedback. However, this feedback often lacks the pedagogical validation provided by real-world practitioners. To address this limitation, our study evaluates and compares the quality of feedback generated by LLM agents with that of human teachers and science education experts on student-written experimentation protocols. Four...
The manual assessment and grading of student writing is a time-consuming yet critical task for teachers. Recent developments in generative AI, such as large language models, offer potential solutions to facilitate essay-scoring tasks. In our study, we evaluate the performance and reliability of both open-source and closed-source LLMs in assessing German essays, comparing their evaluations to those of 37 teachers across 10 pre-defined criteria (i.e., plot logic, expression). A corpus of 20 real-world essays from...
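A hedged sketch of how per-criterion essay scores might be requested from an LLM (the 1-5 scale, the prompt wording, and the model name are illustrative assumptions; the study itself uses 10 criteria and several open- and closed-source models):

```python
# Hypothetical sketch: asking an LLM to rate an essay on pre-defined criteria and
# return machine-readable scores. Scale, prompt, and model are placeholders.
import json
from openai import OpenAI

client = OpenAI()
CRITERIA = ["plot logic", "expression"]  # the study uses 10 criteria in total

def score_essay(essay: str) -> dict:
    """Return a criterion -> score mapping parsed from the model's JSON answer."""
    prompt = (
        "Rate the following German student essay on each criterion from 1 (poor) "
        f"to 5 (excellent). Criteria: {', '.join(CRITERIA)}. "
        'Answer as JSON, e.g. {"plot logic": 3, "expression": 4}.\n\n' + essay
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder closed-source model
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
```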
The use of Large Language Models (LLMs) in mathematical reasoning has become a cornerstone of related research, demonstrating the intelligence of these models and enabling potential practical applications through their advanced performance, such as in educational settings. Despite the variety of datasets and in-context learning algorithms designed to improve the ability of LLMs to automate problem solving, the lack of comprehensive benchmarking across different datasets makes it complicated to select an appropriate model for specific...
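As a minimal sketch of the kind of benchmarking loop such a comparison implies (the toy problems, the answer-extraction heuristic, and the model choice are assumptions; real math benchmarks require far more careful answer normalization):

```python
# Illustrative sketch: exact-match accuracy of an LLM on a tiny set of math word
# problems. Problems and model are placeholders, not the paper's benchmark suite.
import re
from openai import OpenAI

client = OpenAI()

PROBLEMS = [
    {"question": "Tom has 3 boxes with 4 apples each. How many apples does he have?",
     "answer": "12"},
    {"question": "A train travels 60 km in 1.5 hours. What is its speed in km/h?",
     "answer": "40"},
]

def last_number(text: str) -> str:
    """Crude heuristic: treat the last number in the reply as the final answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
    return numbers[-1] if numbers else ""

correct = 0
for item in PROBLEMS:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user",
                   "content": item["question"] + " Answer with the final number."}],
        temperature=0,
    ).choices[0].message.content
    correct += last_number(reply) == item["answer"]

print(f"accuracy: {correct / len(PROBLEMS):.2f}")
```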