Kathrin Seßler

ORCID: 0000-0002-3380-4641
Research Areas
  • Topic Modeling
  • Online Learning and Analytics
  • Intelligent Tutoring Systems and Adaptive Learning
  • Natural Language Processing Techniques
  • Machine Learning and Data Classification
  • Text Readability and Simplification
  • Advanced Neural Network Applications
  • Artificial Intelligence in Healthcare and Education
  • Explainable Artificial Intelligence (XAI)
  • Educational Games and Gamification
  • Multimodal Machine Learning Applications
  • Educational Technology and Assessment
  • Innovative Teaching and Learning Methods
  • Software Engineering Research
  • Visual and Cognitive Learning Processes
  • Educational Assessment and Pedagogy

Technical University of Munich
2023-2025

TH Bingen University of Applied Sciences
2022

Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous data sets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, their adaptation to tabular data for inference or generation tasks remains challenging. To facilitate further progress in the field, this work provides an overview of state-of-the-art deep learning methods for tabular data. We categorize these methods into three groups:...

10.1109/tnnls.2022.3229161 article EN cc-by IEEE Transactions on Neural Networks and Learning Systems 2022-12-23

Large language models represent a significant advancement in the field of AI. The underlying technology is key to further innovations and, despite critical views and even bans within communities and regions, large language models are here to stay. This position paper presents the potential benefits and challenges of educational applications of large language models, from student and teacher perspectives. We briefly discuss the current state of large language models and their applications. We then highlight how these models can be used to create educational content, improve engagement and interaction,...

10.35542/osf.io/5er8f preprint EN 2023-01-30

Identifying logical errors in complex, incomplete or even contradictory and overall heterogeneous data like students' experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student experimentation protocols, an Artificial Intelligence (AI) system based on GPT-3.5...

10.1016/j.caeai.2023.100177 article EN cc-by-nc-nd Computers and Education Artificial Intelligence 2023-01-01
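
To make the approach above concrete, here is a minimal sketch of how an LLM can be prompted to flag logical errors in a single experimentation protocol. The prompt wording, the example error categories, and the use of the OpenAI chat API with a GPT-3.5-class model are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: asking a GPT-3.5-class model to flag logical errors in a
# student experimentation protocol. Prompt wording, output format, and error
# categories are illustrative assumptions, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "You are assisting a science teacher. Read the student's experimentation "
    "protocol below and list any logical errors (e.g., missing control "
    "condition, conclusion not supported by the observations). "
    "Answer as a numbered list.\n\nProtocol:\n{protocol}"
)

def flag_logical_errors(protocol: str, model: str = "gpt-3.5-turbo") -> str:
    """Return the model's list of suspected logical errors for one protocol."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(protocol=protocol)}],
        temperature=0,  # keep the assessment output as deterministic as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    example = "We heated the water and it boiled, so all liquids boil at 100 °C."
    print(flag_logical_errors(example))
```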

The integration of Artificial Intelligence (AI), particularly Large Language Model (LLM)-based systems, in education has shown promise in enhancing teaching and learning experiences. However, the advent of Multimodal Large Language Models (MLLMs) like GPT-4 with vision (GPT-4V), capable of processing multimodal data including text, sound, and visual inputs, opens a new era of enriched, personalized, and interactive learning landscapes in education. Grounded in the theory of multimedia learning, this paper explores the transformative role of MLLMs in central...

10.48550/arxiv.2401.00832 preprint EN cc-by-nc-nd arXiv (Cornell University) 2024-01-01
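
As a small illustration of the kind of multimodal input such models accept, the sketch below sends a text question together with an image to a vision-capable chat model. The model name, the helper function, and the image URL are assumptions for illustration, not part of the paper.

```python
# Minimal sketch of multimodal (text + image) input to a vision-capable chat
# model. The model name and image URL are placeholders, not the paper's setup.
from openai import OpenAI

client = OpenAI()

def explain_diagram(question: str, image_url: str, model: str = "gpt-4o") -> str:
    """Ask a vision-capable model to answer a question about an image."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```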

Tabular data is among the oldest and most ubiquitous forms of data. However, the generation of synthetic samples with the original data's characteristics remains a significant challenge for tabular data. While many generative models from the computer vision domain, such as variational autoencoders or generative adversarial networks, have been adapted for tabular data generation, less research has been directed towards recent transformer-based large language models (LLMs), which are also generative in nature. To this end, we propose GReaT (Generation of Realistic...

10.48550/arxiv.2210.06280 preprint EN cc-by-nc-nd arXiv (Cornell University) 2022-01-01
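
The core idea behind GReaT-style generation, serializing each table row into a short sentence so a causal language model can be fine-tuned on it and later sampled, can be sketched as follows. The encoding format, column names, and helper functions are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of the textual row encoding used by LLM-based tabular generation:
# each row becomes a "column is value" sentence, suitable as fine-tuning text
# for a causal LLM; sampled completions are parsed back into rows.
import pandas as pd

def row_to_text(row: pd.Series) -> str:
    """Encode one tabular row as a 'column is value' sentence."""
    return ", ".join(f"{col} is {val}" for col, val in row.items())

def text_to_row(text: str) -> dict:
    """Invert the encoding to recover a row from generated text."""
    pairs = (part.split(" is ", 1) for part in text.split(", "))
    return {col: val for col, val in pairs}

# Toy table with hypothetical columns, only to demonstrate the round trip.
df = pd.DataFrame(
    {"age": [34, 52], "occupation": ["teacher", "engineer"], "income": [48000, 71000]}
)
sentences = df.apply(row_to_text, axis=1).tolist()
print(sentences[0])            # "age is 34, occupation is teacher, income is 48000"
print(text_to_row(sentences[0]))  # parsed back into a column -> value mapping
```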

Effective feedback is essential for fostering students' success in scientific inquiry. With advancements in artificial intelligence, large language models (LLMs) offer new possibilities for delivering instant and adaptive feedback. However, this feedback often lacks the pedagogical validation provided by real-world practitioners. To address this limitation, our study evaluates and compares the feedback quality of LLM agents with that of human teachers and science education experts on student-written experimentation protocols. Four...

10.48550/arxiv.2502.12842 preprint EN arXiv (Cornell University) 2025-02-18

Heterogeneous tabular data are the most commonly used form of data and are essential for numerous critical and computationally demanding applications. On homogeneous data sets, deep neural networks have repeatedly shown excellent performance and have therefore been widely adopted. However, their adaptation to tabular data for inference or generation tasks remains challenging. To facilitate further progress in the field, this work provides an overview of state-of-the-art deep learning methods for tabular data. We categorize these methods into three groups:...

10.48550/arxiv.2110.01889 preprint EN cc-by-nc-nd arXiv (Cornell University) 2021-01-01

The manual assessment and grading of student writing is a time-consuming yet critical task for teachers. Recent developments in generative AI, such as large language models, offer potential solutions to facilitate essay-scoring tasks for teachers. In our study, we evaluate the performance and reliability of both open-source and closed-source LLMs in assessing German essays, comparing their evaluations to those of 37 teachers across 10 pre-defined criteria (e.g., plot logic, expression). A corpus of 20 real-world essays from...

10.48550/arxiv.2411.16337 preprint EN arXiv (Cornell University) 2024-11-25
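
A minimal sketch of rubric-based essay scoring with an LLM is shown below; the criteria subset, the 1-5 scale, the model name, and the prompt are assumptions for illustration, not the study's actual rubric or pipeline.

```python
# Sketch of rubric-based essay scoring: one essay is rated per criterion on a
# 1-5 scale. Criteria, scale, and model name are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()

CRITERIA = ["plot logic", "expression", "spelling", "structure"]  # illustrative subset

def score_essay(essay: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the model for a 1-5 score per criterion, returned as a dict."""
    prompt = (
        "Rate the following German student essay on a scale of 1 (poor) to 5 "
        f"(excellent) for each of these criteria: {', '.join(CRITERIA)}. "
        'Reply only with a JSON object, e.g. {"plot logic": 3}.'
        "\n\nEssay:\n" + essay
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Sketch assumption: the model returns plain JSON that parses directly.
    return json.loads(response.choices[0].message.content)
```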

The use of Large Language Models (LLMs) in mathematical reasoning has become a cornerstone of related research, demonstrating the intelligence of these models and enabling potential practical applications through their advanced performance, such as in educational settings. Despite the variety of datasets and in-context learning algorithms designed to improve the ability of LLMs to automate mathematical problem solving, the lack of comprehensive benchmarking across different datasets makes it complicated to select an appropriate model for specific...

10.48550/arxiv.2408.10839 preprint EN arXiv (Cornell University) 2024-08-20
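
A cross-dataset benchmarking loop of the kind described above can be sketched as follows; the inline toy datasets, the ask_model callable, and exact-match scoring are placeholders rather than the benchmark's real data, models, or metrics.

```python
# Sketch of cross-dataset benchmarking for math problem solving: each model
# is queried on every dataset and scored by exact match on the final answer.
from typing import Callable

# Tiny inline placeholder datasets of (question, gold answer) pairs.
DATASETS = {
    "toy_arithmetic": [("What is 12 + 7?", "19"), ("What is 9 * 3?", "27")],
    "toy_word_problems": [("Anna has 4 apples and buys 5 more. How many now?", "9")],
}

def accuracy(ask_model: Callable[[str], str], dataset: list[tuple[str, str]]) -> float:
    """Fraction of problems where the model's answer matches the gold answer."""
    correct = sum(ask_model(q).strip() == a for q, a in dataset)
    return correct / len(dataset)

def benchmark(models: dict[str, Callable[[str], str]]) -> dict[str, dict[str, float]]:
    """Accuracy of every model on every dataset."""
    return {
        name: {ds_name: accuracy(fn, ds) for ds_name, ds in DATASETS.items()}
        for name, fn in models.items()
    }

if __name__ == "__main__":
    # A trivial stand-in "model" that only knows one answer.
    dummy = lambda q: "19" if "12 + 7" in q else "0"
    print(benchmark({"dummy": dummy}))
```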

Identifying logical errors in complex, incomplete or even contradictory and overall heterogeneous data like students' experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student experimentation protocols, an Artificial Intelligence (AI) system based on GPT-3.5...

10.48550/arxiv.2308.06088 preprint EN cc-by-nc-nd arXiv (Cornell University) 2023-01-01