Felipe Urrutia

ORCID: 0000-0003-0809-5334
Research Areas
  • Online Learning and Analytics
  • Intelligent Tutoring Systems and Adaptive Learning
  • Topic Modeling
  • Educational Technology and Assessment
  • Explainable Artificial Intelligence (XAI)
  • Text Readability and Simplification
  • Student Assessment and Feedback
  • Radiative Heat Transfer Studies
  • Educational Assessment and Pedagogy
  • Software Engineering Research
  • Educational Strategies and Epistemologies
  • Data Quality and Management
  • Innovative Teaching and Learning Methods

Centro de Recursos Educativos Avanzados
2022-2023

University of Chile
2022-2023

Education Development Center
2023

Universidad Bernardo O'Higgins
2023

This paper introduces a novel algorithm for constructing decision trees using large language models (LLMs) in a zero-shot manner, based on Classification and Regression Trees (CART) principles. Traditional tree induction methods rely heavily on labeled data to recursively partition the data using criteria such as information gain or the Gini index. In contrast, we propose a method that uses the pre-trained knowledge embedded in LLMs to build decision trees without requiring training data. Our approach leverages LLMs to perform operations...

10.48550/arxiv.2501.16247 preprint EN arXiv (Cornell University) 2025-01-27
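The zero-shot idea from the abstract above can be sketched as a recursive builder in which the LLM, rather than labeled data, decides how to split. This is a minimal illustration, not the paper's implementation: the `choose_split` callable (a hypothetical name) stands in for an LLM query that returns either `("split", feature, values)` or `("leaf", label)`, and the toy oracle below replaces a real model.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """A decision-tree node: a leaf label, or a feature split with children."""
    label: Optional[str] = None      # set on leaves
    feature: Optional[str] = None    # set on internal nodes
    children: Optional[dict] = None  # feature value -> child Node

def build_tree(task: str, features: list, choose_split: Callable, depth: int = 2) -> Node:
    """Recursively build a tree with no training data.

    `choose_split(task, features)` stands in for an LLM call: given the task
    description and the remaining features, it proposes a split or a leaf.
    By convention, calling it with an empty feature list forces a leaf label.
    """
    if depth == 0 or not features:
        return Node(label=choose_split(task, [])[1])
    decision = choose_split(task, features)
    if decision[0] == "leaf":
        return Node(label=decision[1])
    _, feature, values = decision
    remaining = [f for f in features if f != feature]
    # Condition the sub-task description on the chosen branch, CART-style.
    return Node(feature=feature, children={
        v: build_tree(f"{task} (given {feature}={v})", remaining, choose_split, depth - 1)
        for v in values
    })

def toy_oracle(task, features):
    """Stand-in for the LLM: splits on 'outlook' once, then labels leaves."""
    if "outlook" in features:
        return ("split", "outlook", ["sunny", "rainy"])
    return ("leaf", "play" if "sunny" in task else "stay")

tree = build_tree("play tennis?", ["outlook", "wind"], toy_oracle)
```

Swapping `toy_oracle` for a real LLM call is where the paper's contribution lives; the recursion itself mirrors classic CART induction.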

Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers immediately review the answers and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection of incoherent answers. One option is to use Large Language Models (LLMs). They have a powerful discursive ability that can be used to explain their decisions. In this paper, we analyze responses of fourth graders in...

10.1177/07356331231191174 article EN Journal of Educational Computing Research 2023-11-10
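The detection pipeline the abstract describes can be sketched as a prompt-and-parse loop. The prompt wording and the `llm` callable below are illustrative assumptions, not the paper's actual prompt or API; any function mapping a prompt string to the model's text reply can be plugged in.

```python
def build_prompt(question: str, answer: str) -> str:
    """Zero-shot prompt asking an LLM to judge coherence and explain why.

    Illustrative wording only; the paper's prompts are not reproduced here.
    """
    return (
        "A student answered an open-ended mathematics question.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Is the answer coherent with the question? Reply 'coherent' or "
        "'incoherent' on the first line, then a one-sentence explanation."
    )

def detect_incoherent(question: str, answer: str, llm):
    """Return (flagged, explanation); `llm` maps a prompt to a text reply."""
    reply = llm(build_prompt(question, answer))
    verdict, _, explanation = reply.partition("\n")
    return verdict.strip().lower().startswith("incoherent"), explanation.strip()

# Stub model for demonstration; a real deployment would call GPT-3, BLOOM, etc.
stub_llm = lambda prompt: "incoherent\nThe answer does not address the question."
flagged, why = detect_incoherent("What is 3 + 4?", "I like dogs.", stub_llm)
```

Keeping the explanation alongside the verdict matches the abstract's point that LLMs can be used to explain their decisions to teachers.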

Arguing and communicating are among the basic skills in the mathematics curriculum. Making arguments in written form facilitates rigorous reasoning. It allows peers to review the argumentation, and the author to receive feedback from them. Even though written arguments generate additional cognitive effort during the calculation process, they enhance long-term retention and facilitate deeper understanding. However, developing arguing competences in elementary school classrooms is a great challenge. It requires at least two conditions: that all students write...

10.21203/rs.3.rs-2566472/v1 preprint EN cc-by Research Square (Research Square) 2023-02-21

Arguing and communicating are basic skills in the mathematics curriculum. Making arguments in written form facilitates rigorous reasoning. It allows peers to review arguments and to receive feedback about them. Even though it requires additional cognitive effort during the calculation process, it enhances long-term retention and deeper understanding. However, developing these competencies in elementary school classrooms is a great challenge. It requires at least two conditions: that all students write, and that they receive immediate feedback. One solution is to use...

10.3390/systems11070353 article EN cc-by Systems 2023-07-10

We propose a general method to break down a main complex task into a set of intermediary, easier sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method allows for representing each example by a vector consisting of the answers to these questions. We call this representation Natural Language Learned Features (NLLF). NLLF is generated by a small transformer model (e.g., BERT) that has been trained in a Natural Language Inference (NLI) fashion, using weak labels automatically obtained from...

10.18653/v1/2023.emnlp-main.229 article EN cc-by Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing 2023-01-01
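The NLLF representation described above reduces to a simple mapping once the NLI model is in place: each example becomes a vector of binary entailment answers. A minimal sketch, assuming a pluggable `nli(premise, hypothesis)` predicate; the keyword-matching stub below merely stands in for a small NLI-fine-tuned transformer such as BERT, and the sub-questions are invented for illustration.

```python
def nllf_vector(example: str, sub_questions: list, nli) -> list:
    """Represent `example` as binary answers to natural-language sub-questions.

    `nli(premise, hypothesis)` returns True if the premise entails the
    hypothesis; each answer becomes one interpretable feature.
    """
    return [int(nli(example, q)) for q in sub_questions]

# Hypothetical sub-questions for a student-answer task (not from the paper).
SUB_QUESTIONS = [
    "The answer mentions a number.",
    "The answer gives a reason.",
]

def stub_nli(premise: str, hypothesis: str) -> bool:
    """Toy entailment check: looks for a keyword tied to each sub-question."""
    keyword = {
        "The answer mentions a number.": "7",
        "The answer gives a reason.": "because",
    }[hypothesis]
    return keyword in premise

vec = nllf_vector("It is 7 because 3 + 4 = 7", SUB_QUESTIONS, stub_nli)
```

Because each coordinate is the answer to a named question, the resulting vectors can feed a lightweight downstream classifier while staying interpretable.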

Predicting long-term student achievement is a critical task for teachers and for educational data mining. However, most of the models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for online formative assessment. Therefore, there is a huge number of possible questions, each of which is answered by only a few students. Second, formative assessment often involves open-ended questions that students answer in writing. These types of answers are highly valuable, but analyzing the responses automatically...

10.3390/jintelligence10040082 article EN cc-by Journal of Intelligence 2022-10-10

Predicting long-term student learning is a critical task for teachers and for educational data mining. However, most of the models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for formative assessment. Therefore, there is a huge number of possible questions, each of which is answered by only a few students. Second, formative assessment often involves open-ended questions that students answer in writing. These types of answers are highly valuable, and analyzing the responses automatically can be...

10.20944/preprints202204.0002.v1 preprint EN 2022-04-01

Predicting long-term student achievement is a critical task for teachers and for educational data mining. However, most of the models do not consider two typical situations in real-life classrooms. The first is that teachers develop their own questions for online formative assessment. Therefore, there is a huge number of possible questions, each of which is answered by only a few students. Second, formative assessment often involves open-ended questions that students answer in writing. These types of answers are highly valuable, but analyzing the responses automatically...

10.20944/preprints202207.0170.v1 preprint EN 2022-07-12

Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers immediately review the answers and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection of incoherent answers. One option is to use Large Language Models (LLMs). In this paper, we analyze responses of fourth graders in mathematics using three LLMs: GPT-3, BLOOM, and YOU. We used...

10.48550/arxiv.2304.11257 preprint EN cc-by arXiv (Cornell University) 2023-01-01

We propose a general method to break down a main complex task into a set of intermediary, easier sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method allows for representing each example by a vector consisting of the answers to these questions. We call this representation Natural Language Learned Features (NLLF). NLLF is generated by a small transformer model (e.g., BERT) that has been trained in a Natural Language Inference (NLI) fashion, using weak labels automatically obtained from...

10.48550/arxiv.2311.05754 preprint EN cc-by arXiv (Cornell University) 2023-01-01