Cheul Young Park

ORCID: 0000-0003-0414-272X
Research Areas
  • Emotion and Mood Recognition
  • Personal Information Management and User Behavior
  • Mental Health Research Topics
  • Digital Mental Health Interventions
  • EEG and Brain-Computer Interfaces
  • Data Quality and Management
  • Advanced Text Analysis Techniques
  • Context-Aware Activity Recognition Systems
  • Topic Modeling
  • Speech Recognition and Synthesis
  • Sentiment Analysis and Opinion Mining
  • Attachment and Relationship Dynamics
  • Mobile Crowdsensing and Crowdsourcing
  • Ethics and Social Impacts of AI
  • Explainable Artificial Intelligence (XAI)
  • Social Robot Interaction and HRI
  • Neural dynamics and brain function
  • Scientific Computing and Data Management
  • Big Data and Business Intelligence

Korea Advanced Institute of Science and Technology
2020-2023

Korea Aerospace Research Institute
2023

Kootenay Association for Science & Technology
2022-2023

Carnegie Mellon University
2018

Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild, as they were collected in constrained environments. Therefore, studying emotions in the context of social interactions requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains...

10.1038/s41597-020-00630-y article EN cc-by Scientific Data 2020-09-08

An increasing number of researchers and designers are envisioning a wide range of novel proactive conversational services for smart speakers, such as context-aware reminders for restocking household items. When initiating interactions proactively, smart speakers need to consider users' contexts to minimize disruption. In this work, we aim to broaden our understanding of opportune moments for proactive conversational interactions in domestic contexts. Toward this goal, we built a voice-based experience sampling device and conducted a one-week field study with 40 participants living...

10.1145/3411810 article EN Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies 2020-09-04

The automated recognition of human emotions plays an important role in developing machines with emotional intelligence. Major research efforts are dedicated to the development of emotion recognition methods. However, most affective computing models are based on images, audio, videos, and brain signals. The literature lacks works that focus on utilizing only peripheral physiological signals for emotion recognition (ER), which can be ideally implemented in daily life settings. Therefore, this paper presents a framework for ER in the arousal-valence space, using...

10.1109/jbhi.2022.3225330 article EN cc-by IEEE Journal of Biomedical and Health Informatics 2022-11-29
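
To illustrate the kind of pipeline the paper above describes, here is a minimal sketch of binary classification in the arousal-valence space from peripheral physiological features; the feature names, labels, and classifier choice are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch: binary arousal classification from peripheral physiological
# features. Feature names and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: one row per time window; columns stand in for hypothetical
# statistics such as heart rate, electrodermal activity, and skin temperature.
X = rng.normal(size=(200, 6))             # e.g., [hr_mean, hr_std, eda_mean, eda_peaks, temp_mean, temp_slope]
y_arousal = rng.integers(0, 2, size=200)  # 1 = high arousal, 0 = low arousal (self-reported)

clf = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y_arousal, cv=5, scoring="f1")
print(f"5-fold F1 (arousal): {scores.mean():.2f} +/- {scores.std():.2f}")
```

The same setup would be repeated with valence labels; in practice, features would be windowed statistics computed from raw wearable signals rather than random placeholders.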

With the popularization of low-cost mobile and wearable sensors, several studies have used them to track and analyze mental well-being, productivity, and behavioral patterns. However, there is still a lack of open datasets collected in real-world contexts with affective and cognitive state labels such as emotion, stress, and attention; this limits research advances in affective computing and human-computer interaction. This study presents K-EmoPhone, a multimodal dataset collected from 77 students over seven days. The dataset contains (1)...

10.1038/s41597-023-02248-2 article EN cc-by Scientific Data 2023-06-02

Mobile experience sampling methods (ESMs) are widely used to measure users' affective states by randomly sending self-report requests. However, this random probing can interrupt users and adversely influence their emotional states by inducing disturbance and stress. This work aims to understand how ESMs themselves may compromise the validity of ESM responses and what contextual factors contribute to changes in emotions when users respond to ESMs. Towards this goal, we analyze 2,227 samples of mobile ESM data collected from 78 participants....

10.1145/3491102.3501944 article EN CHI Conference on Human Factors in Computing Systems 2022-04-28
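
As a rough illustration of how repeated ESM responses nested within participants can be analyzed, the sketch below fits a linear mixed-effects model with a per-participant random intercept; the variable names (interrupted, at_home, valence_change) and the synthetic data are hypothetical stand-ins, not the study's actual factors or results.

```python
# Sketch: modeling repeated ESM responses with a participant-level random
# intercept. Variables and data are hypothetical stand-ins, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_per = 78, 28
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_per),
    "interrupted": rng.integers(0, 2, n_participants * n_per),  # was the prompt disruptive?
    "at_home": rng.integers(0, 2, n_participants * n_per),      # coarse location context
})
# Synthetic outcome: change in self-reported valence after responding to a prompt.
df["valence_change"] = (-0.3 * df["interrupted"] + 0.1 * df["at_home"]
                        + rng.normal(0, 1, len(df)))

model = smf.mixedlm("valence_change ~ interrupted + at_home", df, groups=df["participant"])
print(model.fit().summary())
```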

The automated recognition of human emotions plays an important role in developing machines with emotional intelligence. However, most of the affective computing models are based on images, audio, videos, and brain signals. There is a lack of prior studies that focus on utilizing only peripheral physiological signals for emotion recognition, which can ideally be implemented in daily life settings using wearables, e.g., smartwatches. Here, we present a classification method using peripheral physiological signals, obtained by wearable devices, to enable...

10.1109/embc46164.2021.9630252 article EN 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) 2021-11-01

Creating datasets for ML is an inherently human endeavor, as the data's heterogeneity mandates human intervention. However, most data workflows being one-time and hardly transferable leads to a lack of standardization and reusability. There has been a push to impose more structure on the data work process, but little is known about the implicit or "tacit" knowledge of data workers, i.e., the "know-how"s that are difficult to transfer to others. Identifying and formalizing this knowledge can help improve data work, leading it from its current "exploration" toward a more systematic...

10.1145/3544549.3585616 article EN 2023-04-19

We thought data to be simply given, but reality tells otherwise; it is costly, situation-dependent, and muddled with dilemmas, constantly requiring human intervention. The ML community's focus on data quality is increasing in the same vein, as good data is vital for successful systems. Nonetheless, few works have investigated dataset builders and the specifics of what they do and struggle with to make data. In this study, through semi-structured interviews with 19 experts, we present what humans actually consider at each step of data construction...

10.48550/arxiv.2211.14981 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Deep speaker embeddings have been shown to be effective for assessing cognitive impairments aside from their original purpose of speaker verification. However, research has found that they encode speaker identity and an array of other information, including demographics such as sex and age, and speech contents to an extent, which are known confounders in the assessment of cognitive impairments. In this paper, we hypothesize that speaker embeddings with content information separated out using a voice conversion framework are more effective for training simple classifiers, and we conduct a comparative analysis on...

10.48550/arxiv.2203.10827 preprint EN other-oa arXiv (Cornell University) 2022-01-01
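
A minimal sketch of the comparison this preprint sets up: training the same simple classifier on speaker embeddings before and after content separation and comparing cross-validated performance. The embedding arrays, dimensionality, and labels below are placeholders; embedding extraction and voice conversion are assumed to happen upstream.

```python
# Sketch: comparing a simple classifier on two sets of precomputed speaker
# embeddings (e.g., extracted before vs. after voice-conversion-based content
# removal). All arrays and labels are placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_speakers, dim = 120, 192                    # e.g., 192-dim speaker embeddings
emb_original = rng.normal(size=(n_speakers, dim))
emb_converted = emb_original + rng.normal(scale=0.5, size=(n_speakers, dim))
labels = rng.integers(0, 2, size=n_speakers)  # 1 = cognitive impairment, 0 = control

def evaluate(embeddings, name):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    auc = cross_val_score(clf, embeddings, labels, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC-AUC = {auc.mean():.2f}")

evaluate(emb_original, "original embeddings")
evaluate(emb_converted, "content-separated embeddings")
```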