Ruoxi Sun

ORCID: 0009-0002-5180-2954
Research Areas
  • Anomaly Detection Techniques and Applications
  • Adversarial Robustness in Machine Learning
  • Human Pose and Action Recognition
  • Generative Adversarial Networks and Image Synthesis
  • Analog and Mixed-Signal Circuit Design
  • Image and Signal Denoising Methods
  • Software System Performance and Reliability
  • Speech and Audio Processing
  • Hand Gesture Recognition Systems
  • Video Surveillance and Tracking Methods
  • Neural Networks and Applications
  • Explainable Artificial Intelligence (XAI)
  • Software Testing and Debugging Techniques
  • Software Engineering Research

Soochow University
2024

Data61
2023

Commonwealth Scientific and Industrial Research Organisation
2023

Explainable artificial intelligence (XAI) is a new field within artificial intelligence (AI) and machine learning (ML). XAI offers a level of transparency in AI and ML models that can bridge the information gap inherent in "black-box" models. Given its nascency, there are several taxonomies of XAI in the literature. The current paper incorporates this literature into one unifying framework, which defines the types of explanations, transparency, and model methods that together inform a user's processes towards developing trust in AI systems.

10.24251/hicss.2023.134 article EN Proceedings of the ... Annual Hawaii International Conference on System Sciences 2024-01-01

Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the attack process. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color...

10.1109/sp46215.2023.10179383 article EN 2023 IEEE Symposium on Security and Privacy (SP) 2023-05-01
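The query cost that the abstract above highlights can be illustrated with a minimal decision-based black-box loop: each candidate perturbation must be scored by calling the victim model, so the query count grows with every rejected candidate. This is a generic random-search sketch, not the StyleFool algorithm; the `classify` callback and all parameters are illustrative assumptions.

```python
import numpy as np

def black_box_attack(classify, x, target, max_queries=500, step=0.5, rng=None):
    """Decision-based random-search attack sketch.

    `classify` is the victim model (returns a predicted label); one
    model query is spent per candidate, which is why black-box attacks
    accumulate high query overhead. Illustrative only, not StyleFool.
    """
    rng = rng or np.random.default_rng(0)
    adv = x.copy()
    for queries in range(1, max_queries + 1):
        candidate = adv + step * rng.normal(size=x.shape)
        if classify(candidate) == target:  # one query per candidate
            return candidate, queries
    return None, max_queries
```

A style-transfer attack such as StyleFool replaces the random perturbation with a semantically unrestricted style change, which is what lets it evade restricted-perturbation defenses like denoising.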

The right to be forgotten mandates that machine learning models enable the erasure of a data owner's data and information from a trained model. Removing data from the dataset alone is inadequate, as machine learning models can memorize training data, increasing the potential privacy risk to users. To address this, multiple machine unlearning techniques have been developed and deployed. Among them, approximate unlearning is a popular solution, but recent studies report that its unlearning effectiveness is not fully guaranteed. Another approach, exact unlearning, tackles this issue by...

10.48550/arxiv.2410.10128 preprint EN arXiv (Cornell University) 2024-10-13
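One common realization of exact unlearning mentioned above is data sharding: train one sub-model per disjoint shard, and on an erasure request retrain only the shard containing the forgotten point, so the resulting ensemble is identical to one trained without that point. The sketch below assumes a toy mean-label sub-model as a stand-in for any learner; it follows the spirit of shard-based schemes (e.g., SISA), not the specific method of the paper.

```python
import numpy as np

class ShardedUnlearner:
    """Exact unlearning via sharding: forgetting a point retrains
    only its shard, not the whole model. Toy sub-model: mean label."""

    def __init__(self, n_shards=4):
        self.n_shards = n_shards

    def fit(self, X, y):
        # Disjoint shards; each gets its own independently trained sub-model.
        self.shards = [(X[i::self.n_shards].copy(), y[i::self.n_shards].copy())
                       for i in range(self.n_shards)]
        self.models = [self._train(ys) for _, ys in self.shards]

    def _train(self, ys):
        return float(ys.mean()) if len(ys) else 0.0

    def forget(self, shard_idx, row_idx):
        # Delete the point, then retrain only the affected shard.
        Xs, ys = self.shards[shard_idx]
        Xs = np.delete(Xs, row_idx, axis=0)
        ys = np.delete(ys, row_idx)
        self.shards[shard_idx] = (Xs, ys)
        self.models[shard_idx] = self._train(ys)

    def predict(self):
        # Ensemble prediction: average of shard sub-models.
        return float(np.mean(self.models))
```

Because the forgotten point influenced exactly one sub-model, retraining that shard from scratch yields the same ensemble as never having seen the point, which is the "exact" guarantee approximate methods lack.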

Yawning detection is actively used in multimedia applications such as driver fatigue assessment and status monitoring. However, the accuracy and robustness of existing yawning detectors are limited due to variations in environments (especially lighting), facial expressions, and confusing behaviours (e.g., talking and eating). This paper introduces a transformer-based method, YawnNet, for accurate yawning detection by leveraging spatial-temporal encoding of local cues. In particular, YawnNet contains a data processing stage with...

10.1145/3652583.3657618 article EN 2024-05-30

Face authentication systems have brought significant convenience and advanced developments, yet they have become unreliable due to their sensitivity to inconspicuous perturbations, such as adversarial attacks. Existing defenses often exhibit weaknesses when facing various attack algorithms and adaptive attacks, or compromise accuracy for enhanced security. To address these challenges, we developed a novel and highly efficient non-deep-learning-based image filter called the Iterative Window Mean Filter (IWMF)...

10.48550/arxiv.2408.10673 preprint EN arXiv (Cornell University) 2024-08-20
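The core operation behind a filter like the IWMF described above can be sketched as a windowed mean applied repeatedly: each pass replaces every pixel with the mean of its neighbourhood, attenuating small adversarial perturbations. This is a minimal illustration of iterative window-mean filtering; the window size, iteration count, and window-selection details of the actual IWMF are not reproduced here.

```python
import numpy as np

def iterative_window_mean_filter(img, window=3, iterations=2):
    """Repeatedly replace each pixel with the mean of its local window.

    A sketch of iterative window-mean smoothing on a 2-D grayscale
    array; parameters are illustrative, not the paper's.
    """
    pad = window // 2
    out = img.astype(float)
    h, w = out.shape
    for _ in range(iterations):
        padded = np.pad(out, pad, mode="edge")  # replicate borders
        smoothed = np.empty_like(out)
        for i in range(h):
            for j in range(w):
                smoothed[i, j] = padded[i:i + window, j:j + window].mean()
        out = smoothed
    return out
```

Being non-learned, such a filter has no gradients for an adaptive attacker to exploit directly, which is one motivation for non-deep-learning defenses.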

Coverage-guided Greybox Fuzzing (CGF) is one of the most successful and widely-used techniques for bug hunting. Two major approaches are adopted to optimize CGF: (i) reducing the search space of inputs by inferring relationships between input bytes and path constraints; (ii) formulating fuzzing processes (e.g., path transitions) and building up probability distributions to optimize power schedules, i.e., the number of inputs generated per seed. However, the former relies on subjective inference results, which may include extra bytes for a constraint, thereby limiting...

10.48550/arxiv.2201.04441 preprint EN other-oa arXiv (Cornell University) 2022-01-01
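The power-schedule idea in the abstract above can be made concrete with a small sketch: build a distribution over seeds and allocate more energy (inputs generated per seed) to seeds whose paths are hit rarely. The inverse-frequency weighting below is a common rarity heuristic used for illustration; the paper's actual probability formulation over path transitions differs.

```python
def power_schedule(seed_hits, total_budget=1000):
    """Allocate fuzzing energy across seeds from a rarity distribution.

    seed_hits: {seed_id: times the seed's path was exercised}.
    Rarely-hit paths get proportionally more of the mutation budget.
    A minimal sketch; not the paper's exact schedule.
    """
    weights = {s: 1.0 / hits for s, hits in seed_hits.items()}
    total = sum(weights.values())
    # Energy proportional to normalized weight, at least 1 per seed.
    return {s: max(1, round(total_budget * w / total))
            for s, w in weights.items()}
```

Example: with one seed hit once and another hit ten times, the rare seed receives roughly ten times the energy, steering mutation effort toward under-explored paths.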

Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the attack process. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color...

10.48550/arxiv.2203.16000 preprint EN other-oa arXiv (Cornell University) 2022-01-01