Runkai Zheng

ORCID: 0000-0003-3120-5466
Research Areas
  • Adversarial Robustness in Machine Learning
  • Digital Media Forensic Detection
  • Anomaly Detection Techniques and Applications
  • CCD and CMOS Imaging Sensors
  • Image Retrieval and Classification Techniques
  • Advanced Neural Network Applications
  • Image Processing Techniques and Applications
  • Advanced Bandit Algorithms Research
  • Artificial Intelligence in Healthcare and Education
  • Machine Learning and Data Classification
  • Domain Adaptation and Few-Shot Learning
  • Machine Learning and Algorithms
  • Infrared Target Detection Methodologies
  • Photoacoustic and Ultrasonic Imaging
  • Sparse and Compressive Sensing Techniques
  • COVID-19 diagnosis using AI
  • Face and Expression Recognition
  • Image and Signal Denoising Methods
  • Explainable Artificial Intelligence (XAI)

Carnegie Mellon University
2025

Chinese University of Hong Kong, Shenzhen
2022

Jinan University
2020

ChatGPT is a recent chatbot service released by OpenAI that has received increasing attention over the past few months. While various aspects of it have been evaluated, its robustness, i.e., its performance on unexpected inputs, remains unclear to the public. Robustness is of particular concern in responsible AI, especially for safety-critical applications. In this paper, we conduct a thorough evaluation of its robustness from the adversarial and out-of-distribution (OOD) perspective. To do so, we employ the AdvGLUE and ANLI benchmarks...

10.48550/arxiv.2302.12095 preprint EN cc-by arXiv (Cornell University) 2023-01-01
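The robustness-as-accuracy-drop idea in the abstract above can be sketched in a few lines. Everything here is illustrative: `toy_model` is a hypothetical stand-in for a chatbot classifier, and the two tiny datasets mimic the character-level perturbations found in AdvGLUE-style benchmarks.

```python
def toy_model(text: str) -> str:
    # Hypothetical sentiment rule: only the literal word "good" is positive.
    return "positive" if "good" in text.lower() else "negative"

def accuracy(model, examples):
    # Fraction of (input, label) pairs the model gets right.
    return sum(model(x) == y for x, y in examples) / len(examples)

clean = [("The movie was good", "positive"),
         ("The movie was bad", "negative")]
# Adversarial variants: a character-level typo attack on the first example.
adversarial = [("The movie was g0od", "positive"),
               ("The movie was bad", "negative")]

clean_acc = accuracy(toy_model, clean)        # 1.0
adv_acc = accuracy(toy_model, adversarial)    # 0.5
robustness_gap = clean_acc - adv_acc          # larger gap = less robust
```

The gap between clean and adversarial accuracy is the robustness measure: a large gap means the model is brittle to inputs a human would read identically.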

10.1109/icassp49660.2025.10889122 article EN ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025-03-12

A major challenge in Fine-Grained Visual Classification (FGVC) is distinguishing categories with high inter-class similarity by learning features that differentiate the details. A conventional cross-entropy-trained Convolutional Neural Network (CNN) fails at this, as it may suffer from producing inter-class invariant features in FGVC. In this work, we innovatively propose to regularize the training of a CNN by enforcing the uniqueness of each category from an information-theoretic perspective. To achieve this goal, we formulate a minimax...

10.48550/arxiv.2011.10951 preprint EN cc-by arXiv (Cornell University) 2020-01-01
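The information-theoretic regularization described above can be illustrated with a simplified, NumPy-only loss: cross-entropy plus a weighted entropy term on the predictive distribution. This is a sketch of the general idea, not the paper's exact minimax formulation; the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def regularized_loss(logits, labels, lam=0.1):
    # Cross-entropy plus an entropy penalty on the predictions: pushing
    # prediction entropy down encourages confident, category-unique features.
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce + lam * entropy
```

Training with the correct class confidently predicted yields a lower loss than confidently predicting the wrong class, while the entropy term discourages hedged, near-uniform predictions.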

Vision Transformers (ViTs) have gained prominence as a preferred choice for a wide range of computer vision tasks due to their exceptional performance. However, their widespread adoption has raised concerns about security in the face of malicious attacks. Most existing methods rely on empirical adjustments during the training process, lacking a clear theoretical foundation. In this study, we address this gap by introducing SpecFormer, specifically designed to enhance ViTs' resilience against adversarial attacks,...

10.48550/arxiv.2402.03317 preprint EN arXiv (Cornell University) 2024-01-02
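Since controlling the spectral norms of weight matrices is the natural primitive behind Lipschitz-motivated defenses like the one above, a power-iteration estimate of the largest singular value is worth sketching. This is a generic illustration, not the paper's algorithm; the bound `target` is an assumed hyperparameter.

```python
import numpy as np

def spectral_norm(W, n_iter=50):
    # Power-iteration estimate of the largest singular value of W.
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def spectral_penalty(weights, target=1.0):
    # Penalize layers whose spectral norm exceeds `target`: a rough proxy
    # for keeping each layer's Lipschitz constant under control.
    return sum(max(0.0, spectral_norm(W) - target) for W in weights)
```

Adding such a penalty to the training loss (or explicitly normalizing weights by their spectral norm) bounds how much a small input perturbation can be amplified layer by layer.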

The widespread success of deep learning models today is owed to the curation of extensive datasets significant in size and complexity. However, such models frequently pick up inherent biases in the data during the training process, leading to unreliable predictions. Diagnosing and debiasing datasets is thus a necessity to ensure reliable model performance. In this paper, we present CONBIAS, a novel framework for diagnosing and mitigating Concept co-occurrence Biases in visual datasets. CONBIAS represents visual datasets as knowledge graphs of concepts, enabling...

10.48550/arxiv.2409.18055 preprint EN arXiv (Cornell University) 2024-09-26
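A concept co-occurrence graph of the kind described above can be approximated from per-image label sets alone. The sketch below is a rough illustration under assumed inputs; the `threshold` heuristic for flagging biased pairs is mine, not the paper's diagnosis criterion.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(label_sets):
    # Edge weight = number of images in which both concepts appear together.
    edges = Counter()
    for labels in label_sets:
        for a, b in combinations(sorted(set(labels)), 2):
            edges[(a, b)] += 1
    return edges

def imbalanced_pairs(edges, concept_counts, threshold=0.8):
    # Flag pairs that co-occur in most images containing the rarer concept:
    # a crude proxy for the co-occurrence biases such a graph exposes.
    flagged = []
    for (a, b), w in edges.items():
        if w / min(concept_counts[a], concept_counts[b]) >= threshold:
            flagged.append((a, b))
    return flagged
```

If "boat" almost never appears without "water", a classifier can learn the shortcut "water implies boat"; flagging the pair suggests where counterexamples should be added to the dataset.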

Bayesian optimization (BO) is a well-established method for optimizing black-box functions whose direct evaluations are costly. In this paper, we tackle the problem of incorporating expert knowledge into BO, with the goal of further accelerating the optimization, which has received very little attention so far. We design a multi-task learning architecture for this task, jointly eliciting the expert knowledge and minimizing the objective function. In particular, this allows the expert knowledge to be transferred into the BO task. We introduce a specific architecture based on Siamese neural...

10.48550/arxiv.2208.08742 preprint EN other-oa arXiv (Cornell University) 2022-01-01
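One standard way to turn expert judgments into a training signal for a surrogate, in the spirit of the Siamese comparison architecture mentioned above, is a Bradley-Terry likelihood over pairwise preferences. The snippet below only illustrates that likelihood; the paper's multi-task architecture is not reproduced here.

```python
import math

def bradley_terry_nll(scores, comparisons):
    # Negative log-likelihood of expert pairwise preferences under a
    # Bradley-Terry model: a comparison (i, j) means the expert prefers
    # candidate i over candidate j. Lower score differences cost more.
    nll = 0.0
    for i, j in comparisons:
        nll -= math.log(1.0 / (1.0 + math.exp(scores[j] - scores[i])))
    return nll
```

Minimizing this loss over the surrogate's outputs makes its ranking of candidates agree with the expert's, so the elicited knowledge can steer which points BO evaluates next.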

Recent studies have shown that Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, which lead to malicious behaviors of DNNs when specific triggers are attached to the input images. It was further demonstrated that infected DNNs possess a collection of channels that are more sensitive to the triggers compared with normal channels. Pruning these channels can then be effective in mitigating the malicious behaviors. To locate those channels, it is natural to consider their Lipschitzness, which measures their sensitivity against worst-case perturbations on the inputs. In...

10.48550/arxiv.2208.03111 preprint EN other-oa arXiv (Cornell University) 2022-01-01
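The channel-Lipschitzness idea above can be sketched with a simple per-channel proxy: the spectral norm of each output channel's flattened convolution kernel bounds how strongly that channel reacts to input perturbations. This is a simplification for illustration, not the paper's exact channel Lipschitz constant; the pruning `ratio` is an assumed hyperparameter.

```python
import numpy as np

def channel_lipschitz(conv_weight):
    # conv_weight: (out_channels, in_channels, k, k).
    # Proxy per output channel: spectral norm of its flattened kernel slice.
    return np.array([
        np.linalg.norm(conv_weight[c].reshape(conv_weight.shape[1], -1), 2)
        for c in range(conv_weight.shape[0])
    ])

def channels_to_prune(conv_weight, ratio=0.1):
    # Select the most sensitive channels as pruning candidates: under the
    # channel-Lipschitzness view, these are the likeliest trigger carriers.
    lips = channel_lipschitz(conv_weight)
    k = max(1, int(ratio * len(lips)))
    return np.argsort(lips)[-k:]
```

Zeroing out the selected channels' weights removes the trigger-sensitive pathway while leaving the rest of the network, and hence most clean-input accuracy, intact.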