Mingyi Zhou

ORCID: 0000-0003-3514-0372
Research Areas
  • Adversarial Robustness in Machine Learning
  • Advanced Malware Detection Techniques
  • Privacy-Preserving Technologies in Data
  • Voice and Speech Disorders
  • Software System Performance and Reliability
  • Stochastic Gradient Optimization Techniques
  • Modular Robots and Swarm Intelligence
  • Infrastructure Maintenance and Monitoring
  • Plasma Applications and Diagnostics
  • Vehicle License Plate Recognition
  • Model-Driven Software Engineering Techniques
  • Advanced Neuroimaging Techniques and Applications
  • Security and Verification in Computing
  • Generative Adversarial Networks and Image Synthesis
  • Parkinson's Disease Mechanisms and Treatments
  • Digital and Cyber Forensics
  • COVID-19 diagnosis using AI
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Artificial Intelligence in Healthcare
  • Radiomics and Machine Learning in Medical Imaging
  • Acupuncture Treatment Research Studies
  • Domain Adaptation and Few-Shot Learning
  • Natural Antidiabetic Agents Studies
  • Advanced Software Engineering Methodologies
  • Cybercrime and Law Enforcement Studies

Monash University
2023-2024

Australian Regenerative Medicine Institute
2024

Beihang University
2024

Henan University of Science and Technology
2023

University of Electronic Science and Technology of China
2018-2022

Ministry of Transport
2022

Vi Technology (United States)
2020

Megvii (China)
2020

Wuhan University of Technology
2017

University of Missouri
1995-1996

Different automated decision support systems based on artificial neural networks (ANNs) have been widely proposed for the detection of heart disease in previous studies. However, most of these techniques focus on the preprocessing of features only. In this paper, we focus on both, i.e., the refinement of features and the elimination of the problems posed by the predictive model, namely underfitting and overfitting. By preventing the model from overfitting and underfitting, it can show good performance on both datasets, i.e., training data and testing data. Inappropriate...
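
As a rough illustration of the abstract's point about balancing underfitting and overfitting, the sketch below trains a small ANN with L2 regularization and early stopping; the dataset and hyperparameters are synthetic placeholders, not the paper's.

```python
# Illustrative only: a small ANN tuned against both under- and
# overfitting, in the spirit of the abstract above. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_tr)          # feature refinement step
clf = MLPClassifier(hidden_layer_sizes=(32, 16),
                    alpha=1e-3,              # L2 penalty against overfitting
                    early_stopping=True,     # stop when validation score stalls
                    max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("train acc:", clf.score(scaler.transform(X_tr), y_tr))
print("test  acc:", clf.score(scaler.transform(X_te), y_te))
```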

10.1109/access.2019.2904800 article EN cc-by-nc-nd IEEE Access 2019-01-01

Machine learning models are vulnerable to adversarial examples. For the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic...
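
A minimal sketch of the data-free substitute training loop the abstract describes, with the paper's multi-branch generator and label-control loss simplified away; the victim model, architectures, and hyperparameters below are stand-ins, not the paper's.

```python
# Sketch of data-free substitute training in the spirit of DaST:
# a generator synthesizes inputs, the black-box victim labels them,
# and a substitute model learns to imitate those labels.
import torch
import torch.nn as nn

victim = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))      # stand-in black box
generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
substitute = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(substitute.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

for step in range(1000):
    z = torch.randn(64, 100)
    x = generator(z)
    with torch.no_grad():                     # black-box access: labels only
        labels = victim(x).argmax(dim=1)
    # substitute imitates the victim on synthetic data
    loss_s = ce(substitute(x.detach()), labels)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    # generator seeks inputs where the substitute still disagrees
    opt_g.zero_grad()
    loss_g = -ce(substitute(generator(z)), labels)
    loss_g.backward(); opt_g.step()
```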

10.1109/cvpr42600.2020.00031 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020-06-01

Parkinson's disease (PD) is the second most common neurodegenerative disorder of the central nervous system (CNS). Till now, there is no definitive clinical examination that can diagnose a PD patient. However, it has been reported that PD patients face deterioration in handwriting. Hence, different computer vision and machine learning researchers have proposed micrography-based methods. But these methods possess two main problems. The first problem is the biasedness of models caused by imbalanced data, i.e., models show good...
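
A small illustration of one standard remedy for the class-imbalance bias the abstract raises: reweighting the minority class during training. The data is synthetic and the classifier is a generic stand-in, not the paper's method.

```python
# Illustrative fix for class-imbalance bias: weight the minority
# class inversely to its frequency so the model is not dominated
# by the majority class. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights samples inversely to class frequency
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```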

10.1109/access.2019.2932037 article EN cc-by IEEE Access 2019-01-01

Numerous mobile apps have leveraged deep learning capabilities. However, on-device models are vulnerable to attacks as they can be easily extracted from their corresponding apps. Although the structure and parameter information of these models can be accessed, existing attacking approaches can only generate black-box attacks (i.e., indirect white-box attacks), which are less effective and efficient than white-box strategies. This is because mobile deep learning (DL) frameworks like TensorFlow Lite (TFLite) do not support gradient computing (referred...
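
For context, this is how an extracted .tflite model is typically run with TensorFlow's Interpreter API. Note that the runtime exposes only a forward pass, no gradients, which is why the abstract calls attacks on such models indirect. The model path is a placeholder.

```python
# Sketch: running an extracted TFLite model. Inference works, but the
# TFLite runtime provides no gradient computation.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])    # dummy input
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))       # forward pass only
```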

10.1145/3597503.3639144 preprint EN cc-by 2024-04-12

Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server. Despite its success, recent studies expose the vulnerability of FL to model inversion attacks, where adversaries reconstruct users' private data via eavesdropping on the shared gradient information. We hypothesize that a key factor in the success of such attacks is the low entanglement among gradients per data point within the batch during stochastic optimization. This creates a vulnerability that an adversary can...
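
A minimal sketch of the kind of gradient-inversion attack the abstract refers to, in the style of deep-leakage-from-gradients: the adversary optimizes dummy inputs and labels until their gradients match the eavesdropped ones. Model, data, and hyperparameters here are toys, not the paper's setup.

```python
# Toy gradient-inversion sketch: recover a private sample by matching
# gradients of dummy data to the observed gradients.
import torch
import torch.nn as nn

model = nn.Linear(32, 10)
ce = nn.CrossEntropyLoss()

# gradients the adversary eavesdrops for one private sample
x_true = torch.randn(1, 32); y_true = torch.tensor([3])
true_grads = torch.autograd.grad(ce(model(x_true), y_true),
                                 model.parameters())

# adversary optimizes dummy data to reproduce those gradients
x_fake = torch.randn(1, 32, requires_grad=True)
y_fake = torch.randn(1, 10, requires_grad=True)   # soft label guess
opt = torch.optim.LBFGS([x_fake, y_fake])

def closure():
    opt.zero_grad()
    loss = ce(model(x_fake), y_fake.softmax(dim=1))
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    diff.backward()
    return diff

for _ in range(50):
    opt.step(closure)
print("reconstruction error:", (x_fake - x_true).norm().item())
```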

10.1609/aaai.v38i19.30171 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24

More and more edge devices and mobile apps are leveraging deep learning (DL) capabilities. Deploying such models on devices – referred to as on-device models – rather than as remote cloud-hosted services has gained popularity because it avoids transmitting the user's data off of the device and achieves a high response time. However, on-device models can be easily attacked, as they can be accessed by unpacking the corresponding apps, and the model is fully exposed to attackers. Recent studies show that attackers can generate white-box-like attacks for an on-device model or even inverse its...

10.1145/3597926.3598113 preprint EN 2023-07-12

A single universal perturbation can cause most natural images to be misclassified by classifiers. In the black-box setting, current universal adversarial attack methods utilize substitute models to generate the perturbation, then apply the perturbation to the attacked model. However, this transfer often produces inferior results. In this study, we directly work in the black-box setting to generate the perturbation. Besides, we aim to design an adversary generating a perturbation having a texture like stripes based on an orthogonal matrix, as the top convolutional layers are sensitive to stripes. To...
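
A toy construction of a stripe-textured perturbation derived from an orthogonal matrix, echoing the abstract's intuition; the paper's actual method is more involved, and the image size and budget below are assumptions.

```python
# Sketch: build a stripe-like perturbation from a row of an orthogonal
# matrix and bound it to an L-inf ball. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((224, 224)))  # orthogonal matrix
row = q[0]                                            # one orthonormal row
stripes = np.tile(row, (224, 1))                      # repeat -> vertical stripes
eps = 10 / 255                                        # assumed budget
perturbation = eps * stripes / np.abs(stripes).max()  # bound to L-inf ball
print(perturbation.shape, float(perturbation.max()))
```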

10.48550/arxiv.2009.07024 preprint EN other-oa arXiv (Cornell University) 2020-01-01

In order to overcome the difficulty of rapid detection for expressway tunnels, a coherence calculation of the dual-frequency radar signal in the time domain is proposed to suppress interference. A dual-frequency (400 MHz and 900 MHz) GPR and a manipulator are developed. The manipulator has a horizontal rotation range of −90°~90°, a vertical rotation range from 0 to 90°, a length of 5~9.5 m, and an antenna angle between −40°~40°. Mobile scanning has been realized in the tunnel, taking a vehicle as the carrier. The research shows that this equipment realizes tunnel detection without...
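
A sketch of a time-domain coherence measure between two radar traces, loosely following the abstract's idea of combining the 400 MHz and 900 MHz channels to suppress interference. The signals, sample rate, and window length are synthetic assumptions.

```python
# Sliding-window normalized cross-correlation between two GPR traces:
# a shared reflection scores high coherence, uncorrelated noise does not.
import numpy as np

rng = np.random.default_rng(0)
n, fs = 2048, 10e9                        # samples and sample rate (assumed)
t = np.arange(n) / fs
reflection = np.sin(2 * np.pi * 400e6 * t) * np.exp(-((t - 1e-7) / 2e-8) ** 2)
trace_a = reflection + 0.3 * rng.standard_normal(n)   # 400 MHz channel
trace_b = reflection + 0.3 * rng.standard_normal(n)   # 900 MHz channel

win = 128
coh = np.empty(n - win)
for i in range(coh.size):
    a = trace_a[i:i + win] - trace_a[i:i + win].mean()
    b = trace_b[i:i + win] - trace_b[i:i + win].mean()
    coh[i] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
print("peak coherence near the reflection:", float(coh.max()))
```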

10.3390/app12063148 article EN cc-by Applied Sciences 2022-03-19

Machine learning models are vulnerable to adversarial examples. For the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic...

10.48550/arxiv.2003.12703 preprint EN other-oa arXiv (Cornell University) 2020-01-01

In our daily life environment, there is a lot of micro energy, such as vibration and low-grade thermal energy. In the past, due to the limitations of technical capabilities, this energy has not been effectively collected and utilized. With people's increasing attention to the effective use of energy and environmental protection, clean, renewable energy sources have become the focus of research in related fields. In this paper, by means of a survey of electromagnetic vibration energy acquisition systems at home and abroad, the working principle of the collector is clarified. On this basis, the collector is designed and realized, and we...
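
A back-of-envelope sketch of the harvester's working principle via Faraday's law, emf = −N dΦ/dt: a magnet vibrating through a coil induces a voltage proportional to the rate of flux change. All numbers below are assumptions for illustration.

```python
# Faraday's law, evaluated numerically for a sinusoidally varying flux.
import numpy as np

N = 500                        # coil turns (assumed)
f = 50.0                       # vibration frequency in Hz (assumed)
phi_max = 2e-5                 # peak flux through the coil in Wb (assumed)
t = np.linspace(0, 0.1, 10000)
phi = phi_max * np.sin(2 * np.pi * f * t)
emf = -N * np.gradient(phi, t)          # emf = -N * dPhi/dt
print(f"peak EMF ~ {np.max(np.abs(emf)):.3f} V")  # analytic: N*phi_max*2*pi*f
```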

10.1016/j.egypro.2017.10.274 article EN Energy Procedia 2017-10-01

Recently, adversarial attack methods have been developed to challenge the robustness of machine learning models. However, mainstream evaluation criteria experience limitations, even yielding discrepancies among results under different settings. By examining various attack algorithms, including gradient-based and query-based attacks, we notice the lack of a consensus on a uniform standard for unbiased performance evaluation. Accordingly, we propose a Piece-wise Sampling Curving (PSC) toolkit to effectively address...
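
A sketch of the evaluation idea behind curve-based comparison: report attack success as a curve sampled over perturbation budgets rather than at a single operating point. The numbers are synthetic, and this is not the PSC toolkit's actual API.

```python
# Compare attacks by a success-rate-vs-budget curve and its area,
# instead of a single success rate at one fixed budget.
import numpy as np

budgets = np.linspace(0, 16 / 255, 9)        # L-inf budgets to sample
success = 1 - np.exp(-60 * budgets)          # stand-in attack results
for eps, sr in zip(budgets, success):
    print(f"eps={eps:.4f}  success_rate={sr:.2f}")
auc = float(np.sum((success[1:] + success[:-1]) / 2 * np.diff(budgets)))
print("area under the curve:", auc)
```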

10.48550/arxiv.2104.11103 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Deploying DL models on mobile apps has become ever-more popular. However, existing studies show that attackers can easily reverse-engineer models in apps to steal intellectual property or generate effective attacks. A recent approach, model obfuscation, has been proposed to defend against such reverse engineering by obfuscating model representations, such as weights and computational graphs, without affecting model performance. These obfuscation methods use static methods to obfuscate the model representation, or they are half-dynamic but require...
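
A toy example of static weight obfuscation in the spirit the abstract describes: permuting one layer's hidden units and compensating downstream leaves the function unchanged while scrambling the stored weights. Real obfuscators also transform computational graphs; this is purely illustrative.

```python
# Permute hidden units of one layer and un-permute in the next:
# the weights look scrambled, but the network computes the same function.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(2, 8)
y_ref = net(x)

perm = torch.randperm(16)
with torch.no_grad():
    net[0].weight.copy_(net[0].weight[perm])     # scramble hidden layer
    net[0].bias.copy_(net[0].bias[perm])
    net[2].weight.copy_(net[2].weight[:, perm])  # compensate downstream

print(torch.allclose(net(x), y_ref, atol=1e-6))  # True: same function
```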

10.1145/3691620.3694998 preprint EN 2024-10-18

Recent studies show that deployed deep learning (DL) models, such as those of TensorFlow Lite (TFLite), can be easily extracted from real-world applications and devices by attackers to generate many kinds of attacks like adversarial attacks. Although securing deployed on-device DL models has gained increasing attention, no existing methods can fully prevent the aforementioned threats. Traditional software protection techniques have been widely explored; if on-device models can be implemented using pure code, such as C++, it will open the possibility...

10.48550/arxiv.2403.16479 preprint EN arXiv (Cornell University) 2024-03-25

In recent years, Large Language Models (LLMs) have gained widespread use, raising concerns about their security. Traditional jailbreak attacks often rely on the model's internal information or have limitations when exploring the unsafe behavior of the victim model, reducing their general applicability. In this paper, we introduce PathSeeker, a novel black-box jailbreak method, which is inspired by the game of rats escaping a maze. We think that each LLM has its unique "security maze", and attackers attempt to find the exit...

10.48550/arxiv.2409.14177 preprint EN arXiv (Cornell University) 2024-09-21

When mobile meets LLMs, mobile app users deserve more intelligent usage experiences. For this to happen, we argue that there is a strong need to apply LLMs to the mobile ecosystem. We therefore provide a research roadmap for guiding our fellow researchers to achieve that as a whole. In this roadmap, we sum up six directions that we believe are urgently required for research to enable native intelligence in mobile devices. In each direction, we further summarize the current research progress and the gaps that still need to be filled by researchers.

10.1145/3708528 article EN ACM Transactions on Software Engineering and Methodology 2024-12-20