- Adversarial Robustness in Machine Learning
- Anomaly Detection Techniques and Applications
- Topic Modeling
- Software Testing and Debugging Techniques
- Explainable Artificial Intelligence (XAI)
- Machine Learning and Data Classification
- Bacillus and Francisella bacterial research
- Software Engineering Research
- Biomedical Text Mining and Ontologies
- Natural Language Processing Techniques
- Simulation and Modeling Applications
- Reinforcement Learning in Robotics
- Machine Learning and Algorithms
- Software System Performance and Reliability
- Software Reliability and Analysis Research
- Privacy-Preserving Technologies in Data
- Cardiac Arrest and Resuscitation
- Advanced Software Engineering Methodologies
- Water Quality Monitoring Technologies
- Advanced Neural Network Applications
- Technology and Security Systems
- Underwater Acoustics Research
- Mobile and Web Applications
- Underwater Vehicles and Communication Systems
- Advanced SAR Imaging Techniques
- University of Illinois Urbana-Champaign (2019-2023)
- Xiangya Hospital Central South University (2022)
- Central South University (2022)
- University of Illinois System (2021)
- ETH Zurich (2020)
- University of Edinburgh (2020)
- Tsinghua University (2017)
- Tongji University (2012)
- Hunan Police Academy (2010)
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises great concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed against such attacks, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably...
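For context on the certified-defense side of this contrast, below is a minimal sketch of Gaussian randomized smoothing, one standard way to obtain robustness certificates. It is a generic illustration, not the specific method of this paper; `base_classifier` is a hypothetical black-box function mapping an input array to an integer label, and the confidence bound is a simple Hoeffding-style choice.

```python
# Generic sketch of Gaussian randomized smoothing for L2 certified robustness.
# NOT the paper's method; `base_classifier` is a hypothetical stand-in.
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
    """Predict with the smoothed classifier and return an L2 certified radius."""
    counts = {}
    for _ in range(n):
        label = base_classifier(x + np.random.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    top_label, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Hoeffding-style lower confidence bound on the top-class probability.
    p_lower = top_count / n - np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return None, 0.0                     # abstain: no certificate
    radius = sigma * norm.ppf(p_lower)       # certified L2 radius
    return top_label, radius
```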
Along with the broad deployment of deep learning (DL) systems, their lack of trustworthiness, in aspects such as robustness, fairness, and numerical reliability, is raising serious social concerns, especially in safety-critical scenarios such as autonomous driving and aircraft navigation. Hence, a rigorous and accurate evaluation of the trustworthiness of DL systems is essential and would be a prerequisite for improving trustworthiness. The first part of the talk will give an overview of certified methods. These provide computable guarantees in terms...
Certified robustness is a critical measure for assessing the reliability of machine learning systems. Traditionally, the computational burden associated with certifying models has posed a substantial challenge, particularly given the continuous expansion of model sizes. In this paper, we introduce an innovative approach to expedite the verification process for L2-norm certified robustness through sparse transfer learning. Our approach is both efficient and effective. It leverages results obtained from pre-training tasks and applies updates...
As machine learning (ML) systems become pervasive, safeguarding their security is critical. However, it has recently been demonstrated that motivated adversaries are able to mislead ML systems by perturbing test data using semantic transformations. While there exists a rich body of research providing provable robustness guarantees for ML models against Lp-bounded adversarial perturbations, guarantees against semantic perturbations remain largely underexplored. In this paper, we provide TSS, a unified framework for certifying ML robustness against general...
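To make the idea of defending against a semantic transformation concrete, here is a generic sketch of parameter-space smoothing over one transformation (rotation). It only illustrates the general idea behind certifying semantic perturbations and is not the TSS framework itself; `classifier` is a hypothetical function from a 2-D image array to an integer label.

```python
# Generic sketch: majority vote over Gaussian-perturbed rotation angles.
# Illustrative only; NOT the TSS framework.
import numpy as np
from scipy.ndimage import rotate

def rotation_smoothed_predict(classifier, image, sigma_deg=10.0, n=200, rng=None):
    """Majority vote of the classifier over randomly rotated copies of the image."""
    rng = np.random.default_rng(0) if rng is None else rng
    votes = {}
    for _ in range(n):
        angle = rng.normal(0.0, sigma_deg)
        rotated = rotate(image, angle, reshape=False, mode="nearest")
        label = classifier(rotated)
        votes[label] = votes.get(label, 0) + 1
    # A full certification method would turn the top-1 vs. top-2 vote margin
    # into a provably tolerable range of rotation angles.
    return max(votes, key=votes.get)
```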
Entity relation extraction technology can be used to extract entities and relations from medical literature and automatically establish professional mapping knowledge domains. The classical text classification model, convolutional neural networks for sentence classification (TEXTCNN), has been shown to have good performance, but it also has a long-distance dependency problem, which is a common problem of convolutional neural networks (CNNs). Recurrent neural networks (RNNs) address the long-distance dependency problem but cannot capture features at a specific scale in the text. To solve these problems, this...
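Since the abstract is truncated before the proposed architecture, here is a minimal PyTorch sketch of the general idea of combining convolutional (scale-specific) features with a recurrent (long-range) encoder for sentence classification. The layer sizes and the way the two branches are fused are illustrative assumptions, not the paper's exact model.

```python
# Minimal hybrid CNN + RNN sentence classifier (illustrative, not the paper's model).
import torch
import torch.nn as nn

class CnnRnnClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, n_filters=64, hidden=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Convolution captures n-gram (scale-specific) features.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        # BiLSTM captures long-distance dependencies across the sentence.
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(n_filters + 2 * hidden, n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        emb = self.embed(token_ids)                    # (batch, seq_len, emb_dim)
        conv_feat = torch.relu(self.conv(emb.transpose(1, 2))).max(dim=2).values
        rnn_out, _ = self.rnn(emb)
        rnn_feat = rnn_out.mean(dim=1)                 # average over time steps
        return self.fc(torch.cat([conv_feat, rnn_feat], dim=1))

logits = CnnRnnClassifier(vocab_size=5000)(torch.randint(0, 5000, (4, 32)))
```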
Following the Service-Oriented Architecture, a large number of diversified cloud services are exposed as Web APIs (Application Programming Interfaces), which serve as contracts between service providers and consumers. Due to their massive and broad applications, any flaw in cloud APIs may lead to serious consequences. API testing is thus necessary to ensure the availability, reliability, and stability of cloud services. This research proposes a model-based approach to automating API testing. From semi-structured specifications, like XML/HTML...
Machine learning techniques, especially deep neural networks (DNNs), have been widely adopted in various applications. However, DNNs have recently been found to be vulnerable to adversarial examples, i.e., maliciously perturbed inputs that can mislead the models into making arbitrary prediction errors. Empirical defenses have been studied, but many of them can be adaptively attacked again. Provable defenses provide a provable error bound for DNNs, while such a bound so far is far from satisfactory. To address this issue, in this paper, we present our...
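As a concrete example of how a provable error bound can be computed for a perturbation budget, below is a generic interval bound propagation (IBP) sketch in NumPy. It is not the defense presented in this paper; `weights` and `biases` are assumed to describe a plain feed-forward ReLU network.

```python
# Generic interval bound propagation (IBP) through a small ReLU network.
# Illustrative only; not the specific defense proposed in the paper.
import numpy as np

def ibp_bounds(weights, biases, x, eps):
    """Propagate the box [x - eps, x + eps] through linear + ReLU layers."""
    lower, upper = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < len(weights) - 1:             # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0.0)
            new_upper = np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper                      # bounds on the output logits

# If the lower bound of the true-class logit exceeds the upper bounds of all
# other logits, the prediction is certifiably robust within the eps-ball.
```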
As adversarial attacks against machine learning models have raised increasing concerns, many denoising-based defense approaches have been proposed. In this paper, we summarize and analyze the defense strategies in the form of a symmetric transformation via data denoising and reconstruction (denoted as $F$ + inverse $F$, i.e., the $F$-$IF$ framework). In particular, we categorize these denoising strategies from three aspects (i.e., the spatial domain, frequency domain, and latent space, respectively). Typically, denoising is performed on the entire adversarial example, and both the image and perturbation are...
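A toy illustration of the "forward transform, denoise, inverse transform" pattern in the frequency domain is sketched below: low-pass filtering an image before classification. It is a conceptual NumPy sketch of the $F$ + inverse $F$ idea, not the paper's analysis or any specific defense from it.

```python
# Toy frequency-domain denoising: F (FFT), filter, inverse F (IFFT).
import numpy as np

def lowpass_denoise(image, keep_ratio=0.25):
    """Zero out high-frequency components of a 2-D image and reconstruct it."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))              # F: forward transform
    h, w = image.shape
    mask = np.zeros_like(spectrum)
    kh, kw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = 1.0
    denoised = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))  # inverse F
    return np.real(denoised)
```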
Federated learning provides an effective paradigm to jointly optimize a model that benefits from rich distributed data while protecting data privacy. Nonetheless, the heterogeneous nature of distributed data makes it challenging to define and ensure fairness among local agents. For instance, it is intuitively "unfair" for agents with high-quality data to sacrifice their performance due to other agents with low-quality data. Currently popular egalitarian and weighted equity-based fairness measures suffer from the aforementioned pitfall. In this work, we aim to formally represent...
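The tension described above can be seen in a toy federated-averaging setup where agents differ only in label-noise level. The sketch below is a NumPy illustration of why a single averaged model can look "unfair" to high-quality agents; it does not implement the fairness notion proposed in this work, and the noise levels and data sizes are arbitrary assumptions.

```python
# Toy FedAvg round with per-agent evaluation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_agent(noise):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=noise, size=100)     # noise ~ data quality
    return X, y

agents = [make_agent(noise) for noise in (0.1, 0.1, 2.0)]  # one low-quality agent

def local_fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

global_w = np.mean([local_fit(X, y) for X, y in agents], axis=0)   # FedAvg step
for i, (X, y) in enumerate(agents):
    local_loss = np.mean((X @ local_fit(X, y) - y) ** 2)
    global_loss = np.mean((X @ global_w - y) ** 2)
    print(f"agent {i}: local {local_loss:.3f} vs global {global_loss:.3f}")
```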
Accurate and reliable agricultural information is the basis of implementing digital agriculture. Currently, the processing and collection of agricultural data are becoming more complex, so there is an urgent need to develop a portable system with a high degree of integration and a wide range of versatility that can process data in a "one-stop" service. The system can capture GPS coordinates of farmland, attribute data, and image information, which are sent to the monitoring center immediately over a 3G or GPRS network. In this paper, the proposed system supports Android and iOS mobile phones, Widgets...
For large industrial applications, system test cases are still often described in natural language (NL), and their number can reach the thousands. Test automation is to automatically execute the test cases. Achieving test automation typically requires substantial manual effort for creating executable scripts from these NL test cases. In particular, given that each test case consists of a sequence of steps, testers first implement an API method for each step and then write a script that invokes these methods sequentially for automation. Across different test cases,...
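The manual workflow described above, each NL step backed by an API method and a script invoking the methods in order, can be pictured with a small dispatching sketch. The step texts, patterns, and placeholder calls below are hypothetical and are not taken from the paper.

```python
# Schematic sketch: dispatching natural-language test steps to API methods.
import re

STEP_REGISTRY = []                           # (pattern, handler) pairs

def step(pattern):
    def register(func):
        STEP_REGISTRY.append((re.compile(pattern, re.IGNORECASE), func))
        return func
    return register

@step(r"open the login page")
def open_login_page():
    print("driver.get('/login')")            # placeholder for a real UI/API call

@step(r"enter username '(.+)'")
def enter_username(name):
    print(f"driver.type('#user', {name!r})")

def run_test_case(nl_steps):
    """Execute a natural-language test case by dispatching each step."""
    for text in nl_steps:
        for pattern, handler in STEP_REGISTRY:
            match = pattern.search(text)
            if match:
                handler(*match.groups())
                break
        else:
            raise NotImplementedError(f"no API method for step: {text}")

run_test_case(["Open the login page", "Enter username 'alice'"])
```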
Adversarial transferability is an intriguing property: an adversarial perturbation crafted against one model is also effective against another model, even when these models are from different families or training processes. To better protect ML systems against such attacks, several questions are raised: what are the sufficient conditions for adversarial transferability, and how can it be bounded? Is there a way to reduce transferability in order to improve the robustness of an ensemble model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models;...
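For readers unfamiliar with how transferability is measured empirically, here is a minimal PyTorch sketch: craft FGSM perturbations against a source model and check how often they also fool a target model. The models, data, and the FGSM attack here are generic stand-ins, not the paper's theoretical analysis.

```python
# Empirical transferability measurement with FGSM (illustrative stand-ins).
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_rate(source, target, x, y, eps=0.1):
    """Fraction of source-model adversarial examples that also fool the target."""
    x_adv = fgsm(source, x, y, eps)
    fooled_source = source(x_adv).argmax(dim=1) != y
    fooled_target = target(x_adv).argmax(dim=1) != y
    both = (fooled_source & fooled_target).float().sum()
    return (both / fooled_source.float().sum().clamp(min=1)).item()

source = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
target = nn.Sequential(nn.Linear(20, 32), nn.Tanh(), nn.Linear(32, 3))
x, y = torch.randn(64, 20), torch.randint(0, 3, (64,))
print(transfer_rate(source, target, x, y))
```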
Recent studies show that deep neural networks (DNNs) are vulnerable to adversarial examples, which aim to mislead DNNs by adding perturbations of small magnitude. To defend against such attacks, both empirical and theoretical defense approaches have been extensively studied for a single ML model. In this work, we analyze and provide the certified robustness of ensemble ML models, together with sufficient and necessary conditions of robustness for different ensemble protocols. Although ensemble models are shown to be more robust than a single model empirically;...
Entity relation extraction is an important task in the construction of professional knowledge graphs in the medical field. Research on entity relation extraction for academic books in the medical field has revealed that there is a great difference in the number of instances of different relations, which leads to a typical unbalanced data set that is difficult to recognize but has certain research value. In this article, we propose a new method based on data augmentation. According to the distribution of individual classes in the data set, the probability of whether a text is augmented during training was...
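A small sketch of the distribution-aware idea, making rarer relation classes more likely to be augmented, is shown below. The specific probability rule and class names are illustrative assumptions; the paper's exact formula is not reproduced here.

```python
# Class-frequency-aware augmentation probabilities (illustrative rule only).
import random
from collections import Counter

def augmentation_probs(labels):
    """Map each class to an augmentation probability inversely tied to its share."""
    counts = Counter(labels)
    max_count = max(counts.values())
    return {cls: 1.0 - counts[cls] / max_count for cls in counts}

def maybe_augment(text, label, probs, augment_fn):
    return augment_fn(text) if random.random() < probs[label] else text

labels = ["treats"] * 90 + ["causes"] * 9 + ["prevents"] * 1
probs = augmentation_probs(labels)
print(probs)   # rare classes get probabilities near 1, the majority class gets 0
```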
Despite great recent advances achieved by deep neural networks (DNNs), they are often vulnerable to adversarial attacks. Intensive research efforts have been made to improve the robustness of DNNs; however, most empirical defenses can be adaptively attacked again, and the theoretically certified robustness is limited, especially on large-scale datasets. One potential root cause of such vulnerabilities for DNNs is that although they have demonstrated powerful expressiveness, they lack the reasoning ability to make robust and reliable...
With the widespread deployment of deep neural networks (DNNs), ensuring the reliability of DNN-based systems is of great importance. Serious reliability issues such as system failures can be caused by numerical defects, one of the most frequent defects in DNNs. To assure high reliability against numerical defects, in this paper, we propose the RANUM approach, including novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes. To the best of our knowledge, RANUM is the first approach that confirms feasibility with...
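To give a flavor of what "potential numerical defect detection" means, here is a toy range-analysis check on a single operator: if the value range reaching a `log` can include non-positive values, the operator is flagged. This only illustrates the general idea and is not one of the RANUM techniques.

```python
# Toy range check for one class of numerical defect (illustrative only).
def check_log_operand(lower, upper):
    """Flag log(x) as a potential defect if the input range touches x <= 0."""
    return "potential defect: log of non-positive value" if lower <= 0 else "safe"

# A softmax output feeding into log() can round to exactly 0.0, so its
# reachable interval is [0, 1] and the log is flagged; clipping removes the risk.
print(check_log_operand(0.0, 1.0))     # potential defect
print(check_log_operand(1e-12, 1.0))   # safe (after clipping the input)
```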
Intensive algorithmic efforts have been made recently to enable rapid improvements of certified robustness for complex ML models. However, current robustness certification methods are only able to certify under a limited perturbation radius. Given that existing pure data-driven statistical approaches have reached a bottleneck, in this paper, we propose to integrate statistical ML models with knowledge (expressed as logical rules) as a reasoning component using Markov logic networks (MLN), so as to further improve the overall certified...
Automatic machine learning, or AutoML, holds the promise of truly democratizing the use of machine learning (ML), by substantially automating the work of data scientists. However, the huge combinatorial search space of candidate pipelines means that current AutoML techniques generate sub-optimal pipelines, or none at all, especially on large, complex datasets. In this work, we propose an AutoML technique, SapientML, which can learn from a corpus of existing datasets and their human-written pipelines, and efficiently generate a high-quality pipeline for a predictive task...
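The combinatorial-search problem the abstract refers to can be seen even in a deliberately tiny example: enumerating preprocessor and model combinations and scoring each pipeline by cross-validation. This scikit-learn sketch is only meant to make the search space concrete; SapientML's corpus-learning approach is not reproduced here.

```python
# Tiny exhaustive pipeline search (illustrative; not SapientML).
from itertools import product
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scalers = [("std", StandardScaler()), ("minmax", MinMaxScaler())]
models = [("logreg", LogisticRegression(max_iter=500)), ("tree", DecisionTreeClassifier())]

best = None
for (s_name, scaler), (m_name, model) in product(scalers, models):
    pipe = Pipeline([("scale", scaler), ("model", model)])
    score = cross_val_score(pipe, X, y, cv=5).mean()
    if best is None or score > best[0]:
        best = (score, f"{s_name} + {m_name}")

print(best)   # best (mean CV accuracy, pipeline description) found by the search
```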
Conformal prediction has shown strong performance in constructing statistically rigorous prediction sets for arbitrary black-box machine learning models, assuming the data is exchangeable. However, even small adversarial perturbations during inference can violate the exchangeability assumption, challenge the coverage guarantees, and result in a subsequent decline in empirical coverage. In this work, we propose a certifiably robust learning-reasoning conformal prediction framework (COLEP) via probabilistic circuits, which...
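For readers new to conformal prediction, the standard split conformal construction of "statistically rigorous prediction sets" is sketched below with a simple softmax-based nonconformity score. It covers only the vanilla, exchangeability-based procedure and does not include COLEP's probabilistic-circuit reasoning component or its robustness certification.

```python
# Standard split conformal prediction sets (vanilla procedure, not COLEP).
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with ~(1 - alpha) marginal coverage under exchangeability."""
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)     # stand-in calibration softmax outputs
cal_labels = rng.integers(0, 3, size=500)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```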
Recent advancements in building domain-specific large language models (LLMs) have shown remarkable success, especially in tasks requiring reasoning abilities like logical inference over complex relationships and multi-step problem solving. However, creating a powerful all-in-one LLM remains challenging due to the need for proprietary data and vast computational resources. As a resource-friendly alternative, we explore the potential of merging multiple expert models into a single LLM. Existing studies on model...
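As a point of reference, the simplest model-merging baseline is uniform parameter averaging of experts that share one architecture, sketched below in PyTorch with small stand-in modules. The paper's actual merging strategy is more involved and is not reproduced here.

```python
# Uniform parameter averaging: the simplest model-merging baseline (illustrative).
import torch
import torch.nn as nn

def average_state_dicts(state_dicts):
    """Uniformly average parameters of models that share one architecture."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
    return merged

experts = [nn.Linear(16, 4), nn.Linear(16, 4), nn.Linear(16, 4)]  # stand-in "experts"
merged_model = nn.Linear(16, 4)
merged_model.load_state_dict(average_state_dicts([m.state_dict() for m in experts]))
```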
Gradient estimation and vector space projection have been studied as two distinct topics. We aim to bridge the gap between them by investigating how to efficiently estimate the gradient based on a projected low-dimensional space. We first provide lower and upper bounds for gradient estimation under both linear and nonlinear projections, and outline checkable sufficient conditions under which one is better than the other. Moreover, we analyze the query complexity of projection-based gradient estimation and present a condition for query-efficient estimators. Built upon our theoretic...
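To ground the setting, here is a NumPy sketch of query-based (zeroth-order) gradient estimation restricted to a random linear subspace: finite differences along directions lifted from a low-dimensional space. It illustrates the projection idea in general terms only; the bounds and conditions derived in the paper are not reproduced.

```python
# Zeroth-order gradient estimation in a projected low-dimensional subspace.
import numpy as np

def projected_grad_estimate(f, x, proj, mu=1e-4, n_samples=50, rng=None):
    """Estimate grad f(x) with finite differences along directions proj @ z."""
    rng = np.random.default_rng(0) if rng is None else rng
    d_low = proj.shape[1]
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.normal(size=d_low)
        direction = proj @ z                         # lift to the full space
        grad += (f(x + mu * direction) - f(x)) / mu * direction
    return grad / n_samples

dim, d_low = 100, 10
proj = np.linalg.qr(np.random.default_rng(1).normal(size=(dim, d_low)))[0]
f = lambda v: np.sum(v ** 2)                         # toy objective, true grad = 2v
x = np.ones(dim)
# The estimate approximates the projection of the true gradient onto the subspace.
print(projected_grad_estimate(f, x, proj)[:5])
```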