Shiyu Liang

ORCID: 0009-0003-2917-2033
Research Areas
  • Neural Networks and Applications
  • Stochastic Gradient Optimization Techniques
  • Advanced Neural Network Applications
  • Adversarial Robustness in Machine Learning
  • Machine Learning and ELM
  • Neutrophil, Myeloperoxidase and Oxidative Mechanisms
  • Advanced Graph Neural Networks
  • Data Management and Algorithms
  • Model Reduction and Neural Networks
  • Machine Learning and Algorithms
  • Sparse and Compressive Sensing Techniques
  • Complex Network Analysis Techniques
  • Immune Response and Inflammation
  • Engineering Education and Technology
  • Water-Energy-Food Nexus Studies
  • Enzyme Catalysis and Immobilization
  • Machine Learning and Data Classification
  • Economic and Technological Developments in Russia
  • Microbial Metabolic Engineering and Bioproduction
  • Scientific Research and Philosophical Inquiry
  • Advanced Clustering Algorithms Research
  • Anomaly Detection Techniques and Applications
  • Graph Theory and Algorithms
  • Bayesian Modeling and Causal Inference
  • Nitric Oxide and Endothelin Effects

Shanghai Jiao Tong University
2014-2025

Dalian Institute of Chemical Physics
2023-2025

Chinese Academy of Sciences
2025

Dalian University
2025

Dalian University of Technology
2025

Chinese University of Hong Kong, Shenzhen
2022-2023

Chinese University of Hong Kong
2023

Institute of Process Engineering
2023

University of Chinese Academy of Sciences
2023

Northwest University
2023

We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by...

10.48550/arxiv.1706.02690 preprint EN other-oa arXiv (Cornell University) 2017-01-01
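The two ingredients the abstract names, temperature scaling and a small input perturbation along the score gradient, can be sketched as follows. This is a toy illustration in which a linear classifier `W @ x` stands in for the pre-trained network (so the gradient has a closed form); the `T` and `eps` values are only representative of the magnitudes such a method would tune on validation data:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def odin_score(x, W, T=1000.0, eps=0.0014):
    """Temperature-scaled max-softmax score after an ODIN-style perturbation.

    A linear model f(x) = W @ x stands in for the pre-trained network.
    """
    p = softmax((W @ x) / T)
    k = p.argmax()
    # gradient of log max-softmax w.r.t. x for the linear stand-in model
    grad = (W[k] - p @ W) / T
    # perturb the input in the direction that increases the softmax score
    x_pert = x + eps * np.sign(grad)
    return softmax((W @ x_pert) / T).max()
```

In-distribution inputs tend to receive higher perturbed scores than out-of-distribution ones, so thresholding `odin_score` yields the detector.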

Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number needed by a deep network for a given degree of approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of $\varepsilon$ uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend...

10.48550/arxiv.1610.04161 preprint EN public-domain arXiv (Cornell University) 2016-01-01
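The flavor of such depth-efficiency results can be illustrated with the standard sawtooth construction for $x^2$ (a textbook example from this literature, not necessarily this paper's exact proof): composing a fixed hat function with itself $m$ times yields a uniform error of $4^{-(m+1)}$, i.e., exponential decay in depth, while a one-layer piecewise-linear approximation needs a number of pieces polynomial in $1/\varepsilon$ for the same accuracy.

```python
import numpy as np

def tent(x):
    # hat function on [0, 1]; realizable with a few ReLU units
    return np.where(x < 0.5, 2 * x, 2 * (1 - x))

def deep_square(x, depth):
    """Depth-`depth` sawtooth approximation of x**2 on [0, 1]:
    x**2 = x - sum_{s>=1} g_s(x) / 4**s, where g_s is the s-fold
    composition of the tent map; truncation error is 4**-(depth+1).
    """
    approx = np.array(x, dtype=float)
    g = np.array(x, dtype=float)
    for s in range(1, depth + 1):
        g = tent(g)
        approx = approx - g / 4.0 ** s
    return approx
```

With `depth=10` the uniform error is below `4**-11` (about `2.4e-7`), whereas a single-layer piecewise-linear interpolant with a comparable unit budget is several orders of magnitude less accurate.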

One of the major concerns for neural network training is that the nonconvexity of the associated loss functions may cause a bad optimization landscape. The recent success of neural networks suggests that their loss landscape is not too bad, but what specific results do we know about the landscape? In this article, we review recent findings and results on the global landscape of neural networks.

10.1109/msp.2020.3004124 article EN IEEE Signal Processing Magazine 2020-09-01

Neuronal injury, aging, and cerebrovascular and neurodegenerative diseases such as cerebral infarction, Alzheimer's disease, Parkinson's disease, frontotemporal dementia, amyotrophic lateral sclerosis, and Huntington's disease are characterized by significant neuronal loss. Unfortunately, the neurons of most mammals, including humans, do not possess the ability to self-regenerate. Replenishment of the lost neurons becomes an appealing therapeutic strategy to reverse the disease phenotype. Transplantation of pluripotent neural stem cells...

10.4103/1673-5374.386400 article EN cc-by-nc-sa Neural Regeneration Research 2023-10-02

Microbial lipid extraction is a critical process in the production of biofuels and other valuable chemicals from oleaginous microorganisms. The process involves the separation of lipids from microbial cells. Given the complexity of cell walls and the demand for efficient and environmentally friendly methods, further research is still needed in this area. This study aims to pursue intracellular lipid extraction from yeasts using inexpensive solvents, without disrupting the cells and even maintaining a certain level of cell viability. We used fresh fermentation broth of Rhodotorula...

10.1186/s13068-025-02655-0 article EN cc-by-nc-nd Biotechnology for Biofuels and Bioproducts 2025-05-12

It is widely conjectured that the reason training algorithms for neural networks are successful is that all local minima lead to similar performance; for example, see (LeCun et al., 2015; Choromanska et al., 2015; Dauphin et al., 2014). Performance is typically measured in terms of two metrics: training performance and generalization performance. Here we focus on the training performance of single-layered neural networks for binary classification, and provide conditions under which the training error is zero at all local minima of a smooth hinge loss function. Our conditions are roughly of the following form: the neurons have to be strictly...

10.48550/arxiv.1803.00909 preprint EN other-oa arXiv (Cornell University) 2018-01-01

One of the main difficulties in analyzing neural networks is the non-convexity of the loss function, which may have many bad local minima. In this paper, we study the landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.

10.48550/arxiv.1805.08671 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Lipids produced by oleaginous yeasts are considered sustainable sources for the production of biofuels and oleochemicals. The red yeast Rhodosporidium toruloides can accumulate lipids to over 70% of its dry cell mass. To facilitate lipid extraction, a recombinant β-1,3-glucomannanase, MAN5C, has been applied to partially break down the R. toruloides cell wall. In this study, R. toruloides NP11 was engineered for secretory expression of MAN5C to simplify the lipid extraction process. Specifically, the expression cassette contained a codon-optimized gene...

10.1186/s40643-023-00639-2 article EN cc-by Bioresources and Bioprocessing 2023-03-02

Traditional landscape analysis of deep neural networks aims to show that no sub-optimal local minima exist in some appropriate sense. From this, one may be tempted to conclude that descent algorithms which escape saddle points will reach a good local minimum. However, basic optimization theory tells us that it is also possible for an algorithm to diverge to infinity if there are paths leading to infinity along which the loss function decreases. It is not clear whether, for non-linear neural networks, there exists a setting with no bad local minima in which such decreasing paths to infinity can...

10.48550/arxiv.1912.13472 preprint EN other-oa arXiv (Cornell University) 2019-01-01

The exponential growth of scientific literature requires effective management and extraction of valuable insights. While existing search engines excel at delivering results based on relational databases, they often neglect the analysis of collaborations between scientific entities and the evolution of ideas, as well as the in-depth analysis of content within publications. The representation of heterogeneous academic graphs, and the measurement, analysis, and mining of such graphs, pose significant challenges. To address these challenges, we present AceMap, an academic system...

10.48550/arxiv.2403.02576 preprint EN arXiv (Cornell University) 2024-03-04

Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties. However, due to the selection bias of training and testing data (e.g., training on small graphs and testing on large graphs, or training on dense graphs and testing on sparse graphs), distribution deviation is widespread. More importantly, we often observe a hybrid structure shift of both scale and density, despite the one-sided biased partition. The spurious...

10.1109/tkde.2024.3393109 article EN IEEE Transactions on Knowledge and Data Engineering 2024-05-02

NADPH oxidase 1 (NOX1) is primarily expressed in epithelial cells and is responsible for the local generation of reactive oxygen species (ROS). By specifically manipulating the redox microenvironment, NOX1 actively engages in immunity, especially in colorectal and pulmonary epithelia. To unravel the structural basis of NOX1-engaged immune processes, a predicted structure model was established using RaptorX deep learning models. The model illustrates a 6-transmembrane domain structure, a FAD binding domain, and an NADPH binding/NOXO1...

10.1371/journal.pone.0285206 article EN cc-by PLoS ONE 2023-05-03

In recent years, there has been a wide range of applications of crowdsensing in mobile social networks and vehicle networks. As centralized learning methods lead to unreliability of data collection, high cost of the central server, and concern over privacy, one important problem is how to carry out an accurate distributed learning process to estimate the parameters of an unknown model in crowdsensing. Motivated by this, we present the design, analysis, and evaluation of FINE, a distributed learning framework for incomplete-data and non-smooth estimation. Our framework is devoted...

10.1109/tnet.2018.2814779 article EN IEEE/ACM Transactions on Networking 2018-04-13

In this paper, we consider gradient descent on a regularized loss function for training an overparametrized neural network. We model the algorithm as an ODE and show how overparameterization and regularization work together to provide the right tradeoff between the training and generalization errors.

10.1109/cdc42340.2020.9304386 article EN 2021 60th IEEE Conference on Decision and Control (CDC) 2020-12-14
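The ODE view in the abstract above is the standard gradient-flow limit of gradient descent: as the step size shrinks, the iterates track $\dot{w} = -\nabla L(w)$. A minimal sketch on a toy regularized least-squares loss (an illustrative stand-in, not the paper's network model):

```python
import numpy as np

def grad(w, a, y, lam):
    # gradient of L(w) = 0.5 * (a @ w - y)**2 + 0.5 * lam * ||w||^2
    return (a @ w - y) * a + lam * w

def gradient_descent(w0, a, y, lam, eta, steps):
    # the discrete algorithm with step size eta
    w = w0.copy()
    for _ in range(steps):
        w -= eta * grad(w, a, y, lam)
    return w

def gradient_flow(w0, a, y, lam, t, n=10000):
    # fine Euler discretization of dw/dt = -grad L(w),
    # used here as a stand-in for the exact flow at time t
    w, h = w0.copy(), t / n
    for _ in range(n):
        w -= h * grad(w, a, y, lam)
    return w
```

Running gradient descent for `steps` iterations with step size `eta` stays close to the flow at time `t = eta * steps`, which is what licenses analyzing the discrete algorithm through the ODE.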

Graph pooling refers to the operation that maps a set of node representations into a compact form for graph-level representation learning. However, existing graph pooling methods are limited by the power of the Weisfeiler–Lehman (WL) test in graph discrimination performance. In addition, these methods often suffer from hard adaptability of hyper-parameters and training instability. To address these issues, we propose Hi-PART, a simple yet effective graph neural network (GNN) framework with a Hierarchical Partition Tree (HPT). In HPT, each layer...

10.1145/3636429 article EN ACM Transactions on Knowledge Discovery from Data 2023-12-14