Lianke Qin

ORCID: 0000-0002-1259-7137
Research Areas
  • Advanced Neural Network Applications
  • Neural Networks and Applications
  • Machine Learning and Algorithms
  • Adversarial Robustness in Machine Learning
  • Stochastic Gradient Optimization Techniques
  • Domain Adaptation and Few-Shot Learning
  • Advanced Image and Video Retrieval Techniques
  • Privacy-Preserving Technologies in Data
  • Machine Learning and ELM
  • Cryptography and Data Security
  • Complexity and Algorithms in Graphs
  • Quantum Information and Cryptography
  • Advanced Graph Neural Networks
  • Handwritten Text Recognition Techniques
  • Brain Tumor Detection and Classification
  • Algorithms and Data Compression
  • Sparse and Compressive Sensing Techniques
  • Model Reduction and Neural Networks
  • Bioinformatics and Genomic Networks
  • Gene expression and cancer classification
  • DNA and Biological Computing
  • Explainable Artificial Intelligence (XAI)

University of California, Santa Barbara
2023

Matrix sensing has many real-world applications in science and engineering, such as system control, distance embedding, and computer vision. The goal of matrix sensing is to recover a matrix $A_\star \in \mathbb{R}^{n \times n}$, based on a sequence of measurements $(u_i,b_i) \in \mathbb{R}^{n} \times \mathbb{R}$ such that $u_i^\top A_\star u_i = b_i$. Previous work [ZJD15] focused on the scenario where $A_{\star}$ has small rank, e.g. rank-$k$. Their analysis heavily relies on the RIP assumption, making it unclear how to generalize to the high-rank...
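As a toy illustration of the measurement model described above (not the paper's recovery algorithm), the sketch below generates a synthetic rank-$k$ ground truth $A_\star = UU^\top$ and collects measurement pairs $(u_i, b_i)$ with $b_i = u_i^\top A_\star u_i$; all names and sizes are illustrative.

```python
import random

def rank_k_matrix(n, k, rng):
    # A_star = U @ U^T with U of shape n x k, so rank(A_star) <= k
    U = [[rng.gauss(0, 1) for _ in range(k)] for _ in range(n)]
    return [[sum(U[i][t] * U[j][t] for t in range(k)) for j in range(n)]
            for i in range(n)]

def measure(A, u):
    # one matrix-sensing measurement: b = u^T A u
    n = len(A)
    return sum(u[i] * A[i][j] * u[j] for i in range(n) for j in range(n))

rng = random.Random(0)
n, k, m = 6, 2, 10                      # illustrative sizes
A_star = rank_k_matrix(n, k, rng)
measurements = []                       # the sequence (u_i, b_i)
for _ in range(m):
    u = [rng.gauss(0, 1) for _ in range(n)]
    measurements.append((u, measure(A_star, u)))
```

The recovery question the abstract refers to is the inverse problem: reconstruct `A_star` from `measurements` alone.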

10.48550/arxiv.2303.12298 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Deep learning has been widely used in many fields, but the model training process usually consumes massive computational resources and time. Therefore, designing an efficient neural network training method with a provable convergence guarantee is a fundamental and important research question. In this paper, we present a static half-space report data structure that consists of a fully connected two-layer neural network with shifted ReLU activation, enabling activated-neuron identification in sublinear time via geometric search...
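To make the sparsity idea concrete: with a shifted ReLU $\sigma_b(z) = \max(z - b, 0)$, a neuron $r$ contributes to the output only when $\langle w_r, x\rangle > b$, so the forward pass can sum over the activated set alone. The sketch below does this identification by a naive linear scan; the paper's point is that a geometric search structure finds the same set in sublinear time. Names here are illustrative.

```python
import random

def activated_neurons(W, x, b):
    # shifted ReLU sigma_b(z) = max(z - b, 0): neuron r fires iff <w_r, x> > b
    return [r for r, w in enumerate(W)
            if sum(wi * xi for wi, xi in zip(w, x)) > b]

def forward(W, a, x, b):
    # two-layer net f(x) = sum_r a_r * max(<w_r, x> - b, 0),
    # summing only over the (typically sparse) activated neurons
    out = 0.0
    for r in activated_neurons(W, x, b):
        out += a[r] * (sum(wi * xi for wi, xi in zip(W[r], x)) - b)
    return out

rng = random.Random(1)
m, d, b = 200, 8, 2.0                   # width m, input dim d, shift b
W = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(m)]
a = [rng.choice([-1.0, 1.0]) for _ in range(m)]
x = [rng.gauss(0, 1) for _ in range(d)]
active = activated_neurons(W, x, b)
# with a positive shift b, only a small fraction of the m neurons fire
```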

10.48550/arxiv.2307.06565 preprint EN cc-by arXiv (Cornell University) 2023-01-01

A rising trend in theoretical deep learning is to understand why deep learning works through the Neural Tangent Kernel (NTK) [jgh18], a kernel method that is equivalent to using gradient descent to train a multi-layer infinitely-wide neural network. NTK is a major step forward for the theory because it allows researchers to use traditional mathematical tools to analyze the properties of deep neural networks and to explain various neural network training techniques from a theoretical view. A natural extension of NTK to graphs is the \textit{Graph Neural Tangent Kernel (GNTK)}, and prior works have already provided a GNTK formulation for...
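For intuition, the empirical NTK of a two-layer ReLU net $f(x) = \frac{1}{\sqrt m}\sum_r a_r\,\mathrm{relu}(\langle w_r, x\rangle)$ with respect to the hidden weights has the closed form $K(x,x') = \frac{1}{m}\sum_r a_r^2\,\mathbf{1}[\langle w_r,x\rangle>0]\,\mathbf{1}[\langle w_r,x'\rangle>0]\,\langle x,x'\rangle$, which the toy sketch below evaluates at random initialization (a sketch for intuition, not the GNTK construction of the paper):

```python
import random

def ntk_entry(W, a, x, xp):
    # empirical NTK of f(x) = (1/sqrt(m)) * sum_r a_r * relu(<w_r, x>)
    # w.r.t. the hidden weights:
    #   K(x, x') = (1/m) * sum_r a_r^2 * 1[<w_r,x> > 0] * 1[<w_r,x'> > 0] * <x, x'>
    m = len(W)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    s = sum(a[r] ** 2 for r in range(m)
            if dot(W[r], x) > 0 and dot(W[r], xp) > 0)
    return s * dot(x, xp) / m

rng = random.Random(2)
m, d = 1000, 4
W = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(m)]
a = [rng.choice([-1.0, 1.0]) for _ in range(m)]
x = [1.0, 0.0, 0.0, 0.0]
K_xx = ntk_entry(W, a, x, x)
# as m grows, K(x, x) concentrates around ||x||^2 / 2
```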

10.48550/arxiv.2309.07452 preprint EN cc-by arXiv (Cornell University) 2023-01-01

In this paper, we consider a heavy inner product identification problem, which generalizes the Light Bulb problem ([1]): Given two sets $A \subset\{-1,+1\}^{d}$ and $B \subset\{-1,+1\}^{d}$ with $|A|=|B|=n$, if there are exactly $k$ pairs whose inner product passes a certain threshold, i.e., $\{\left(a_{1}, b_{1}\right), \cdots,\left(a_{k}, b_{k}\right)\} \subset A \times B$ such that $\forall i \in[k],\left\langle a_{i}, b_{i}\right\rangle \geq \rho \cdot d$ for a threshold $\rho \in(0,1)$, the goal is to identify those heavy inner products. We...
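The brute-force baseline for this problem is an $O(n^2 d)$ scan over all pairs, which the paper's algorithms are designed to beat; the sketch below implements that baseline and plants one heavy pair so there is something to find (setup values are illustrative):

```python
import random

def heavy_pairs(A, B, rho, d):
    # brute-force baseline: return all (i, j) with <a_i, b_j> >= rho * d
    return [(i, j) for i, a in enumerate(A) for j, b in enumerate(B)
            if sum(ai * bi for ai, bi in zip(a, b)) >= rho * d]

rng = random.Random(3)
n, d, rho = 50, 64, 0.6
A = [[rng.choice([-1, 1]) for _ in range(d)] for _ in range(n)]
B = [[rng.choice([-1, 1]) for _ in range(d)] for _ in range(n)]
# plant one heavy pair: b_7 agrees with a_3 on every coordinate,
# so <a_3, b_7> = d >= rho * d
B[7] = list(A[3])
found = heavy_pairs(A, B, rho, d)
```

Random $\pm 1$ pairs concentrate around inner product $0$, so for $\rho = 0.6$ the planted pair is (with high probability) the only one reported.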

10.1109/bigdata59044.2023.10386943 article EN 2023 IEEE International Conference on Big Data (Big Data) 2023-12-15

Adversarial training is a widely used strategy for making neural networks resistant to adversarial perturbations. For a network of width $m$ and $n$ input data points in $d$ dimensions, it takes $\Omega(mnd)$ time cost per iteration for the forward and backward computation. In this paper we analyze the convergence guarantee of the adversarial training procedure on a two-layer network with shifted ReLU activation, and show that only $o(m)$ neurons will be activated in each iteration. Furthermore, we develop an algorithm with cost $o(mnd)$ per iteration by applying half-space...
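For readers unfamiliar with the inner loop of adversarial training: each iteration perturbs the input in the direction that increases the loss before taking a gradient step on the weights. The toy sketch below shows one FGSM-style inner step for a linear model with squared loss (a generic illustration only; the paper analyzes two-layer shifted-ReLU networks, and `fgsm_perturb` is a hypothetical helper name):

```python
def fgsm_perturb(w, x, y, eps):
    # one FGSM-style inner step for a linear model with squared loss:
    #   L = 0.5 * (<w, x> - y)^2,   grad_x L = (<w, x> - y) * w
    # move x by eps in the sign direction of the input gradient
    pred = sum(wi * xi for wi, xi in zip(w, x))
    g = [(pred - y) * wi for wi in w]
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

def sq_loss(w, x, y):
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return 0.5 * (pred - y) ** 2

w, x, y = [1.0, -2.0], [0.5, 0.5], 0.0
x_adv = fgsm_perturb(w, x, y, eps=0.1)
# the perturbed point has strictly larger loss than the clean point
```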

10.48550/arxiv.2208.05395 preprint EN cc-by-nc-sa arXiv (Cornell University) 2022-01-01

There has been a recent effort in applying differential privacy to memory access patterns to enhance data privacy. This is called differential obliviousness. Differential obliviousness is a promising direction because it provides a principled trade-off between performance and the desired level of privacy. To date, it is still an open question whether differential obliviousness can speed up database processing with respect to full obliviousness. In this paper, we present the design and implementation of three new major database operators: selection with projection, grouping with aggregation, and foreign...
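To illustrate the core idea (a toy sketch, not the paper's operators): instead of padding a selection to the full table size (full obliviousness), a differentially oblivious operator can pad its output with a noised number of dummy records, so the observed access count is differentially private in the true count. All names here, including `oblivious_select`, are hypothetical.

```python
import math
import random

def laplace(scale, rng):
    # sample from Laplace(0, scale) by inverse CDF
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def oblivious_select(table, pred, eps, rng):
    # differentially oblivious selection sketch: pad the result with
    # shifted-Laplace-many dummy rows (None), so the count the adversary
    # observes is a DP-noised version of the true match count, typically
    # far below the full-obliviousness cost of padding to len(table)
    matches = [row for row in table if pred(row)]
    pad = max(0, int(laplace(1.0 / eps, rng) + 5.0 / eps))
    return matches + [None] * pad

rng = random.Random(4)
table = list(range(100))
out = oblivious_select(table, lambda r: r % 10 == 0, eps=1.0, rng=rng)
real = [r for r in out if r is not None]
```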

10.48550/arxiv.2212.05176 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Submodular functions have many real-world applications, such as document summarization, sensor placement, and image segmentation. For all of these applications, the key building block is how to compute the maximum value of a submodular function efficiently. We consider both the online and offline versions of the problem: in each iteration, the data set either changes incrementally or does not change, and the user can issue a query to maximize the function on a given subset of the data. The user can be malicious, issuing queries based on previous results to break the competitive ratio for...
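The standard building block referenced above is the greedy algorithm for monotone submodular maximization, shown below on a coverage function (a classic-textbook sketch, not the paper's online data structure):

```python
def greedy_max_cover(sets, budget):
    # classic greedy for monotone submodular maximization under a
    # cardinality constraint: repeatedly pick the set with the largest
    # marginal gain; achieves a (1 - 1/e) approximation for coverage
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_cover(sets, budget=2)
# greedy first takes {4,5,6,7} (gain 4), then {1,2,3} (gain 3)
```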

10.48550/arxiv.2305.08367 preprint EN cc-by arXiv (Cornell University) 2023-01-01

In this paper, we consider a heavy inner product identification problem, which generalizes the Light Bulb problem~(\cite{prr89}): Given two sets $A \subset \{-1,+1\}^d$ and $B \subset \{-1,+1\}^d$ with $|A|=|B| = n$, if there are exactly $k$ pairs whose inner product passes a certain threshold, i.e., $\{(a_1, b_1), \cdots, (a_k, b_k)\} \subset A \times B$ such that $\forall i \in [k], \langle a_i,b_i \rangle \geq \rho \cdot d$ for a threshold $\rho \in (0,1)$, the goal is to identify those heavy inner products. We provide an algorithm that runs in $O(n^{2 \omega...

10.48550/arxiv.2311.11429 preprint EN cc-by arXiv (Cornell University) 2023-01-01

In this paper, we propose Adam-Hash: an adaptive and dynamic multi-resolution hashing data structure for fast pairwise summation estimation. Given a data set $X \subset \mathbb{R}^d$, a binary function $f:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$, and a point $y \in \mathbb{R}^d$, the Pairwise Summation Estimate is $\mathrm{PSE}_X(y) := \frac{1}{|X|} \sum_{x \in X} f(x,y)$. For any given $X$, we need to design a data structure such that for every query point $y$, it approximately estimates $\mathrm{PSE}_X(y)$ in time that is sub-linear in $|X|$. Prior works on...
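The quantity being approximated is easy to state exactly; the sketch below computes $\mathrm{PSE}_X(y)$ by the $O(|X|)$ scan that hashing-based structures like Adam-Hash aim to beat, using a Gaussian kernel as one common (illustrative) choice of the binary function $f$:

```python
import math

def pse(X, f, y):
    # exact pairwise summation estimate:
    #   PSE_X(y) = (1/|X|) * sum_{x in X} f(x, y)
    # this O(|X|) scan is the baseline that sub-linear structures beat
    return sum(f(x, y) for x in X) / len(X)

def gaussian_kernel(x, y):
    # one common choice of the binary function f
    sq = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-sq)

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
y = [0.0, 0.0]
val = pse(X, gaussian_kernel, y)   # (1 + e^-1 + e^-1) / 3
```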

10.48550/arxiv.2212.11408 preprint EN cc-by arXiv (Cornell University) 2022-01-01