Bobak T. Kiani

ORCID: 0000-0003-1477-0308
Research Areas
  • Quantum Computing Algorithms and Architecture
  • Quantum Information and Cryptography
  • Neural Networks and Reservoir Computing
  • Neural Networks and Applications
  • Adversarial Robustness in Machine Learning
  • Stochastic Gradient Optimization Techniques
  • Matrix Theory and Algorithms
  • Machine Learning and Algorithms
  • Quantum and electron transport phenomena
  • Domain Adaptation and Few-Shot Learning
  • Anomaly Detection Techniques and Applications
  • Machine Learning and ELM
  • Markov Chains and Monte Carlo Methods
  • Theoretical and Computational Physics
  • Cold Atom Physics and Bose-Einstein Condensates
  • Gaussian Processes and Bayesian Inference
  • Parallel Computing and Optimization Techniques
  • Electromagnetic Scattering and Analysis
  • Tensor decomposition and applications
  • Advanced Graph Neural Networks
  • Face and Expression Recognition
  • Text and Document Classification Technologies
  • Quantum Mechanics and Applications
  • Constraint Satisfaction and Optimization
  • Advanced Data Processing Techniques

Massachusetts Institute of Technology
2018-2023

IIT@MIT
2022-2023

Moscow Institute of Thermal Technology
2021

One of the most important properties of classical neural networks is how surprisingly trainable they are, though their training algorithms typically rely on optimizing complicated, nonconvex loss functions. Previous results have shown that, unlike the case in classical networks, variational quantum models are often not trainable. The most studied phenomenon is the onset of barren plateaus in the training landscape of these models, arising when the models are very deep. This focus has made barren plateaus almost synonymous with the trainability of quantum models. Here, we show that barren plateaus are only a part of the story....

10.1038/s41467-022-35364-5 article EN cc-by Nature Communications 2022-12-15

Quantifying how far the output of a learning algorithm is from its target is an essential task in machine learning. However, in quantum settings, the loss landscapes of commonly used distance metrics often produce undesirable outcomes such as poor local minima and exponentially decaying gradients. To overcome these obstacles, we consider here the recently proposed quantum earth mover's (EM) or Wasserstein-1 distance, an analog to the classical EM distance. We show that the quantum EM distance possesses unique properties, not found in other commonly used quantum distance metrics, that make it more...

10.1088/2058-9565/ac79c9 article EN cc-by Quantum Science and Technology 2022-06-17
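The intuition behind the Wasserstein-1 (earth mover's) distance is easiest to see in its classical one-dimensional form, where it reduces to the L1 distance between cumulative distribution functions. A minimal sketch of that classical analog (not the quantum $W_1$ distance studied in the paper; the function name is illustrative):

```python
import numpy as np

def em_distance_1d(p, q):
    """Classical earth mover's (Wasserstein-1) distance between two
    probability distributions on the same 1D grid: the L1 distance
    between their cumulative distribution functions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.abs(np.cumsum(p - q)).sum())

# Moving all mass one bin over costs 1; two bins over costs 2.
print(em_distance_1d([1, 0, 0], [0, 1, 0]))  # → 1.0
print(em_distance_1d([1, 0, 0], [0, 0, 1]))  # → 2.0
```

Unlike pointwise metrics such as total variation, this distance grows with how far mass must be moved, which is the property the quantum analog seeks to retain.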

Quantum computers are known to provide an exponential advantage over classical computers for the solution of linear differential equations in high-dimensional spaces. Here, we present a quantum algorithm for the solution of nonlinear differential equations. The quantum algorithm provides an exponential advantage over classical algorithms for solving nonlinear differential equations. Potential applications include the Navier-Stokes equation, plasma hydrodynamics, epidemiology, and more.

10.48550/arxiv.2011.06571 preprint EN cc-by arXiv (Cornell University) 2020-01-01

In light of recently proposed quantum algorithms that incorporate symmetries in the hope of quantum advantage, we show that with symmetries that are restrictive enough, classical algorithms can efficiently emulate their quantum counterparts given certain classical descriptions of the input. Specifically, we give classical algorithms that calculate ground states and time-evolved expectation values for permutation-invariant Hamiltonians specified in the symmetrized Pauli basis, with runtimes polynomial in the system size. We use tensor-network methods to transform symmetry-equivariant operators...

10.22331/q-2023-11-28-1189 article EN cc-by Quantum 2023-11-28

We study the hardness of learning unitary transformations in $U(d)$ via gradient descent on the time parameters of alternating operator sequences. We provide numerical evidence that, despite the non-convex nature of the loss landscape, gradient descent always converges to the target unitary when the sequence contains $d^2$ or more parameters. Rates of convergence indicate a "computational phase transition." With less than $d^2$ parameters, gradient descent converges to a sub-optimal solution, whereas with $d^2$ or more parameters, gradient descent converges exponentially to an optimal solution.

10.48550/arxiv.2001.11897 preprint EN other-oa arXiv (Cornell University) 2020-01-01
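The setup above can be sketched classically for $d=2$: parameterize a product of alternating exponentials of two fixed non-commuting generators and descend the gate infidelity. This toy sketch uses Pauli generators, finite-difference gradients, and a backtracking step size; none of these choices are claimed to be the paper's exact setup.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 2
X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X generator
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z generator

def circuit(t):
    """Alternating operator sequence V(t) = ... e^{-i t_2 Z} e^{-i t_1 X}."""
    V = np.eye(d, dtype=complex)
    for k, tk in enumerate(t):
        V = expm(-1j * tk * (X if k % 2 == 0 else Z)) @ V
    return V

def loss(t, U):
    """Phase-invariant gate infidelity 1 - |tr(U^dag V(t))| / d, in [0, 1]."""
    return 1 - abs(np.trace(U.conj().T @ circuit(t))) / d

U = circuit(rng.normal(size=d * d))   # target drawn from the same ansatz
t = rng.normal(size=d * d)            # d^2 = 4 parameters
init = cur = loss(t, U)
lr = 0.5
for _ in range(300):
    # Forward-difference estimate of the gradient.
    g = np.array([(loss(t + 1e-6 * e, U) - cur) / 1e-6
                  for e in np.eye(len(t))])
    step = t - lr * g
    new = loss(step, U)
    if new < cur:                     # accept only improving steps
        t, cur, lr = step, new, lr * 1.1
    else:
        lr *= 0.5                     # backtrack on failure
print(cur < init)                     # → True
```

With $d^2$ parameters the loss drops well below its starting value, consistent with the overparameterized regime the paper describes.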

We consider the problem of devising a suitable quantum error correction (QEC) procedure for a generic quantum noise acting on a quantum circuit. In general, there is no analytic universal procedure to obtain the encoding and correction unitary gates, and the problem is even harder if the noise is unknown and has to be reconstructed. The existing procedures rely on variational quantum algorithms (VQAs) and are very difficult to train, since the size of the gradient of the cost function decays exponentially with the number of qubits. We address this problem using a cost function based on the quantum Wasserstein distance of order 1 ($Q{W}_{1}$). At variance with other...

10.1103/physreva.108.022611 article EN Physical review. A/Physical review, A 2023-08-18

A central task in medical imaging is the reconstruction of an image or function from data collected by medical devices (e.g., CT, MRI, and PET scanners). We provide quantum algorithms for image reconstruction with exponential speedup over classical counterparts when the data is input as a quantum state. Since the outputs of our algorithms are stored in quantum states, individual pixels of reconstructed images may not be efficiently accessed classically; instead, we discuss various methods to extract information from the output states using a variety of quantum post-processing algorithms.

10.48550/arxiv.2004.02036 preprint EN other-oa arXiv (Cornell University) 2020-01-01

One of the most important properties of classical neural networks is how surprisingly trainable they are, though their training algorithms typically rely on optimizing complicated, nonconvex loss functions. Previous results have shown that, unlike the case in classical networks, variational quantum models are often not trainable. The most studied phenomenon is the onset of barren plateaus in the training landscape of these models, arising when the models are very deep. This focus has made barren plateaus almost synonymous with the trainability of quantum models. Here, we show that barren plateaus are only a part of the story....

10.48550/arxiv.2205.05786 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Quantum algorithms for differential equation solving, data processing, and machine learning potentially offer an exponential speedup over all known classical algorithms. However, there also exist obstacles to obtaining this potential speedup in useful problem instances. The essential obstacle for quantum differential equation solving is that outputting useful information may require difficult postprocessing, and the essential obstacle for quantum data processing is that inputting the data is a difficult task just by itself. In this study, we demonstrate that, when combined, these difficulties solve one...

10.1103/physreva.105.022415 article EN Physical review. A/Physical review, A 2022-02-14

In learning with recurrent or very deep feed-forward networks, employing unitary matrices in each layer can be effective at maintaining long-range stability. However, restricting network parameters to be unitary typically comes at the cost of expensive parameterizations or increased training runtime. We propose instead an efficient method based on rank-$k$ updates -- or their rank-$k$ approximation -- that maintains performance at a nearly optimal training runtime. We introduce two variants of this method, named Direct (projUNN-D) and Tangent...

10.48550/arxiv.2203.05483 preprint EN other-oa arXiv (Cornell University) 2022-01-01
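The projection step at the heart of such methods can be sketched classically: after a low-rank update pushes the weight matrix off the unitary manifold, project back to the nearest unitary, which is the polar factor of the updated matrix. A minimal numpy sketch using a full SVD for clarity; the paper's contribution is doing this efficiently by exploiting the rank-$k$ structure, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Start from a random unitary (QR of a complex Gaussian matrix).
W, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def project_to_unitary(M):
    """Nearest unitary to M in Frobenius norm: the polar factor
    U @ Vh from the SVD M = U S Vh."""
    U, _, Vh = np.linalg.svd(M)
    return U @ Vh

# A hypothetical rank-1 "gradient" update, followed by re-projection.
u = rng.normal(size=(n, 1))
v = rng.normal(size=(n, 1))
W_new = project_to_unitary(W - 0.1 * u @ v.T)

print(np.allclose(W_new.conj().T @ W_new, np.eye(n)))  # → True
```

The projected matrix is exactly unitary again, so stability properties of the unitary parameterization are preserved after every update.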

We prove that the binary classifiers of bit strings generated by random wide deep neural networks with ReLU activation function are biased towards simple functions. The simplicity is captured by the following two properties. For any given input bit string, the average Hamming distance of the closest bit string with a different classification is at least sqrt(n / (2π log n)), where n is the length of the string. Moreover, if the bits of the initial string are flipped randomly, the average number of flips required to change the classification grows linearly with n. These results are confirmed by numerical...

10.48550/arxiv.1812.10156 preprint EN other-oa arXiv (Cornell University) 2018-01-01

Quantum machine learning has the potential to provide powerful algorithms for artificial intelligence. The pursuit of quantum advantage in machine learning is an active area of research. For current noisy intermediate-scale quantum computers, various quantum-classical hybrid algorithms have been proposed. One such previously proposed hybrid algorithm is a gate-based variational embedding classifier, which is composed of a classical neural network and a parameterized gate-based quantum circuit. We propose a quantum variational embedding classifier based on an analog quantum computer, where the control signals vary...

10.1103/physrevapplied.19.054023 article EN Physical Review Applied 2023-05-05

We consider the problem of estimating the ground state energy of quantum $p$-local spin glass random Hamiltonians, the quantum analogues of widely studied classical spin glass models. Our main result shows that the maximum energy achievable by product states has a well-defined limit (for even $p$) as $n\to\infty$ and is $E_{\text{product}}^\ast=\sqrt{2 \log p}$ in the limit of large $p$. This value can be interpreted as the maximal energy of a much simpler so-called Random Energy Model, widely studied in the setting of classical spin glasses. The proof of the existence of the limit follows from an extension of Fekete's Lemma after we...

10.48550/arxiv.2404.07231 preprint EN arXiv (Cornell University) 2024-04-03

Group convolutions and cross-correlations, which are equivariant to the actions of group elements, are commonly used in mathematics to analyze or take advantage of symmetries inherent in a given problem setting. Here, we provide efficient quantum algorithms for performing linear group convolutions and cross-correlations on data stored as quantum states. Runtimes for our algorithms are logarithmic in the dimension of the group, thus offering an exponential speedup compared to classical algorithms when the input data is provided as a quantum state and the linear operations are well conditioned. Motivated by the rich literature...

10.1103/physreva.106.032402 article EN Physical review. A/Physical review, A 2022-09-01
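As a concrete classical instance of the operation above, group convolution over the cyclic group Z_n is ordinary circular convolution, and its equivariance to the group action (cyclic shifts) can be checked directly. A small illustrative sketch (the function name is hypothetical):

```python
import numpy as np

def group_convolve(f, g):
    """Convolution over the cyclic group Z_n:
    (f * g)[x] = sum_y f[y] * g[(x - y) mod n]."""
    n = len(f)
    return np.array([sum(f[y] * g[(x - y) % n] for y in range(n))
                     for x in range(n)])

f = np.array([1.0, 2.0, 0.0, -1.0])
g = np.array([0.5, 0.0, 1.0, 0.0])

# Equivariance: convolving a shifted signal equals shifting the result.
lhs = group_convolve(np.roll(f, 1), g)
rhs = np.roll(group_convolve(f, g), 1)
print(np.allclose(lhs, rhs))  # → True
```

The same definition extends to any finite group by replacing `(x - y) % n` with the group operation, which is the structure the quantum algorithms exploit.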

The manifold hypothesis presumes that high-dimensional data lies on or near a low-dimensional manifold. While the utility of encoding geometric structure has been demonstrated empirically, rigorous analysis of its impact on the learnability of neural networks is largely missing. Several recent results have established hardness results for learning feedforward and equivariant networks under i.i.d. Gaussian or uniform Boolean data distributions. In this paper, we investigate the hardness of learning under the manifold hypothesis. We ask which minimal assumptions on the curvature...

10.48550/arxiv.2406.01461 preprint EN arXiv (Cornell University) 2024-06-03

The polar decomposition of a matrix $A$ is $A=UB$, where $B$ is a positive Hermitian matrix and $U$ is unitary (or, if $A$ is not square, an isometry). This paper shows that the ability to apply a Hamiltonian $\pmatrix{ 0 & A^\dagger \cr A & 0 \cr}$ translates into the ability to perform the transformations $e^{-iBt}$ and $U$ in a deterministic fashion. We show how to use the quantum polar decomposition algorithm to solve the quantum Procrustes problem, to perform pretty good measurements, to find the closest unitary to any Hamiltonian, and to perform a Hamiltonian version of the quantum singular value transformation.

10.48550/arxiv.2006.00841 preprint EN other-oa arXiv (Cornell University) 2020-01-01
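The classical polar decomposition referenced above can be computed from the SVD, which makes the factors $U$ and $B$ concrete. A purely classical reference sketch (the quantum Hamiltonian algorithm is the paper's contribution, not reproduced here):

```python
import numpy as np

def polar_decomposition(A):
    """Polar decomposition A = U B with U unitary (an isometry for
    non-square A) and B positive semidefinite Hermitian, via the
    SVD A = W S Vh: U = W Vh and B = V S Vh."""
    W, s, Vh = np.linalg.svd(A, full_matrices=False)
    U = W @ Vh
    B = Vh.conj().T @ np.diag(s) @ Vh
    return U, B

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, B = polar_decomposition(A)

print(np.allclose(U @ B, A),                   # A = U B
      np.allclose(U.conj().T @ U, np.eye(3)),  # U unitary
      np.allclose(B, B.conj().T))              # B Hermitian
```

Since $B = (A^\dagger A)^{1/2}$, applying $e^{-iBt}$ is exactly the Hamiltonian-simulation task the paper derives from access to the block off-diagonal Hamiltonian.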

Many quantum algorithms for numerical linear algebra assume black-box access to a block-encoding of the matrix of interest, which is a strong assumption when the matrix is not sparse. Kernel matrices, which arise from discretizing a kernel function $k(x,x')$, have a variety of applications in mathematics and engineering. They are generally dense and full-rank. Classically, the celebrated fast multipole method performs matrix multiplication on kernel matrices of dimension $N$ in time almost linear in $N$ by using the algebraic framework of hierarchical matrices. In light...

10.22331/q-2022-12-13-876 article EN cc-by Quantum 2022-12-13
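The claim that kernel matrices are generally dense and full-rank is easy to check on a small example. A sketch assuming a Gaussian kernel on a handful of well-separated 1D points (the kernel and grid choices are illustrative, not the paper's):

```python
import numpy as np

def kernel_matrix(x, k):
    """Dense kernel matrix K[i, j] = k(x[i], x[j]) obtained by
    discretizing a kernel function on points x."""
    return np.array([[k(xi, xj) for xj in x] for xi in x])

# Gaussian kernel on 6 distinct points: dense, symmetric, full-rank.
x = np.linspace(0.0, 5.0, 6)
K = kernel_matrix(x, lambda a, b: np.exp(-(a - b) ** 2))
print(K.shape, np.linalg.matrix_rank(K))  # → (6, 6) 6
```

Every entry of `K` is nonzero, so sparsity-based access models do not apply; hierarchical-matrix methods exploit the off-diagonal low-rank structure instead.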

The quantum singular value transformation is a powerful quantum algorithm that allows one to apply a polynomial transformation to the singular values of a matrix that is embedded as a block of a unitary transformation. This paper shows how to perform the quantum singular value transformation for a matrix that can be embedded as a block of a Hamiltonian. The transformation can be implemented in a purely Hamiltonian context by the alternating application of Hamiltonians for chosen intervals: it is an example of the Quantum Alternating Operator Ansatz (generalized QAOA). We also show how to use inverse block encoding to implement a unitary of which a given Hamiltonian is a block. Inverse block encoding leads to novel procedures for matrix multiplication...

10.48550/arxiv.2104.01410 preprint EN cc-by arXiv (Cornell University) 2021-01-01
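The notion of a matrix "embedded as a block of a unitary" can be made concrete numerically: any $A$ with spectral norm at most 1 sits in the top-left block of the unitary $\pmatrix{A & \sqrt{I - AA^\dagger} \cr \sqrt{I - A^\dagger A} & -A^\dagger}$. A classical sketch of this standard construction (the helper name is illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def block_encode(A):
    """Embed a matrix A with spectral norm <= 1 as the top-left block
    of the unitary [[A, sqrt(I - A A^dag)], [sqrt(I - A^dag A), -A^dag]]."""
    n, m = A.shape
    B = sqrtm(np.eye(n) - A @ A.conj().T)
    C = sqrtm(np.eye(m) - A.conj().T @ A)
    return np.block([[A, B], [C, -A.conj().T]])

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A /= 2 * np.linalg.norm(A, 2)          # rescale so the norm is 1/2
U = block_encode(A)

print(np.allclose(U.conj().T @ U, np.eye(4)),  # U is unitary
      np.allclose(U[:2, :2], A))               # A is its top-left block
```

Applying a polynomial to the singular values of this embedded block is what the quantum singular value transformation accomplishes; the paper's Hamiltonian variant replaces the unitary embedding with a Hamiltonian one.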

Self-supervised learning (SSL) has emerged as a powerful framework to learn representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study such an interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in a theory friendly setup,...

10.48550/arxiv.2302.02774 preprint EN cc-by-nc-sa arXiv (Cornell University) 2023-01-01

We study the problem of learning equivariant neural networks via gradient descent. The incorporation of known symmetries ("equivariance") into neural nets has empirically improved the performance of learning pipelines, in domains ranging from biology to computer vision. However, a rich yet separate line of learning theoretic research has demonstrated that actually learning shallow, fully-connected (i.e. non-symmetric) networks has exponential complexity in the correlational statistical query (CSQ) model, a framework encompassing gradient descent. In this work, we ask: are...

10.48550/arxiv.2401.01869 preprint EN cc-by arXiv (Cornell University) 2024-01-01

Random spin systems at low temperatures are glassy and feature computational hardness in finding low-energy states. We study the random all-to-all interacting fermionic Sachdev--Ye--Kitaev (SYK) model and prove that, in contrast, (I) its low-energy states have polynomial circuit depth, yet (II) its annealed and quenched free energies agree to inverse-polynomially small temperatures, ruling out a phase transition in this sense. These results are derived by showing that fermionic and spin systems significantly differ in their \emph{commutation index}, which...

10.48550/arxiv.2408.15699 preprint EN arXiv (Cornell University) 2024-08-28