- Quantum Computing Algorithms and Architecture
- Quantum Information and Cryptography
- Neural Networks and Reservoir Computing
- Quantum Mechanics and Applications
- Neural Networks and Applications
- Computability, Logic, AI Algorithms
- Quantum and electron transport phenomena
- Quantum-Dot Cellular Automata
- Reliability and Maintenance Optimization
- Machine Learning in Materials Science
- Social Media and Politics
- Computational and Text Analysis Methods
- Hate Speech and Cyberbullying Detection
- Industrial Vision Systems and Defect Detection
- Statistical Mechanics and Entropy
- Misinformation and Its Impacts
- Parallel Computing and Optimization Techniques
- Quantum many-body systems
- Stochastic Gradient Optimization Techniques
- COVID-19 epidemiological studies
- Machine Learning and Data Classification
- Peacebuilding and International Security
- Virology and Viral Diseases
- Engineering Applied Research
- Machine Learning and Algorithms
Xanadu Quantum Technologies (Canada)
2018-2024
University of Johannesburg
2023-2024
University of KwaZulu-Natal
2013-2023
University of Portland
2021
Flatiron Health (United States)
2019
National Institute for Theoretical Physics
2014-2018
Freie Universität Berlin
2013
In October 2018 an APS Physics Next Workshop on Machine Learning was held in Riverhead, NY. This article reviews and summarizes the proceedings of this very broad, emerging field.
A basic idea of quantum computing is surprisingly similar to that of kernel methods in machine learning, namely to efficiently perform computations in an intractably large Hilbert space. In this Letter we explore some theoretical foundations of this link and show how it opens up a new avenue for the design of quantum machine learning algorithms. We interpret the process of encoding inputs into a quantum state as a nonlinear feature map that maps data to a Hilbert space which the quantum computer can then analyze as input. Based on this link, we discuss two approaches for building a quantum model for classification. The first...
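The feature-map picture above can be illustrated classically. The sketch below (a minimal assumption-laden toy, not code from the paper) angle-encodes a scalar into a single-qubit state and evaluates the induced kernel as the squared overlap of the encoded states:

```python
import numpy as np

def feature_map(x):
    # Angle-encode a scalar into the single-qubit state |phi(x)> = RY(x)|0>
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    # Squared overlap of the encoded states: k(x, x') = |<phi(x)|phi(x')>|^2
    return np.abs(feature_map(x1) @ feature_map(x2)) ** 2

# For this encoding, k(x, x') = cos((x - x')/2)^2, so identical inputs
# give kernel value 1 and inputs encoded to orthogonal states give ~0.
print(quantum_kernel(0.3, 0.3))        # ~1.0
print(quantum_kernel(0.0, np.pi))      # ~0 (orthogonal states)
```

Any kernel machine (e.g. a support vector classifier) can then consume this kernel matrix, which is the sense in which the encoding acts as a nonlinear feature map.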
An important application for near-term quantum computing lies in optimization tasks, with applications ranging from chemistry and drug discovery to machine learning. In many settings --- most prominently in so-called parametrized or variational algorithms --- the objective function is a result of hybrid quantum-classical processing. To optimize the objective, it is useful to have access to exact gradients of quantum circuits with respect to gate parameters. This paper shows how gradients of expectation values of quantum measurements can be estimated...
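The kind of gradient estimator the abstract refers to can be demonstrated with a numpy simulation (a toy sketch, not the paper's general recipe): for a single RY rotation with Pauli-Z measurement, the exact gradient is a difference of two expectation values shifted by ±π/2:

```python
import numpy as np

def expval_z(theta):
    # <Z> for the state RY(theta)|0>, simulated with 2x2 matrices
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    state = ry @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return state @ z @ state

def parameter_shift(f, theta):
    # Exact gradient from two shifted expectation values
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.7
# Analytically <Z> = cos(theta), so the gradient should be -sin(theta)
print(parameter_shift(expval_z, theta), -np.sin(theta))
```

Unlike finite differences, both evaluations are expectation values the hardware can estimate directly, which is why such rules matter for variational algorithms.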
A machine learning design is developed to train a quantum circuit specialized in solving a classification problem. In addition to discussing the training method and the effect of noise, it is shown that such circuits perform reasonably well on classical benchmarks.
PennyLane is a Python 3 software framework for differentiable programming of quantum computers. The library provides a unified architecture for near-term quantum computing devices, supporting both qubit and continuous-variable paradigms. PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation. PennyLane thus extends the automatic differentiation algorithms common in optimization and machine learning to include quantum and hybrid computations. A...
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers have investigated whether quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language...
Quantum computers can be used for supervised learning by treating parametrized quantum circuits as models that map data inputs to predictions. While a lot of work has been done to investigate the practical implications of this approach, many important theoretical properties of these models remain unknown. Here, we investigate how the strategy with which data are encoded into the model influences its expressive power as a function approximator. We show that one can naturally write such a model as a partial Fourier series in the data, where the accessible frequencies...
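The Fourier-series claim can be checked numerically in a minimal case (the circuit below is a toy assumption, not the paper's construction): with one data-encoding RZ(x) gate between two trainable RY rotations, the model output is a degree-1 trigonometric polynomial in x, so fitting it at three points reproduces it everywhere:

```python
import numpy as np

def model(x, a, b):
    # Variational circuit: RY(a) -> data-encoding RZ(x) -> RY(b), measure Z
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]])
    rz = np.diag([np.exp(-1j * x / 2), np.exp(1j * x / 2)])
    state = ry(b) @ rz @ ry(a) @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return (state.conj() @ z @ state).real

a, b = 0.4, 1.1
# One encoding gate -> f(x) = c0 + c1*cos(x) + c2*sin(x).
# Solve for the three coefficients from three sample points...
xs = np.array([0.0, 1.0, 2.0])
A = np.column_stack([np.ones(3), np.cos(xs), np.sin(xs)])
coeffs = np.linalg.solve(A, [model(x, a, b) for x in xs])
# ...and the fitted series matches the circuit output everywhere:
for x in np.linspace(-3, 3, 7):
    f_series = coeffs @ [1.0, np.cos(x), np.sin(x)]
    assert np.isclose(model(x, a, b), f_series)
```

Repeating the encoding gate would enrich the accessible frequency spectrum, which is the dependence on encoding strategy the abstract describes.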
We introduce a general method for building neural networks on quantum computers. The quantum neural network is a variational quantum circuit built in the continuous-variable (CV) architecture, which encodes quantum information in continuous degrees of freedom such as the amplitudes of the electromagnetic field. This circuit contains a layered structure of continuously parameterized gates which is universal for CV quantum computation. Affine transformations and nonlinear activation functions, two key elements in neural networks, are enacted using Gaussian and non-Gaussian gates,...
Lately, much attention has been given to quantum algorithms that solve pattern recognition tasks in machine learning. Many of these learning algorithms try to implement classical models on large-scale universal quantum computers that have access to non-trivial subroutines such as Hamiltonian simulation, amplitude amplification and phase estimation. We approach the problem from the opposite direction and analyse a distance-based classifier that is realised by a simple quantum interference circuit. After state preparation, the circuit only consists...
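The abstract is cut off before the decision rule, but the flavour of a distance-based classifier of this kind can be sketched classically. The rule below (an illustrative assumption, not quoted from the paper) weights each unit-normalized training point by one minus its squared distance to the query and takes the sign of the label-weighted sum:

```python
import numpy as np

def classify(x_new, X, y):
    # Distance-weighted vote on unit-normalized data:
    # score = sum_m y_m * (1 - |x - x_m|^2 / (4M)), prediction = sign(score)
    norm = lambda v: v / np.linalg.norm(v)
    x_new = norm(x_new)
    M = len(X)
    score = sum(y_m * (1 - np.linalg.norm(x_new - norm(x_m)) ** 2 / (4 * M))
                for x_m, y_m in zip(X, y))
    return 1 if score >= 0 else -1

X = [np.array([1.0, 0.1]), np.array([0.1, 1.0])]
y = [1, -1]
print(classify(np.array([0.9, 0.2]), X, y))   # near the first point -> 1
print(classify(np.array([0.2, 0.9]), X, y))   # near the second point -> -1
```

The appeal of the interference construction is that such a weighted sum falls out of a measurement probability rather than an explicit loop over training points.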
We give an algorithm for prediction on a quantum computer which is based on a linear regression model with least-squares optimisation. As opposed to related previous contributions, which suffer from the problem of reading out the optimal parameters of the fit, our scheme focuses on the machine learning task of guessing the output corresponding to a new input given examples of data points. Furthermore, we adapt the algorithm to process non-sparse matrices that can be represented by low-rank approximations, and significantly improve the dependency on its...
Within the context of hybrid quantum-classical optimization, gradient descent based optimizers typically require the evaluation of expectation values with respect to the outcome of parameterized quantum circuits. In this work, we explore the consequences of the prior observation that estimating these quantities on quantum hardware results in a form of stochastic optimization. We formalize this notion, which allows us to show that in many relevant cases, including VQE, QAOA and certain quantum classifiers, estimating expectation values with $k$ measurement outcomes...
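Why finite-shot estimation amounts to stochastic optimization is easy to see in a toy numpy simulation (an illustrative assumption, not the paper's analysis): each shot is a ±1 sample, so the k-shot estimate of an expectation value is an unbiased sample mean whose spread shrinks roughly as 1/sqrt(k):

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 0.9
p_up = np.cos(theta / 2) ** 2      # probability of outcome +1 for RY(theta)|0>
exact = np.cos(theta)              # exact <Z>

def estimate(k):
    # Estimate <Z> as the mean of k single-shot +/-1 outcomes
    shots = rng.choice([1.0, -1.0], size=k, p=[p_up, 1 - p_up])
    return shots.mean()

# The error typically shrinks as the shot count grows
for k in [10, 1000, 100_000]:
    print(k, abs(estimate(k) - exact))
```

A gradient built from such estimates is itself a noisy but unbiased gradient, which is exactly the setting stochastic gradient descent is designed for.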
Quantum classifiers are trainable quantum circuits used as machine learning models. The first part of the circuit implements a quantum feature map that encodes classical inputs into quantum states, embedding the data in a high-dimensional Hilbert space; the second part executes a quantum measurement interpreted as the output of the model. Usually, the measurement is trained to distinguish quantum-embedded data. We propose instead to train the embedding -- with the objective of maximally separating data classes in Hilbert space, a strategy we call quantum metric learning. As a result, the measurement minimizing a linear...
With near-term quantum devices available and the race for fault-tolerant quantum computers in full swing, researchers became interested in the question of what happens if we replace a supervised machine learning model with a quantum circuit. While such "quantum models" are sometimes called "quantum neural networks", it has been repeatedly noted that their mathematical structure is actually much more closely related to kernel methods: they analyse data in high-dimensional Hilbert spaces to which we only have access through inner...
Machine learning is frequently listed among the most promising applications for quantum computing. This is in fact a curious choice: today's machine learning algorithms are notoriously powerful in practice, but remain theoretically difficult to study. Quantum computing, in contrast, does not offer practical benchmarks on realistic scales, and theory is the main tool we have to judge whether it could become relevant for a problem. In this perspective we explain why it is so difficult to say something about the power of quantum computers with the tools currently...
Machine learning has been used in high energy physics for a long time, primarily at the analysis level with supervised classification. Quantum computing was postulated in the early 1980s as a way to perform computations that would not be tractable with a classical computer. With the advent of noisy intermediate-scale quantum devices, more quantum algorithms are being developed with the aim of exploiting the capacity of the hardware for machine learning applications. An interesting question is whether there are ways to apply quantum machine learning to High Energy Physics. This paper...
Optimization problems in disciplines such as machine learning are commonly solved with iterative methods. Gradient descent algorithms find local minima by moving along the direction of steepest descent, while Newton's method takes curvature information into account and thereby often improves convergence. Here, we develop quantum versions of these iterative optimization algorithms and apply them to polynomial optimization with a unit norm constraint. In each step, multiple copies of the current candidate are used to improve the candidate using phase estimation, an...
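The classical contrast between the two iterative methods the abstract quantizes can be shown in one dimension (a simplified classical illustration with a made-up objective, without the unit-norm constraint): Newton's rescaling by curvature converges far faster than a fixed-step gradient descent:

```python
import numpy as np

# Minimize f(x) = x^4 - 3x^2 + 2; a local minimum sits at x = sqrt(3/2)
df  = lambda x: 4 * x**3 - 6 * x    # gradient
d2f = lambda x: 12 * x**2 - 6       # curvature

x_gd, x_newton = 2.0, 2.0
for _ in range(20):
    x_gd -= 0.02 * df(x_gd)                    # gradient descent: fixed steepest-descent step
    x_newton -= df(x_newton) / d2f(x_newton)   # Newton: step rescaled by curvature

minimum = np.sqrt(1.5)
print(abs(x_gd - minimum), abs(x_newton - minimum))  # Newton error is far smaller
```

The quantum versions in the paper implement analogous update steps, with phase estimation supplying the information a classical iteration would read off the gradient and Hessian.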