- Quantum and electron transport phenomena
- Quantum Computing Algorithms and Architecture
- Quantum Information and Cryptography
- Physics of Superconductivity and Magnetism
- Surface and Thin Film Phenomena
- Quantum-Dot Cellular Automata
- Quantum Mechanics and Applications
- Neural Networks and Reservoir Computing
- Atomic and Subatomic Physics Research
- Graphene research and applications
- Semiconductor Quantum Structures and Devices
- Fault Detection and Control Systems
- Advanced Electrical Measurement Techniques
- High-Energy Particle Collisions Research
- Dark Matter and Cosmic Phenomena
- Neural Networks and Applications
- 2D Materials and Applications
- Numerical Methods and Algorithms
- Photonic and Optical Devices
- Spectroscopy and Quantum Chemical Studies
- Optical Network Technologies
- Advanced Thermodynamics and Statistical Mechanics
- Advanced Measurement and Metrology Techniques
- Structural Health Monitoring Techniques
- Particle physics theoretical and experimental studies
University of Rochester
2023
Durham University
2023
Pacific Standard
2023
University of California, San Diego
2023
MIT Lincoln Laboratory
2016-2022
Massachusetts Institute of Technology
2016-2022
Williams College
2014
The scalable application of quantum information science will stand on reproducible and controllable high-coherence quantum bits (qubits). Here, we revisit the design and fabrication of the superconducting flux qubit, achieving a planar device with broad frequency tunability, strong anharmonicity, high reproducibility, and relaxation times in excess of $40\,\mu$s at its flux-insensitive point. Qubit relaxation times $T_1$ across 22 qubits are consistently matched by a single model involving resonator loss, ohmic charge noise, and 1/f noise...
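For context, qubit relaxation due to several weakly coupled noise channels is commonly modeled as a sum of golden-rule contributions, each set by the noise power spectral density at the qubit frequency; the generic form below is illustrative only and is not quoted from this work:
$$
\Gamma_1 = \frac{1}{T_1} = \sum_{\lambda} \frac{1}{\hbar^2}\,\bigl|\langle 0|\,\hat{D}_\lambda\,|1\rangle\bigr|^2\, S_\lambda(\omega_{01}),
$$
where $\hat{D}_\lambda$ is the qubit operator through which noise source $\lambda$ (e.g., resonator loss, charge noise, flux noise) couples and $S_\lambda(\omega_{01})$ is that source's spectral density evaluated at the qubit transition frequency.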
As the field of superconducting quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and the close proximity of another chip. In this work, we...
High-fidelity two-qubit gates at scale are a key requirement to realize the full promise of quantum computation and simulation. The advent and use of coupler elements to tunably control two-qubit interactions has improved operational fidelity in many-qubit systems by reducing parasitic coupling and frequency-crowding issues. Nonetheless, two-qubit gate errors still limit the capability of near-term applications. The reason, in part, is that the existing framework for tunable couplers, based on the dispersive approximation, does not fully incorporate...
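As background on the dispersive-approximation framework referenced above, the commonly quoted effective coupling between two qubits mediated by a tunable coupler (a standard result within the rotating-wave and dispersive approximations, not a quotation from this paper) is
$$
\tilde{g} \;\approx\; g_{12} + \frac{g_1 g_2}{2}\left(\frac{1}{\Delta_1} + \frac{1}{\Delta_2}\right), \qquad \Delta_j = \omega_j - \omega_c,
$$
where $g_{1,2}$ are the qubit-coupler couplings, $g_{12}$ is the direct qubit-qubit coupling, and $\omega_c$ is the coupler frequency; tuning $\omega_c$ changes the magnitude and sign of the exchange term, allowing $\tilde{g}$ to be nulled or switched on for gates.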
Quantum annealing is an optimization technique which potentially leverages quantum tunneling to enhance computational performance. Existing quantum annealers use superconducting flux qubits with short coherence times, limited primarily by the use of large persistent currents $I_\mathrm{p}$. Here, we examine an alternative approach, using qubits with smaller $I_\mathrm{p}$ and longer coherence times. We demonstrate tunable coupling, a basic building block for quantum annealing, between two flux qubits with small ($\sim 50~\mathrm{nA}$) persistent currents...
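For reference, quantum annealing conventionally interpolates between a transverse driver Hamiltonian and an Ising problem Hamiltonian; the generic schedule below is included only for context (prefactor conventions vary) and is not taken from the truncated abstract:
$$
H(s) = -\,\frac{A(s)}{2}\sum_i \sigma_x^{(i)} \;+\; \frac{B(s)}{2}\Bigl(\sum_i h_i\,\sigma_z^{(i)} + \sum_{i<j} J_{ij}\,\sigma_z^{(i)}\sigma_z^{(j)}\Bigr),
$$
with $A(0)\gg B(0)$ and $A(1)\ll B(1)$ as the normalized time $s$ runs from 0 to 1. A tunable coupling between the persistent-current ($\sigma_z$) states of two flux qubits implements the $J_{ij}$ terms of such a Hamiltonian.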
Dynamical error suppression techniques are commonly used to improve coherence in quantum systems. They reduce dephasing errors by applying control pulses designed to reverse erroneous coherent evolution driven by environmental noise. However, such methods cannot correct for irreversible processes such as energy relaxation. In this work, we investigate a complementary, stochastic approach to reducing errors: instead of deterministically reversing the unwanted qubit evolution, we use control pulses to shape the noise environment...
As progress is made towards the first generation of error-corrected quantum computers, robust characterization and validation protocols are required to assess the noise environments of physical processors. While standard coherence metrics such as $T_1$ and $T_2$, process tomography, and randomized benchmarking are now ubiquitous, these techniques provide only partial information about the dynamic multi-qubit loss channels responsible for processor errors, which can be described more fully by a Lindblad operator in...
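For readers unfamiliar with the formalism referenced above, the Lindblad master equation describes the joint coherent and dissipative evolution of a density matrix $\rho$:
$$
\dot{\rho} = -\frac{i}{\hbar}\,[H,\rho] \;+\; \sum_k \gamma_k\!\left(L_k\,\rho\,L_k^{\dagger} - \tfrac{1}{2}\bigl\{L_k^{\dagger}L_k,\,\rho\bigr\}\right),
$$
where the jump operators $L_k$ and rates $\gamma_k$ encode the loss and dephasing channels that a tomographic reconstruction of the Lindbladian aims to identify.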
We consider LHC searches for dilepton resonances in an intermediate mass range, $\sim 10-80$ GeV. We adopt a kinetically mixed $Z'$ as an example of weakly coupled new physics that might have evaded detection at previous experiments but which could still be probed by dilepton spectrum measurements in this range. Based on Monte Carlo simulations, we estimate that existing data from the 7 and 8 TeV runs can be used to test values of the kinetic mixing parameter $\epsilon$ several times smaller than precision electroweak upper bounds,...
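For orientation, the kinetic mixing parameter $\epsilon$ referenced above typically enters the Lagrangian through a term of the form
$$
\mathcal{L}_{\rm mix} \;\supset\; \frac{\epsilon}{2}\, B_{\mu\nu}\, X^{\mu\nu},
$$
coupling the hypercharge field strength $B_{\mu\nu}$ to the new $U(1)$ field strength $X^{\mu\nu}$ of the $Z'$; sign and normalization conventions (e.g., factors of $\cos\theta_W$) differ between references, so this expression is illustrative rather than the specific convention of the paper.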
The equivalence between the instructions used to define programs and the input data on which they operate is a basic principle of classical computer architectures and programming. Replacing classical data with quantum states enables fundamentally new computational capabilities and scaling advantages for many applications, and numerous models have been proposed for realizing quantum computation. However, within each of these models, the quantum data are transformed by a set of gates that are compiled using solely classical information. Conventional quantum computing models thus break...
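Density matrix exponentiation, the protocol underlying this line of work, uses copies of a quantum state $\rho$ as the "instruction" generating the unitary $e^{-i\rho\theta}$ on target data $\sigma$. Its core identity, as formulated by Lloyd, Mohseni, and Rebentrost, replaces each small step of the evolution with a partial swap against a fresh copy of $\rho$:
$$
\mathrm{Tr}_1\!\left[e^{-iS\delta}\,(\rho\otimes\sigma)\,e^{iS\delta}\right] \;=\; \sigma - i\,\delta\,[\rho,\sigma] + \mathcal{O}(\delta^2) \;\approx\; e^{-i\rho\delta}\,\sigma\,e^{i\rho\delta},
$$
where $S$ is the swap operator. Repeating the step $N = \theta/\delta$ times with fresh copies of $\rho$ approximates $e^{-i\rho\theta}\,\sigma\,e^{i\rho\theta}$ with error scaling as $\mathcal{O}(\theta^2/N)$.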
Density matrix exponentiation may offer a natively quantum approach to programming quantum computers. A new experiment presents the first demonstration of the protocol in a superconducting processor.