- Neural Networks and Applications
- Advanced Memory and Neural Computing
- Advanced Neural Network Applications
- Machine Learning and Data Classification
- Ferroelectric and Negative Capacitance Devices
- Neural Networks and Reservoir Computing
- Neutrino Physics Research
- Particle Physics: Theoretical and Experimental Studies
- Adversarial Robustness in Machine Learning
- Astrophysics and Cosmic Phenomena
- Time Series Analysis and Forecasting
- Machine Learning and Algorithms
- Combustion and Flame Dynamics
- Anomaly Detection Techniques and Applications
- Machine Learning in Materials Science
- Wireless Signal Modulation Classification
- Particle Detector Development and Performance
- Blind Source Separation Techniques
- Catalytic Processes in Materials Science
- Advanced Combustion Engine Technologies
- Data Management and Algorithms
- Data Stream Mining Techniques
- Remote-Sensing Image Classification
- Radar Systems and Signal Processing
- AI in Cancer Detection
- Oak Ridge National Laboratory (2015-2024)
- University of Tennessee at Knoxville (2009-2020)
- National Technical Information Service (2019)
- Office of Scientific and Technical Information (2019)
- Toronto Metropolitan University (2018)
- Knoxville College (2015)
- University of Idaho (2006)
- University of Oxford (1998)
- The University of Queensland (1993-1996)
There has been a recent surge of success in utilizing Deep Learning (DL) in imaging and speech applications for its relatively automatic feature generation and, particularly with convolutional neural networks (CNNs), its high-accuracy classification abilities. While these models learn their parameters through data-driven methods, model selection (such as architecture construction) and hyper-parameter choices remain a tedious and highly intuition-driven task. To address this, Multi-node Evolutionary Neural Networks...
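The evolutionary approach to model selection described above can be illustrated with a toy genetic algorithm over hyper-parameter dictionaries. This is a minimal sketch only: MENNDL itself is a scalable, asynchronous system that evaluates real networks on many GPUs, and the `toy_fitness` function and hyper-parameter names here are invented for illustration.

```python
import random

def evolve(fitness, init_pop, generations=30, mut_rate=0.3, seed=0):
    """Minimal generational genetic algorithm over hyper-parameter dicts.

    Keeps the fitter half of the population each generation (elitism),
    then fills the rest with crossover + mutation of surviving parents.
    """
    rng = random.Random(seed)
    pop = list(init_pop)
    for _ in range(generations):
        survivors = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = rng.sample(survivors, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in a}   # crossover
            if rng.random() < mut_rate:                        # mutation
                k = rng.choice(list(child))
                child[k] = max(1, child[k] + rng.choice([-1, 1]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for validation accuracy: prefer 3 layers with 32 filters.
def toy_fitness(h):
    return -abs(h["layers"] - 3) - abs(h["filters"] - 32) / 8

rng = random.Random(1)
pop0 = [{"layers": rng.randint(1, 8), "filters": rng.choice([8, 16, 64, 128])}
        for _ in range(16)]
best = evolve(toy_fitness, pop0)
```

Because the fitter half always survives, the best individual's fitness never decreases across generations.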
An analog implementation of a deep machine-learning system for efficient feature extraction is presented in this work. It features online unsupervised trainability and non-volatile floating-gate storage. It utilizes a massively parallel reconfigurable current-mode architecture to realize computation, and leverages algorithm-level feedback to provide robustness to circuit imperfections in analog signal processing. A 3-layer, 7-node engine was fabricated in a 0.13 μm standard CMOS process, occupying 0.36 mm² of active area....
Current deep learning approaches have been very successful using convolutional neural networks trained on large graphical-processing-unit-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; (2) the networks are manually configured to achieve optimal results, and (3) the implementation of the network model is expensive in both cost and power. In this article, we evaluate deep learning models on three different computing...
While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets only make up a fraction of those to which deep learning can be applied. These include text data, audio data, and arrays of sensors that have very different characteristics than images. As the "best" networks for images were largely discovered through experimentation and cannot be proven optimal on some theoretical basis, there is no reason to believe they are the optimal network for these drastically different datasets. Hyperparameter...
Clustering is a pivotal building block in many data mining applications and in machine learning in general. Most clustering algorithms in the literature pertain to off-line (or batch) processing, in which the algorithm repeatedly sweeps through a set of data samples in an attempt to capture their underlying structure in a compact and efficient way. However, many recent applications require that the clustering algorithm be online, or incremental, in which there is no a priori set of samples but rather the samples are provided one iteration at a time. Accordingly, the algorithm is expected to gradually improve its prototype...
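A simple instance of the online setting described above is incremental k-means, where prototypes are updated one sample at a time with a per-cluster step size of 1/n, so the full dataset never needs to be stored or re-swept. This is a generic sketch, not the specific algorithm of the paper; the data stream is synthetic.

```python
import random

def online_kmeans(stream, k=3):
    """Incremental (online) k-means.

    The first k samples seed the prototypes; every subsequent sample
    nudges its nearest prototype toward itself with a running-mean
    learning rate of 1/count, improving the prototypes gradually.
    """
    centers, counts = [], []
    for x in stream:
        if len(centers) < k:              # seed prototypes from the stream
            centers.append(list(x))
            counts.append(1)
            continue
        j = min(range(k), key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(centers[i], x)))
        counts[j] += 1
        eta = 1.0 / counts[j]             # running-mean step size
        centers[j] = [c + eta * (a - c) for c, a in zip(centers[j], x)]
    return centers

# Three well-separated 1-D clusters, delivered one sample at a time.
rng = random.Random(42)
stream = [(rng.gauss(mu, 0.1),) for _ in range(300) for mu in (0.0, 5.0, 10.0)]
centers = online_kmeans(stream, k=3)
```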
The pursuit of more advanced electronics, finding solutions to energy needs, and tackling a wealth of social issues often hinges upon the discovery and optimization of new functional materials that enable disruptive technologies or applications. However, the rate of discovery of these materials is alarmingly low. Much of the information that could drive this rate higher is scattered across tens of thousands of papers in the extant literature published over several decades, and almost all of it is not collated and thus cannot be used in its entirety. Many of these limitations can...
A consistent challenge for both new and expert practitioners of small-angle scattering (SAS) lies in determining how to analyze the data, given the limited information content of said data and the large number of models that can be employed. Machine learning (ML) methods are powerful tools for classifying data that have found diverse applications in many fields of science. Here, ML methods are applied to the problem of identifying the SAS model most appropriate to use in analysis. The approach employed is built around the method of weighted k nearest neighbors (wKNN), and utilizes a...
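The wKNN method named above can be sketched generically: each of the k closest training points votes for its label with a weight of 1/distance, so nearer neighbours count more. The feature vectors and class names below are invented for illustration and are not the paper's SAS representation.

```python
from collections import defaultdict

def wknn_classify(train, query, k=3):
    """Weighted k-nearest-neighbours classification.

    train: list of (feature_vector, label) pairs.
    Votes are weighted by inverse Euclidean distance to the query.
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, label)
        for x, label in train)
    votes = defaultdict(float)
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d + 1e-9)   # guard against zero distance
    return max(votes, key=votes.get)

# Two toy classes in 2-D; the query sits near the "sphere" examples.
train = [((0.0, 0.0), "sphere"), ((0.1, 0.2), "sphere"),
         ((1.0, 1.0), "cylinder"), ((0.9, 1.1), "cylinder")]
label = wknn_classify(train, (0.2, 0.1), k=3)
```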
Training deep learning networks is a difficult task due to computational complexity, and this is traditionally handled by simplifying the network topology to enable parallel computation on graphical processing units (GPUs). However, the emergence of quantum annealing devices allows us to reconsider complex topologies. We illustrate in particular a network that can be trained to classify MNIST data (an image dataset of handwritten digits) and neutrino detection data using a restricted form of adiabatic computation known as quantum annealing performed on a D-Wave...
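Quantum annealers such as the D-Wave minimize problems posed in QUBO form (binary x minimizing xᵀQx). A classical simulated-annealing stand-in on a toy two-variable QUBO conveys the idea; this is an illustration of the problem form only, not how the adiabatic hardware or the paper's training procedure actually works.

```python
import random

def anneal_qubo(Q, n_steps=2000, seed=0):
    """Classical simulated annealing for a QUBO: minimize x^T Q x, x binary.

    A linear temperature schedule occasionally accepts worse flips early
    on; a final greedy sweep settles into a local (here global) minimum.
    """
    rng = random.Random(seed)
    n = len(Q)
    energy = lambda v: sum(Q[i][j] * v[i] * v[j]
                           for i in range(n) for j in range(n))
    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    for step in range(n_steps):
        t = 1.0 - step / n_steps          # "annealing schedule"
        i = rng.randrange(n)
        x[i] ^= 1                         # propose a single-bit flip
        e2 = energy(x)
        if e2 <= e or rng.random() < t * 0.1:
            e = e2
        else:
            x[i] ^= 1                     # reject the flip
    for i in range(n):                    # final greedy sweep
        x[i] ^= 1
        e2 = energy(x)
        if e2 < e:
            e = e2
        else:
            x[i] ^= 1
    return x, e

# Toy 2-variable QUBO whose minima are x = (1,0) and (0,1), energy -1.
Q = [[-1, 2],
     [0, -1]]
x, e = anneal_qubo(Q)
```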
An artificial intelligence system called MENNDL, which used 25,200 NVIDIA Volta GPUs on Oak Ridge National Laboratory's Summit machine, automatically designed an optimal deep learning network in order to extract structural information from raw atomic-resolution microscopy data. In a few hours, MENNDL creates and evaluates millions of networks using a scalable, parallel, asynchronous genetic algorithm augmented with a support vector machine to find a superior topology and hyper-parameter set than a human...
We present a simulation-based study using deep convolutional neural networks (DCNNs) to identify neutrino interaction vertices in the MINERvA passive targets region, and illustrate the application of domain adversarial neural networks (DANNs) in this context. DANNs are designed to be trained in one domain (simulated data) but tested in a second domain (physics data); they utilize unlabeled data from the second domain so that during training only features which are unable to discriminate between the domains are promoted. MINERvA is a neutrino-nucleus scattering experiment in the NuMI beamline at...
Deep convolutional neural networks (CNNs) have become extremely popular and successful at a number of machine learning tasks. One of the great challenges in successfully deploying a CNN is designing the network: specifying the network topology (the sequence of layer types) and configuring the network (setting all internal hyper-parameters). There are several techniques which are commonly used to design a network. The most simple (but lengthy) is random search. In this paper we demonstrate how random search can be dramatically improved by a two-phase approach. The first...
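The two-phase idea can be sketched on a toy objective: phase one samples the search space uniformly at random, phase two perturbs the best point found so far. The objective and parameter names below are invented stand-ins for validation accuracy and real network hyper-parameters; the paper's procedure operates over full topologies.

```python
import random

def two_phase_search(score, space, n_coarse=40, n_refine=40, seed=0):
    """Two-phase hyper-parameter search sketch.

    Phase 1: uniform random sampling over the whole space.
    Phase 2: Gaussian perturbation (local refinement) around the best
    point, clamped back into the search space.
    """
    rng = random.Random(seed)
    sample = lambda: {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
    best = max((sample() for _ in range(n_coarse)), key=score)
    for _ in range(n_refine):
        cand = {k: min(max(v + rng.gauss(0, 0.1 * (space[k][1] - space[k][0])),
                           space[k][0]), space[k][1])
                for k, v in best.items()}
        if score(cand) > score(best):
            best = cand
    return best

# Toy objective with its optimum at lr = 0.1, dropout = 0.5.
score = lambda h: -((h["lr"] - 0.1) ** 2 + (h["dropout"] - 0.5) ** 2)
space = {"lr": (0.0, 1.0), "dropout": (0.0, 1.0)}
best = two_phase_search(score, space)
```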
Novel uses of graphical processing units for accelerated computation have revolutionized the field of high-performance scientific computing by providing specialized workflows tailored to algorithmic requirements. As the era of Moore’s law draws to a close, many new non–von Neumann processors are emerging as potential computational accelerators, including those based on the principles of neuromorphic computing, tensor algebra, and quantum information. While the development of these technologies is continuing to mature, the impact anticipated...
A constructive neural-network algorithm is presented. For any consistent classification task on real-valued training vectors, the algorithm constructs a feedforward network with a single hidden layer of threshold units which implements the task. The algorithm, which we call CARVE, extends the "sequential learning" algorithm of Marchand et al. from Boolean inputs to the real-valued input case, and uses convex hull methods for the determination of the network weights. It is an efficient training scheme for producing near-minimal network solutions for arbitrary classification tasks. The algorithm is applied to a number of benchmark...
Just-in-time defect prediction, which is also known as change-level defect prediction, can be used to efficiently allocate resources and manage project schedules in the software testing and debugging process. Defect prediction can reduce the amount of code requiring review and simplify the assignment of developers to bug fixes. This paper reports a replicated experiment and an extension comparing the identification of defect-prone changes using traditional machine learning techniques and ensemble learning. Using datasets from six open source projects, namely Bugzilla, Columba, JDT,...
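The ensemble idea being compared above can be illustrated with bagging: train several weak classifiers on bootstrap resamples and combine them by majority vote. The sketch below uses 1-nearest-neighbour base learners on a made-up one-feature "change size" dataset; the paper's experiments use standard learners and real change metrics, so everything here is illustrative.

```python
import random

def bagged_nn_predict(train, x, n_models=15, seed=0):
    """Bagging sketch: majority vote over 1-NN classifiers, each fit on a
    bootstrap resample of the training set.

    train: list of (feature_value, label) pairs with label in {0, 1}.
    """
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]          # bootstrap sample
        nearest = min(boot, key=lambda s: abs(s[0] - x))   # 1-NN prediction
        votes += nearest[1]
    return votes * 2 > n_models                            # majority vote

# Toy data: changes touching many lines (feature) tend to be defect-prone.
train = [(5, 0), (10, 0), (20, 0), (30, 0),
         (60, 1), (70, 1), (80, 1), (90, 1)]
pred = bagged_nn_predict(train, 75)
```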
Deep learning, through the use of neural networks, has demonstrated remarkable ability to automate many routine tasks when presented with sufficient data for training. The network architecture (e.g. number of layers, types of connections between layers, etc.) plays a critical role in determining what, if anything, the network is able to learn from the training data. The trend in network architectures, especially those trained on ImageNet, has been to grow ever deeper and more complex. The result has been increasing accuracy on benchmark datasets at the cost of increased...
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate using three...
Deep learning offers new tools to improve our understanding of many important scientific problems. Neutrinos are the most abundant particles in existence and are hypothesized to explain the matter-antimatter asymmetry that dominates our universe. Definitive tests of this conjecture require a detailed understanding of neutrino interactions with a variety of nuclei. Many measurements of interest depend on vertex reconstruction, that is, finding the origin of a neutrino interaction using data from the detector, which can be represented as images. Traditionally, vertex reconstruction has...
Neuromorphic computing offers one path forward for AI at the edge. However, accessing and effectively utilizing a neuromorphic hardware platform is non-trivial. In this work, we present a complete pipeline for neuromorphic computing at the edge, including a small, inexpensive, low-power, FPGA-based neuromorphic platform, a training algorithm for designing spiking neural networks for the hardware, and a software framework for connecting those components. We demonstrate the pipeline on a real-world application, engine control for a spark-ignition internal combustion engine, and illustrate...
Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power-efficiency advantages of multiple...
In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network achieves results comparable to a previously reported convolutional neural network model, with significantly fewer neurons and synapses required.
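The basic computational unit behind such spiking networks can be sketched as a leaky integrate-and-fire (LIF) neuron: the membrane potential decays each step, accumulates input current, and emits a spike (then resets) on crossing a threshold. This is the generic textbook model, not the paper's device-level memristive neuron, and the constant-drive input is invented for illustration.

```python
def lif_spikes(current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Leaky integrate-and-fire neuron over a discrete input sequence.

    Each step: v <- leak * v + input; if v crosses threshold, emit a
    spike (1) and reset the membrane potential, else emit 0.
    """
    v, spikes = 0.0, []
    for i in current:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant drive of 0.4 per step: the neuron integrates for three steps,
# fires, resets, and repeats, producing a regular spike train.
spike_train = lif_spikes([0.4] * 12)
```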
As deep neural networks have been deployed in more and more applications over the past half decade and are finding their way into an ever increasing number of operational systems, their energy consumption becomes a concern, whether running in a datacenter or on edge devices. Hyperparameter optimization and automated network design for deep learning is a quickly growing field, but much of the focus has remained only on optimizing the performance of the machine learning task. In this work, we demonstrate that the best performing networks created through this automated process...
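Treating accuracy and energy jointly, as argued above, is a multi-objective problem: among candidate networks one keeps the non-dominated (Pareto-optimal) trade-offs. The sketch below computes a Pareto front when minimizing both objectives; the (error, energy) numbers for the candidate networks are made up for illustration.

```python
def pareto_front(points):
    """Return the points not dominated by any other point, minimizing
    both coordinates. A point p is dominated if some other point is at
    least as good in both objectives."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (error, energy-per-inference) pairs for hypothetical candidate networks.
candidates = [(0.10, 5.0), (0.12, 2.0), (0.08, 9.0), (0.12, 4.0), (0.15, 1.5)]
front = pareto_front(candidates)
```

Here (0.12, 4.0) is dropped because (0.12, 2.0) matches its error at lower energy; the remaining four points are all defensible trade-offs.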