Steven R. Young

ORCID: 0000-0003-0591-4330
Research Areas
  • Neural Networks and Applications
  • Advanced Memory and Neural Computing
  • Advanced Neural Network Applications
  • Machine Learning and Data Classification
  • Ferroelectric and Negative Capacitance Devices
  • Neural Networks and Reservoir Computing
  • Neutrino Physics Research
  • Particle physics theoretical and experimental studies
  • Adversarial Robustness in Machine Learning
  • Astrophysics and Cosmic Phenomena
  • Time Series Analysis and Forecasting
  • Machine Learning and Algorithms
  • Combustion and flame dynamics
  • Anomaly Detection Techniques and Applications
  • Machine Learning in Materials Science
  • Wireless Signal Modulation Classification
  • Particle Detector Development and Performance
  • Blind Source Separation Techniques
  • Catalytic Processes in Materials Science
  • Advanced Combustion Engine Technologies
  • Data Management and Algorithms
  • Data Stream Mining Techniques
  • Remote-Sensing Image Classification
  • Radar Systems and Signal Processing
  • AI in cancer detection

Oak Ridge National Laboratory
2015-2024

University of Tennessee at Knoxville
2009-2020

National Technical Information Service
2019

Office of Scientific and Technical Information
2019

Toronto Metropolitan University
2018

Knoxville College
2015

University of Idaho
2006

University of Oxford
1998

The University of Queensland
1993-1996

There has been a recent surge of success in utilizing Deep Learning (DL) in imaging and speech applications, owing to its relatively automatic feature generation and, particularly with convolutional neural networks (CNNs), its high-accuracy classification abilities. While these models learn their parameters through data-driven methods, model selection (in the form of architecture construction) and hyper-parameter choices remain a tedious and highly intuition-driven task. To address this, Multi-node Evolutionary Neural Networks...

10.1145/2834892.2834896 article EN 2015-11-05
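The abstract above describes evolving network architecture and hyper-parameter choices rather than hand-tuning them. MENNDL itself is not reproduced here; the following is a minimal, self-contained sketch of an evolutionary hyper-parameter search, with a synthetic `fitness` standing in for actually training and validating a network (the search space and all names are illustrative assumptions):

```python
import random

# Toy search space standing in for CNN hyper-parameters (illustrative only).
SPACE = {
    "layers": [2, 3, 4, 5],
    "filters": [8, 16, 32, 64],
    "kernel": [3, 5, 7],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Stand-in for training a network and measuring validation accuracy;
    # this synthetic score prefers deeper networks with more filters.
    return ind["layers"] * 10 + ind["filters"] - abs(ind["kernel"] - 5)

def mutate(ind):
    child = dict(ind)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def crossover(a, b):
    # Uniform crossover: each hyper-parameter comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def evolve(generations=20, pop_size=12):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real system the fitness evaluations (full network trainings) dominate the cost, which is why MENNDL distributes them asynchronously across many nodes.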

An analog implementation of a deep machine-learning system for efficient feature extraction is presented in this work. It features online unsupervised trainability and non-volatile floating-gate storage. It utilizes a massively parallel reconfigurable current-mode architecture to realize efficient computation, and leverages algorithm-level feedback to provide robustness to circuit imperfections in analog signal processing. A 3-layer, 7-node engine was fabricated in a 0.13 μm standard CMOS process, occupying 0.36 mm² of active area...

10.1109/jssc.2014.2356197 article EN publisher-specific-oa IEEE Journal of Solid-State Circuits 2014-10-09

Current deep learning approaches have been very successful using convolutional neural networks trained on large graphical-processing-unit-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; (2) the networks are manually configured to achieve optimal results; and (3) the implementation of the model is expensive in both cost and power. In this article, we evaluate models on three different computing...

10.1145/3178454 article EN ACM Journal on Emerging Technologies in Computing Systems 2018-04-30

While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets only make up a fraction of those to which deep learning can be applied. These include text data, audio data, and arrays of sensors that have very different characteristics than images. As the "best" networks for images have largely been discovered through experimentation and cannot be proven optimal on some theoretical basis, there is no reason to believe that they are the optimal network for drastically different datasets. Hyperparameter...

10.1145/3146347.3146355 article EN 2017-10-31

Clustering is a pivotal building block in many data mining applications and in machine learning in general. Most clustering algorithms in the literature pertain to off-line (or batch) processing, in which the algorithm repeatedly sweeps through a set of samples in an attempt to capture its underlying structure in a compact and efficient way. However, many recent applications require that the algorithm be online, or incremental, in that there is no a priori set of samples; rather, samples are provided one at each iteration. Accordingly, the algorithm is expected to gradually improve the prototype...

10.1109/itng.2010.148 article EN 2010-01-01
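The incremental setting the abstract describes, prototypes refined one sample at a time, can be sketched as online k-means with a per-prototype 1/n learning rate. This is a standard textbook construction, not the specific algorithm of the paper:

```python
import math

class OnlineKMeans:
    """Incremental k-means sketch: prototypes are updated one sample at a
    time, with a per-prototype learning rate of 1/n (n = samples assigned
    to that prototype so far), so each prototype tracks the running mean
    of its assigned samples."""

    def __init__(self, init_prototypes):
        self.prototypes = [list(p) for p in init_prototypes]
        self.counts = [0] * len(init_prototypes)

    def _nearest(self, x):
        dists = [math.dist(x, p) for p in self.prototypes]
        return dists.index(min(dists))

    def update(self, x):
        j = self._nearest(x)
        self.counts[j] += 1
        eta = 1.0 / self.counts[j]
        # Move the winning prototype toward the new sample.
        self.prototypes[j] = [p + eta * (xi - p)
                              for p, xi in zip(self.prototypes[j], x)]
        return j

# Feed a small stream of samples drawn near two centers.
km = OnlineKMeans([[0.0, 0.0], [10.0, 10.0]])
for x in [[0.1, 0.2], [9.8, 10.1], [0.0, -0.1], [10.2, 9.9]]:
    km.update(x)
```

Because the learning rate decays as 1/n, each prototype ends up at the exact mean of the samples assigned to it, without ever storing the stream.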

The pursuit of more advanced electronics, the search for solutions to energy needs, and the tackling of a wealth of social issues often hinge upon the discovery and optimization of new functional materials that enable disruptive technologies or applications. However, the rate of discovery of these materials is alarmingly low. Much of the information that could drive this rate higher is scattered across the tens of thousands of papers in the extant literature published over several decades, and almost all of it is not collated and thus cannot be used in its entirety. Many of these limitations can...

10.1063/1.5009942 article EN publisher-specific-oa Journal of Applied Physics 2018-03-20

A consistent challenge for both new and expert practitioners of small-angle scattering (SAS) lies in determining how to analyze the data, given the limited information content of said data and the large number of models that can be employed. Machine learning (ML) methods are powerful tools for classifying data that have found diverse applications in many fields of science. Here, ML methods are applied to the problem of selecting the SAS model most appropriate for use in analysis. The approach employed is built around the method of weighted k nearest neighbors (wKNN), and utilizes a...

10.1107/s1600576720000552 article EN Journal of Applied Crystallography 2020-02-18
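The core classifier named in the abstract, weighted k nearest neighbors, fits in a few lines. The inverse-distance weighting below is one common choice, not necessarily the exact weighting scheme used in the paper:

```python
import math
from collections import defaultdict

def wknn_predict(query, samples, labels, k=3):
    """Weighted k-nearest-neighbors sketch: each of the k nearest training
    samples votes for its label with weight 1/(distance + eps), so closer
    neighbors count for more."""
    eps = 1e-9  # avoids division by zero for an exact match
    dists = sorted((math.dist(query, s), y) for s, y in zip(samples, labels))
    votes = defaultdict(float)
    for d, y in dists[:k]:
        votes[y] += 1.0 / (d + eps)
    return max(votes, key=votes.get)

# Two well-separated classes; labels stand in for candidate SAS models.
samples = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
labels = ["sphere", "sphere", "sphere", "cylinder", "cylinder", "cylinder"]
```

In the paper's setting the "samples" would be feature vectors derived from simulated scattering curves, and the labels the models that generated them.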

Training deep learning networks is a difficult task due to computational complexity, and this is traditionally handled by simplifying network topology to enable parallel computation on graphical processing units (GPUs). However, the emergence of quantum devices allows reconsideration of complex topologies. We illustrate in particular that complex topologies can be trained to classify MNIST data (an image dataset of handwritten digits) and for neutrino detection using a restricted form of adiabatic quantum computation known as quantum annealing performed on a D-Wave...

10.3390/e20050380 article EN cc-by Entropy 2018-05-18

An artificial intelligence system called MENNDL, which used 25,200 NVIDIA Volta GPUs on Oak Ridge National Laboratory's Summit machine, automatically designed an optimal deep learning network in order to extract structural information from raw atomic-resolution microscopy data. In a few hours, MENNDL creates and evaluates millions of networks using a scalable, parallel, asynchronous genetic algorithm augmented with a support vector machine to find a superior network topology and hyper-parameter set than a human...

10.1109/sc.2018.00053 article EN 2018-11-01

We present a simulation-based study using deep convolutional neural networks (DCNNs) to identify neutrino interaction vertices in the MINERvA passive targets region, and illustrate the application of domain adversarial neural networks (DANNs) in this context. DANNs are designed to be trained on one domain (simulated data) but tested on a second domain (physics data); they utilize unlabeled data from the second domain so that during training only features which are unable to discriminate between the domains are promoted. MINERvA is a neutrino-nucleus scattering experiment in the NuMI beamline at...

10.1088/1748-0221/13/11/p11020 article EN Journal of Instrumentation 2018-11-26

Deep convolutional neural networks (CNNs) have become extremely popular and successful at a number of machine learning tasks. One of the great challenges in successfully deploying a CNN is designing the network: specifying the network topology (the sequence of layer types) and configuring it (setting all internal hyper-parameters). There are several techniques which are commonly used to design a network. The most simple (but lengthy) is random search. In this paper we demonstrate how random search can be dramatically improved by a two-phase approach. The first...

10.1145/3146347.3146352 article EN 2017-10-31
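The two-phase idea, broad random sampling followed by sampling concentrated around the best point found, can be sketched as follows. The search space, the synthetic objective, and the Gaussian refinement window are all illustrative assumptions, not the paper's setup:

```python
import random

def evaluate(params):
    # Stand-in for training a CNN and returning validation accuracy;
    # a synthetic objective peaked at lr = 0.1, width = 64.
    lr, width = params
    return -((lr - 0.1) ** 2) - ((width - 64) / 64) ** 2

def two_phase_search(n_coarse=200, n_fine=200, seed=0):
    rng = random.Random(seed)
    # Phase 1: broad random sampling over the whole space.
    coarse = [(rng.uniform(0.0, 1.0), rng.uniform(8, 256))
              for _ in range(n_coarse)]
    best_coarse = max(coarse, key=evaluate)
    # Phase 2: random sampling restricted to a window around the
    # phase-1 best (Gaussian perturbations, clipped to the space).
    lr0, w0 = best_coarse
    fine = [(min(max(rng.gauss(lr0, 0.02), 0.0), 1.0),
             min(max(rng.gauss(w0, 8.0), 8), 256))
            for _ in range(n_fine)]
    return max(fine + [best_coarse], key=evaluate)

best = two_phase_search()
```

The second phase spends its evaluation budget near an already-promising region, which is where plain random search wastes most of its samples.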

Novel uses of graphical processing units for accelerated computation revolutionized the field of high-performance scientific computing by providing specialized workflows tailored to algorithmic requirements. As the era of Moore's law draws to a close, many new non–von Neumann processors are emerging as potential computational accelerators, including those based on the principles of neuromorphic computing, tensor algebra, and quantum information. While the development of these processors is continuing to mature, the impact anticipated...

10.1145/3380940 article EN ACM Transactions on Parallel Computing 2020-03-29

A constructive neural-network algorithm is presented. For any consistent classification task on real-valued training vectors, the algorithm constructs a feedforward network with a single hidden layer of threshold units which implements the task. The algorithm, which we call CARVE, extends the "sequential learning" algorithm of Marchand et al. from Boolean inputs to the real-valued input case, and uses convex hull methods for the determination of the network weights. It is an efficient training scheme for producing near-minimal network solutions for arbitrary classification tasks. It is applied to a number of benchmark...

10.1109/72.728361 article EN IEEE Transactions on Neural Networks 1998-01-01
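The sequential carving idea can be illustrated in heavily simplified form: repeatedly find a half-space containing only points of one class among those remaining, record it as a hidden threshold unit, and remove the carved points. CARVE itself uses convex-hull methods to choose general hyperplanes; the axis-aligned version below is only meant to show the control flow, and the tie-breaking and threshold placement are illustrative choices:

```python
def carve(samples, labels):
    """Much-simplified, axis-aligned sketch of sequential carving:
    each pass finds the axis-aligned half-space that cuts off the
    largest single-class group of remaining points, records it as a
    threshold unit (feature, threshold, sign, class), and removes
    the carved points."""
    remaining = list(range(len(samples)))
    units = []
    while remaining:
        best = None
        for f in range(len(samples[0])):
            for sign in (1, -1):
                # Walk points from most extreme along sign*feature inward,
                # stopping at the first class change: a pure carvable group.
                order = sorted(remaining, key=lambda i: -sign * samples[i][f])
                group = [order[0]]
                for i in order[1:]:
                    if labels[i] != labels[group[0]]:
                        break
                    group.append(i)
                if best is None or len(group) > len(best[0]):
                    best = (group, f, sign)
        group, f, sign = best
        last = samples[group[-1]][f]
        rest = [samples[i][f] for i in remaining if i not in group]
        # Place the threshold midway between the carved group and the rest.
        thr = (last + sign * max(sign * r for r in rest)) / 2 if rest else last
        units.append((f, thr, sign, labels[group[0]]))
        remaining = [i for i in remaining if i not in group]
    return units

def classify(x, units):
    # Units are consulted in carving order; the final unit covers the rest.
    for f, thr, sign, label in units:
        if sign * (x[f] - thr) >= 0:
            return label
```

On 1-D data with two separated classes, two units suffice: one carves off one class, and the remaining unit covers the other.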

Just-in-time defect prediction, which is also known as change-level defect prediction, can be used to efficiently allocate resources and manage project schedules in the software testing and debugging process. Defect prediction can reduce the amount of code to review and simplify the assignment of developers to bug fixes. This paper reports a replicated experiment and an extension comparing the identification of defect-prone changes using traditional machine learning techniques and ensemble learning. Using datasets from six open source projects, namely Bugzilla, Columba, JDT,...

10.1145/3194104.3194110 article EN 2018-05-28
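The comparison in the abstract is between single learners and ensembles of them. A minimal bagging ensemble of decision stumps, a generic construction rather than the specific ensemble methods evaluated in the paper, looks like this; the single numeric feature stands in for a change-level metric such as churn:

```python
import random

def train_stump(data):
    """Fit a one-feature threshold classifier ("decision stump") by
    exhaustive search over features, thresholds, and directions; stands
    in for any base learner."""
    best = None
    n_features = len(data[0][0])
    for f in range(n_features):
        for x, _ in data:
            thr = x[f]
            for sign in (1, -1):
                correct = sum(1 for xi, yi in data
                              if (1 if sign * (xi[f] - thr) > 0 else 0) == yi)
                if best is None or correct > best[0]:
                    best = (correct, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) > 0 else 0

def bagged_ensemble(data, n_models=15, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data,
    then predict by majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]  # sample with replacement
        models.append(train_stump(boot))
    def predict(x):
        votes = sum(m(x) for m in models)
        return 1 if votes * 2 > len(models) else 0
    return predict

# Label 1 = defect-prone change; feature = lines changed (illustrative).
data = [([10], 0), ([20], 0), ([30], 0), ([40], 0),
        ([150], 1), ([160], 1), ([170], 1), ([180], 1)]
predict = bagged_ensemble(data)
```

Averaging many weak learners trained on resampled data reduces the variance of any single learner, which is the usual argument for ensembles in defect prediction.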

Deep learning, through the use of neural networks, has demonstrated remarkable ability to automate many routine tasks when presented with sufficient data for training. The network architecture (e.g. number of layers, types of connections between layers, etc.) plays a critical role in determining what, if anything, the network is able to learn from the training data. The trend in network architectures, especially those trained on ImageNet, has been to grow ever deeper and more complex. The result has been increasing accuracy on benchmark datasets at the cost of increased...

10.1109/bigdata47090.2019.9006467 article EN 2019 IEEE International Conference on Big Data (Big Data) 2019-12-01

Current Deep Learning models use highly optimized convolutional neural networks (CNNs) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but they are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate using three...

10.1109/mlhpc.2016.009 preprint EN 2016-11-01

Deep learning offers new tools to improve our understanding of many important scientific problems. Neutrinos are among the most abundant particles in existence and are hypothesized to explain the matter-antimatter asymmetry that dominates our universe. Definitive tests of this conjecture require a detailed understanding of neutrino interactions with a variety of nuclei. Many measurements of interest depend on vertex reconstruction, that is, finding the origin of a neutrino interaction using data from the detector, which can be represented as images. Traditionally, this has...

10.1109/ijcnn.2017.7966131 article EN 2017 International Joint Conference on Neural Networks (IJCNN) 2017-05-01

Neuromorphic computing offers one path forward for AI at the edge. However, accessing and effectively utilizing a neuromorphic hardware platform is non-trivial. In this work, we present a complete pipeline for neuromorphic computing at the edge, including a small, inexpensive, low-power, FPGA-based neuromorphic platform, a training algorithm for designing spiking neural networks for the hardware, and a software framework for connecting those components. We demonstrate the pipeline on a real-world application, engine control for a spark-ignition internal combustion engine, and illustrate...

10.1109/igsc51522.2020.9291228 article EN 2020-10-19

Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple...

10.1109/tnnls.2013.2283730 article EN IEEE Transactions on Neural Networks and Learning Systems 2013-10-11

In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and synapses required.

10.1145/3183584.3183612 article EN 2017-07-17
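A spiking neuron of the kind used in such models can be sketched as a leaky integrate-and-fire unit: the membrane potential leaks each time step, integrates input, and emits a spike (then resets) when it crosses threshold. The leak factor and threshold below are arbitrary illustrative values, and this is not the memristive implementation from the paper:

```python
def lif_trace(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron over a sequence
    of input currents; returns the spike train (1 = spike at that step)."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)      # fire and reset
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```

Sparse, event-driven activity of this kind is what lets spiking implementations use far fewer active neurons and synapses than a dense convolutional network for the same temporal task.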

As deep neural networks have been deployed in more and more applications over the past half decade and are finding their way into an ever increasing number of operational systems, their energy consumption becomes a concern whether they are running in a datacenter or on edge devices. Hyperparameter optimization and automated network design for deep learning is a quickly growing field, but much of the focus has remained only on optimizing performance on the machine learning task. In this work, we demonstrate that the best performing networks created through this process...

10.1109/bigdata47090.2019.9006239 article EN 2019 IEEE International Conference on Big Data (Big Data) 2019-12-01