- Evolutionary Algorithms and Applications
- Metaheuristic Optimization Algorithms Research
- Machine Learning and Data Classification
- Machine Learning and ELM
- Domain Adaptation and Few-Shot Learning
- Neural Networks and Applications
- Robotics and Sensor-Based Localization
- Reinforcement Learning in Robotics
- Artificial Intelligence in Healthcare and Education
- Cell Image Analysis Techniques
- Explainable Artificial Intelligence (XAI)
- Advanced Multi-Objective Optimization Algorithms
- Indoor and Outdoor Localization Technologies
- Adversarial Robustness in Machine Learning
- Data Stream Mining Techniques
- Machine Learning in Healthcare
- Computational Physics and Python Applications
- Advanced Image and Video Retrieval Techniques
- Simulation Techniques and Applications
- Autonomous Vehicle Technology and Safety
- Sports Dynamics and Biomechanics
- Mental Health Research Topics
- Advanced Optimization Algorithms Research
- Text Readability and Simplification
- Anomaly Detection Techniques and Applications
University of California, Irvine
2024
Stanford Medicine
2021
Stanford University
2021
Cognizant (United States)
2020-2021
The University of Texas at Austin
2015-2019
Sentient Science (United States)
2018
University of California, Berkeley
2013-2015
Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep network architectures for multitask learning is a challenge: there are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state-of-the-art soft ordering architecture as a starting point, methods for evolving its modules and for evolving the overall topology or routing...
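A minimal sketch of the kind of search this describes, under loose assumptions: each task picks one shared module per layer, a toy fitness stands in for actually training and validating the multitask network, and the population size, mutation rate, and selection scheme are placeholders rather than the paper's settings.

```python
# Minimal sketch (not the paper's method): evolving how tasks route through a
# fixed pool of shared modules, in the spirit of soft-ordering multitask nets.
# The fitness function is a toy placeholder; in practice it would train and
# validate the resulting multitask network.
import random

NUM_TASKS, NUM_LAYERS, NUM_MODULES = 3, 4, 4

def random_routing():
    # One shared-module index per (task, layer) position.
    return [[random.randrange(NUM_MODULES) for _ in range(NUM_LAYERS)]
            for _ in range(NUM_TASKS)]

def toy_fitness(routing):
    # Placeholder: reward routings where tasks share some, but not all, modules.
    shared = sum(len(set(routing[t][l] for t in range(NUM_TASKS)))
                 for l in range(NUM_LAYERS))
    return -abs(shared - NUM_LAYERS * 2)

def mutate(routing, rate=0.1):
    return [[random.randrange(NUM_MODULES) if random.random() < rate else m
             for m in task] for task in routing]

population = [random_routing() for _ in range(20)]
for generation in range(30):
    population.sort(key=toy_fitness, reverse=True)
    parents = population[:5]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best routing:", max(population, key=toy_fitness))
```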
Image-based localization is an important problem with many applications. In our previous work, we presented a two-step pipeline for performing image-based localization of mobile devices in outdoor environments. In the first step, a query image is matched against a georeferenced 3D image database to retrieve the "closest" image. In the second step, the pose of the query is recovered with respect to the retrieved image using cell phone sensors. As such, a key ingredient of the approach is the georeferenced image database. In this paper, we extend the approach indoors by utilizing a locally referenced database generated by an ambulatory depth acquisition backpack that...
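A toy sketch of the retrieval step only, not the authors' pipeline: the descriptors and pose tags below are random stand-ins for real image features and georeferenced poses.

```python
# Illustrative sketch of the retrieval step: match a query image descriptor
# against a georeferenced descriptor database and return the pose tag of the
# closest entry. Descriptors and poses are random stand-ins for real data.
import numpy as np

rng = np.random.default_rng(0)
db_descriptors = rng.normal(size=(1000, 128))      # one row per database image
db_poses = rng.uniform(size=(1000, 6))             # [x, y, z, yaw, pitch, roll]

def retrieve_closest(query_descriptor, descriptors, poses):
    # L2 nearest neighbour over the whole database.
    distances = np.linalg.norm(descriptors - query_descriptor, axis=1)
    best = int(np.argmin(distances))
    return best, poses[best], distances[best]

query = rng.normal(size=128)
idx, pose, dist = retrieve_closest(query, db_descriptors, db_poses)
print(f"closest database image {idx}, distance {dist:.2f}, pose {pose}")
```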
Deep neural networks (DNNs) have produced state-of-the-art results in many benchmarks and problem domains. However, the success of DNNs depends on the proper configuration of their architecture and hyperparameters. Such configuration is difficult, and as a result, DNNs are often not used to their full potential. In addition, DNNs in commercial applications often need to satisfy real-world design constraints such as size or number of parameters. To make configuration easier, automatic machine learning (AutoML) systems for deep learning have been developed, focusing mostly on optimization...
Most optimization algorithms must undergo time-consuming parameter adaptation in order to solve complex, real-world control tasks optimally. Parameter adaptation is inherently a bilevel optimization problem: the lower-level objective function is the performance of the parameters discovered by an optimization algorithm, and the upper-level objective is the performance of the algorithm given its parametrization. In this paper, a novel method called the MetaEvolutionary Algorithm (MEA) is presented and shown to be capable of efficiently discovering optimal parameters for neuroevolution problems. In two challenging examples,...
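A minimal bilevel sketch under stated assumptions, not the MEA itself: the lower level here is a crude hill climber on a toy task, and the upper level evolves that climber's parameters; all names and settings are hypothetical.

```python
# Bilevel parameter adaptation in miniature: the outer loop evolves parameters
# of a lower-level optimizer; the lower-level objective is how well that
# optimizer minimizes a toy task given those parameters.
import random

def lower_level_run(step_size, noise, task=lambda x: (x - 3.0) ** 2, steps=50):
    # A crude (1+1)-style hill climber whose behaviour depends on the
    # parameters supplied by the upper level.
    x, best = 0.0, float("inf")
    for _ in range(steps):
        candidate = x + random.gauss(0.0, step_size) + random.gauss(0.0, noise)
        if task(candidate) < task(x):
            x = candidate
        best = min(best, task(x))
    return -best                                    # higher is better

def upper_level_fitness(params):
    # Average over a few runs to reduce evaluation noise.
    return sum(lower_level_run(*params) for _ in range(5)) / 5

population = [(random.uniform(0.01, 2.0), random.uniform(0.0, 0.5)) for _ in range(10)]
for generation in range(20):
    population.sort(key=upper_level_fitness, reverse=True)
    parents = population[:3]
    population = parents + [
        (max(1e-3, p[0] + random.gauss(0, 0.1)), max(0.0, p[1] + random.gauss(0, 0.05)))
        for p in (random.choice(parents) for _ in range(7))
    ]

print("best optimizer parameters:", max(population, key=upper_level_fitness))
```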
Image-based localization has important commercial applications such as augmented reality and customer analytics. In prior work, we developed a three-step pipeline for image-based localization of mobile devices in indoor environments. In the first step, we generate a 2.5D georeferenced image database using an ambulatory backpack-mounted system originally developed for 3D modeling. Specifically, we create a dense point cloud and a polygonal model from the side laser scanner measurements of the backpack, and then use it to generate depthmaps by raytracing the model...
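A tiny illustration of the raytraced-depthmap idea, not the authors' system: one ray per pixel is cast from a camera at the origin and the distance to a single hypothetical triangle is recorded, using the standard Möller–Trumbore ray/triangle intersection.

```python
# Depthmap-by-raytracing in miniature: record, per pixel, the distance along a
# camera ray to the nearest triangle of a polygonal model (here, one triangle).
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore intersection; returns the hit distance t or None.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                                # ray parallel to triangle
    t_vec = origin - v0
    u = (t_vec @ p) / det
    q = np.cross(t_vec, e1)
    v = (direction @ q) / det
    t = (e2 @ q) / det
    if u < 0 or v < 0 or u + v > 1 or t <= 0:
        return None
    return t

# Single-triangle "model" at z = 3 and a 4x4 toy depthmap.
tri = (np.array([-1., -1., 3.]), np.array([1., -1., 3.]), np.array([0., 1., 3.]))
depth = np.full((4, 4), np.inf)
for i, yy in enumerate(np.linspace(-0.5, 0.5, 4)):
    for j, xx in enumerate(np.linspace(-0.5, 0.5, 4)):
        d = np.array([xx, yy, 1.0])
        d /= np.linalg.norm(d)
        hit = ray_triangle(np.zeros(3), d, *tri)
        if hit is not None:
            depth[i, j] = hit
print(depth)                                       # inf where the ray misses
```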
Directed network motifs are the building blocks of complex networks, such as the human brain, and capture deep connectivity information that is not contained in standard network measures. In this paper we present the first application of directed motifs in vivo, utilizing recently developed directed progression networks which are built upon rates of cortical thickness changes between brain regions. This is in contrast to previous studies, which have relied on simulations and in vitro analysis of non-human brains. We show that the frequencies of specific motifs can be used to distinguish...
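A toy illustration of what "motif frequencies" means, not the paper's progression-network analysis: two classic 3-node directed motifs are counted in a small random directed graph; a real analysis would use brain-region networks and a larger motif catalogue.

```python
# Count two 3-node directed motifs, the feed-forward loop and the 3-cycle, in
# a small random directed graph.
import itertools, random

random.seed(1)
nodes = range(12)
edges = {(u, v) for u in nodes for v in nodes
         if u != v and random.random() < 0.2}

def count_motifs(nodes, edges):
    feed_forward, cycles = 0, 0
    for a, b, c in itertools.permutations(nodes, 3):
        if (a, b) in edges and (b, c) in edges:
            if (a, c) in edges:
                feed_forward += 1        # a->b->c plus the shortcut a->c
            if (c, a) in edges:
                cycles += 1              # a->b->c->a
    return feed_forward, cycles // 3     # each 3-cycle is counted three times

ffl, cyc = count_motifs(nodes, edges)
print(f"feed-forward loops: {ffl}, directed 3-cycles: {cyc}")
```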
Metalearning of deep neural network (DNN) architectures and hyperparameters has become an increasingly important area of research. At the same time, regularization has been recognized as a crucial dimension of effective training of DNNs. However, the role of metalearning in establishing effective regularization has not yet been fully explored. There is recent evidence that loss-function optimization could play this role; however, it is computationally impractical as an outer loop to full training. This paper presents an algorithm called Evolutionary...
We implement stacked denoising autoencoders, a class of neural networks that are capable of learning powerful representations of high-dimensional data. We describe stochastic gradient descent for unsupervised training as well as a novel genetic algorithm based approach that makes use of gradient information. We analyze the performance of both optimization algorithms and also the representation ability of the autoencoder when it is trained on standard image classification datasets.
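A compact sketch of one denoising-autoencoder layer trained with plain SGD; this assumes tied encoder/decoder weights and masking noise, and the genetic-algorithm variant described in the abstract is not shown.

```python
# Denoising autoencoder layer with tied weights, trained by SGD (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 64))                     # stand-in data in [0, 1]

n_in, n_hidden, lr, corruption = 64, 32, 0.1, 0.3
W = rng.normal(0, 0.1, (n_in, n_hidden))      # tied weights: decoder uses W.T
b_h, b_o = np.zeros(n_hidden), np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20):
    for x in X:
        # Corrupt the input by zeroing a random subset of features.
        x_noisy = x * (rng.random(n_in) > corruption)
        h = sigmoid(x_noisy @ W + b_h)        # encode
        y = sigmoid(h @ W.T + b_o)            # decode (reconstruction)
        # Squared-error loss; backpropagate through the tied weights.
        d_y = (y - x) * y * (1 - y)
        d_h = (d_y @ W) * h * (1 - h)
        W -= lr * (np.outer(x_noisy, d_h) + np.outer(d_y, h))
        b_o -= lr * d_y
        b_h -= lr * d_h

recon = sigmoid(sigmoid(X @ W + b_h) @ W.T + b_o)
print("mean reconstruction error:", float(np.mean((recon - X) ** 2)))
```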
BACKGROUND: OpenAI’s ChatGPT and other Generative Language Models (GLMs) have rapidly increased in popularity. Such language models have the potential to greatly benefit the medical field. Despite the rapid rise in implementation of such technologies, no standardized framework currently exists to discuss prompting techniques for these models. OBJECTIVE: We aim to establish a nomenclature of “variable” and “clause” for prompting a generative model, while providing example interviews that outline...
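An illustrative sketch of what a variable/clause split in a prompt could look like; the exact nomenclature and examples proposed by the paper may differ, and the clause texts below are hypothetical.

```python
# Composing a prompt from named "variables" (slots filled per question) and
# reusable "clauses" (fixed instruction fragments). Purely illustrative.
CLAUSES = {
    "role": "You are a clinician explaining results to a patient.",
    "style": "Answer in plain language, at an 8th-grade reading level.",
    "safety": "Do not provide a diagnosis; recommend follow-up with a physician.",
}

def build_prompt(variables, clause_keys):
    clause_text = " ".join(CLAUSES[k] for k in clause_keys)
    return (f"{clause_text}\n"
            f"Patient age: {variables['age']}. "
            f"Question: {variables['question']}")

prompt = build_prompt(
    variables={"age": 54, "question": "What does an HbA1c of 7.2% mean?"},
    clause_keys=["role", "style", "safety"],
)
print(prompt)
```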
Many evolutionary algorithms (EAs) take advantage of parallel evaluation of candidates. However, if evaluation times vary significantly, many worker nodes (i.e., compute clients) are idle much of the time, waiting for the next generation to be created. Evolutionary neural architecture search (ENAS), a class of EAs that optimizes the architectures and hyperparameters of deep networks, is particularly vulnerable to this issue. This paper proposes a generic asynchronous evaluation strategy (AES) that is then adapted to work with ENAS. AES increases throughput by...
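A minimal sketch in the spirit of asynchronous evaluation, not the paper's implementation: new candidates are submitted as soon as any worker finishes, so a slow evaluation never blocks the rest of the population; the fitness function and mutation are toy placeholders.

```python
# Asynchronous candidate evaluation with a worker pool (illustrative).
import random, time
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def evaluate(candidate):
    time.sleep(random.uniform(0.01, 0.2))        # simulate variable eval time
    return candidate, -abs(candidate - 0.7)      # toy fitness

def new_candidate(evaluated):
    if not evaluated:
        return random.random()
    best, _ = max(evaluated, key=lambda cf: cf[1])
    return min(1.0, max(0.0, best + random.gauss(0, 0.1)))   # mutate the best

evaluated, budget = [], 50
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = {pool.submit(evaluate, random.random()) for _ in range(4)}
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for future in done:
            evaluated.append(future.result())
            budget -= 1
            if budget > 0:
                pending.add(pool.submit(evaluate, new_candidate(evaluated)))

print("best candidate:", max(evaluated, key=lambda cf: cf[1]))
```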
Deep learning (DL) has transformed much of AI, and demonstrated how machine learning can make a difference in the real world. Its core technology is gradient descent, which has been used in neural networks since the 1980s. However, the massive expansion of available training data and compute gave it a new instantiation that significantly increased its power.
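Gradient descent in its simplest form, for readers unfamiliar with the term (illustrative, not tied to any specific paper): fit a line by repeatedly stepping against the gradient of the squared error.

```python
# Plain gradient descent on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 200)     # noisy line to recover

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    err = w * x + b - y
    w -= lr * 2 * np.mean(err * x)              # d/dw of mean squared error
    b -= lr * 2 * np.mean(err)                  # d/db of mean squared error

print(f"recovered w={w:.2f}, b={b:.2f}")        # approximately 2.0 and 0.5
```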