- Robotic Path Planning Algorithms
- Robotics and Sensor-Based Localization
- Neural Networks Stability and Synchronization
- Robotics and Automated Systems
- Stability and Control of Uncertain Systems
- Software Reliability and Analysis Research
- Integrated Circuits and Semiconductor Failure Analysis
- Autonomous Vehicle Technology and Safety
- Infrared Thermography in Medicine
- Topic Modeling
- Advanced Malware Detection Techniques
- Thermoregulation and Physiological Responses
- Visual Attention and Saliency Detection
- Service-Oriented Architecture and Web Services
- Physical Unclonable Functions (PUFs) and Hardware Security
- Distributed Control Multi-Agent Systems
- Adversarial Robustness in Machine Learning
- CCD and CMOS Imaging Sensors
- Modular Robots and Swarm Intelligence
- Advanced Optical Sensing Technologies
- Neuroscience and Music Perception
- Tensor Decomposition and Applications
- Human Pose and Action Recognition
- Grey System Theory Applications
- Sentiment Analysis and Opinion Mining
Shunde Polytechnic
2020-2024
University of Rochester
2019-2024
Institute of Computing Technology
2024
Chinese Academy of Sciences
2024
Guangdong Technology College
2020-2024
Rochester Institute of Technology
2022
Georgia Institute of Technology
2022
Donghua University
2010-2020
Ocean University of China
2017
University of California, Davis
2010-2012
Modern speech enhancement algorithms achieve remarkable noise suppression by means of large recurrent neural networks (RNNs). However, large RNNs limit practical deployment in hearing aid hardware (HW) form factors, which are battery powered and run on resource-constrained microcontroller units (MCUs) with limited memory capacity and compute capability. In this work, we use model compression techniques to bridge this gap. We define the constraints imposed on the RNN by the HW and describe a method to satisfy them. Although an...
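The excerpt does not show which compression technique the paper uses; as an illustration of the general idea, here is a minimal sketch of symmetric 8-bit post-training quantization of a weight matrix (all names and shapes are hypothetical), which shrinks float32 storage by 4x for MCU deployment:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 weight matrix from its int8 form."""
    return q.astype(np.float32) * scale

# Hypothetical RNN weight matrix; real compression pipelines would also
# consider pruning and per-channel scales.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32) * 0.1
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
err = np.abs(w - w_hat).max()  # bounded by roughly half the quantization step
```

Storing `q` (int8) instead of `w` (float32) cuts the weight memory footprint by 4x, trading a small, bounded reconstruction error.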
3D perception in point clouds is transforming the capabilities of future intelligent machines. Point cloud algorithms, however, are plagued by irregular memory accesses, leading to massive inefficiencies in the memory sub-system, which bottleneck overall efficiency.
Despite extensive efforts, existing approaches to designing accelerators for optimization-based robotic applications have limitations. Some focus on accelerating general matrix operations, but they fail to fully exploit the specific sparse structure commonly found in many robotic algorithms. On the other hand, certain methods require manual design of dedicated accelerators, resulting in inefficiencies and significant non-recurring engineering (NRE) costs.
Deep learning is vulnerable to adversarial attacks, where carefully crafted input perturbations can mislead a well-trained Deep Neural Network (DNN) into producing incorrect results. Adversarial attacks jeopardize the safety, security, and privacy of DNN-enabled systems. Today's countermeasures either do not have the capability to detect adversarial samples at inference time, or introduce prohibitively high overhead to be practical at inference time. We propose Ptolemy, an algorithm-architecture co-designed system that detects...
Despite many recent efforts, accelerating robotic computing is still fundamentally challenging for two reasons. First, the robotics software stack is extremely complicated. Manually designing an accelerator while meeting the latency, power, and resource specifications is unscalable. Second, the environment in which an autonomous machine operates constantly changes; a static design leads to wasteful computation.
We develop and commercialize autonomous machines, such as logistic robots and self-driving cars, around the globe. A critical challenge to our (and any) autonomous machine is accurate and efficient localization under resource constraints, which has fueled specialized accelerators recently. Prior acceleration efforts are point solutions in that they each specialize for a specific algorithm. In real-world commercial deployments, however, machines routinely operate in different environments, and no single...
Moving toward reliable autonomous machines.
Embodied AI agents responsible for executing interconnected, long-sequence household tasks often face difficulties with in-context memory, leading to inefficiencies and errors in task execution. To address this issue, we introduce KARMA, an innovative memory system that integrates long-term and short-term memory modules, enhancing large language models' (LLMs) planning for embodied agents through memory-augmented prompting. KARMA distinguishes between long-term and short-term memory, capturing comprehensive 3D scene graphs as representations of...
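The abstract describes a split between persistent long-term memory and a bounded short-term buffer feeding memory-augmented prompts. A toy sketch of that structure (not KARMA's actual implementation; the class and field names are illustrative) might look like:

```python
from collections import deque

class AgentMemory:
    """Toy split memory: a persistent long-term store of stable scene facts
    plus a bounded short-term buffer of recent events, both rendered into
    the planner's prompt (memory-augmented prompting)."""

    def __init__(self, short_capacity: int = 5):
        self.long_term = {}                          # stable facts, e.g. "mug" -> "on table"
        self.short_term = deque(maxlen=short_capacity)  # recent events, oldest evicted first

    def remember_fact(self, key: str, value: str) -> None:
        self.long_term[key] = value

    def observe(self, event: str) -> None:
        self.short_term.append(event)

    def augment_prompt(self, task: str) -> str:
        facts = "; ".join(f"{k}: {v}" for k, v in self.long_term.items())
        recent = "; ".join(self.short_term)
        return f"Known scene: {facts}\nRecent events: {recent}\nTask: {task}"
```

The bounded `deque` means stale events fall out of the short-term buffer automatically, while scene facts persist across tasks in the long-term store.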
Autonomous machines, such as Autonomous Vehicles (AVs), are vulnerable to a variety of faults, including radiation-induced soft/transient errors, adversarial attacks, and software bugs, all of which jeopardize the reliability of autonomous machines. How resilient the AV software stack is to these error sources, however, remains an open question. This paper performs comprehensive fault injections to study how the AV stack behaves under these error sources. We show that algorithms in the AV stack inherently possess forms of masking mechanisms. Based on this inherent characteristic...
Accurate and efficient localization of robots under limited on-board resources has fueled specialized accelerators. Despite many recent efforts, accelerating robotic localization is still fundamentally challenging. To tackle the challenges, this paper proposes a configurable hardware architecture and a design-space optimization method to automatically generate an optimal accelerator under given constraints. Data locality, sparsity, and fixed-point arithmetic techniques that are specific to the algorithm are exploited to customize the accelerator....
A factor graph is a graphical model representing the factorization of a probability distribution function and serves as a perfect abstraction in many autonomous machine computing stacks, such as planning, localization, tracking, and control, which are challenging tasks for systems with real-time and energy constraints. In this paper, we present BLITZCRANK, an accelerator for motion planning algorithms using factor graphs. By formulating motion planning as inference, we successfully reduce the scale of the problem and utilize the inherent matrix sparsity. BLITZCRANK is able...
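To illustrate the connection the abstract draws between factor-graph inference and sparse matrices: for a Gaussian factor graph, MAP inference reduces to a linear least-squares problem in which every factor contributes one sparse row. A minimal sketch on a hypothetical 1-D pose chain (not BLITZCRANK's formulation):

```python
import numpy as np

# Toy 1-D pose chain with variables x0, x1, x2.
# Factors: a prior anchoring x0 at 0, and two odometry factors
# x1 - x0 = 1 and x2 - x1 = 1. Each factor is one sparse row of J.
J = np.array([
    [ 1.0,  0.0, 0.0],   # prior:    x0          = 0
    [-1.0,  1.0, 0.0],   # odometry: x1 - x0     = 1
    [ 0.0, -1.0, 1.0],   # odometry: x2 - x1     = 1
])
r = np.array([0.0, 1.0, 1.0])

# MAP estimate = least-squares solution of J x = r
x, *_ = np.linalg.lstsq(J, r, rcond=None)
# x ≈ [0, 1, 2]
```

The structural sparsity of `J` (each row touches only the variables its factor connects) is exactly what a hardware accelerator can exploit, since it persists as the chain grows.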
Continuous vision is the cornerstone of a diverse range of intelligent applications found on emerging computing platforms such as autonomous machines and Augmented Reality glasses. A critical issue in today's continuous vision systems is their long end-to-end frame latency, which significantly impacts system agility and user experience. We find that the long latency is fundamentally caused by the serialized execution model of the pipeline, whose key stages, including sensing, imaging, and computations, execute sequentially, leading to long latency.
Building open-ended agents has always been the ultimate goal in AI research, and creative agents are even more enticing. Existing LLM agents excel at long-horizon tasks with well-defined goals (e.g., 'mine diamonds' in Minecraft). However, they encounter difficulties on tasks with abstract criteria due to the inability to bridge the gap between them, thus lacking feedback for self-improvement when solving the task. In this work, we introduce autonomous embodied verification techniques to fill the gap, laying the groundwork for such tasks. Specifically, we propose Luban...
Performing complex tasks in open environments remains challenging for robots, even when using large language models (LLMs) as the core planner. Many LLM-based planners are inefficient due to their large number of parameters and are prone to inaccuracies because they operate in open-loop systems. We think the reason is that only applying LLMs is insufficient. In this work, we propose DaDu-E, a robust closed-loop planning framework for embodied AI robots. Specifically, DaDu-E is equipped with a relatively lightweight LLM,...
The next ubiquitous computing platform, following personal computers and smartphones, is poised to be inherently autonomous, encompassing technologies like drones, robots, and self-driving cars. Ensuring the reliability of these autonomous machines is critical. However, current resiliency solutions make fundamental trade-offs between resilience and cost, resulting in significant overhead in performance, energy consumption, and chip area. This is due to the "one-size-fits-all" approach commonly used, where the same protection scheme...
The accuracy and applicability of most traditional software defect prediction models are not very high. This paper puts forward a model based on grey relational analysis (GRA) and support vector machine (SVM) to solve the problem. In this paper, we first use GRA to reduce the dimensionality of the metrics, removing data that were irrelevant to defects, in order to improve the speed of operation. Then, the prediction model is established by using SVM, and 10-fold cross-validation is adopted to validate the model.
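The GRA step described above can be sketched numerically: each metric is scored by its grey relational grade against the defect sequence, and low-grade metrics are dropped before training the SVM. A minimal NumPy sketch (the function name and example data are illustrative, not the paper's):

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence w.r.t. the reference.
    A higher grade means the metric tracks the reference more closely."""
    ref = np.asarray(reference, dtype=float)
    cands = np.asarray(candidates, dtype=float)

    # Min-max normalize each sequence to [0, 1] so magnitudes are comparable.
    def norm(a):
        span = a.max(axis=-1, keepdims=True) - a.min(axis=-1, keepdims=True)
        return (a - a.min(axis=-1, keepdims=True)) / np.where(span == 0, 1, span)

    delta = np.abs(norm(cands) - norm(ref))     # pointwise deviations
    d_min, d_max = delta.min(), delta.max()
    # Grey relational coefficient with distinguishing coefficient rho = 0.5.
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=-1)                  # grade = mean coefficient

# Hypothetical data: defect counts per module, and two candidate metrics.
defects = [2, 4, 6, 8]
metrics = [[1, 2, 3, 4],    # same trend as defects -> high grade
           [9, 1, 7, 2]]    # unrelated trend       -> low grade
grades = grey_relational_grades(defects, metrics)
```

Metrics whose grade falls below a chosen threshold would be discarded, and the remaining columns fed to the SVM with 10-fold cross-validation.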
This paper discusses the problem of exponential stability for Markovian neutral stochastic systems with general transition probabilities and time-varying delay. Based on a non-convolution-type multiple Lyapunov function analysis method, we obtain conditions that are independent of any decay rate and of the uncertainties. Finally, two examples are presented to illustrate the effectiveness and potential of the proposed results.
Deep learning is vulnerable to adversarial attacks, where carefully crafted input perturbations can mislead a well-trained Deep Neural Network into producing incorrect results. Today's countermeasures to adversarial attacks either do not have the capability to detect adversarial samples at inference time, or introduce prohibitively high overhead to be practical at inference time. We propose Ptolemy, an algorithm-architecture co-designed system that detects adversarial samples at inference time with low overhead and high accuracy. We exploit the synergies between DNN inference and imperative program execution:...