- Dye Analysis and Toxicity
- Aerospace and Aviation Technology
- Identification and Quantification in Food
- Air Traffic Management and Optimization
- Biochemical Analysis and Sensing Techniques
- Autonomous Vehicle Technology and Safety
- Melanin and Skin Pigmentation
- Gold and Silver Nanoparticles Synthesis and Applications
- Mercury Impact and Mitigation Studies
- Robotic Path Planning Algorithms
- Human-Automation Interaction and Safety
- Adaptive Control of Nonlinear Systems
Harbin Institute of Technology
2022-2024
Shanghai Ocean University
2015-2018
Synthetic colorants in food can pose a potential threat to human health. In this study, surface-enhanced Raman spectroscopy (SERS) coupled with gold nanorods as substrates is proposed to analyze allura red and sunset yellow in beverages. Gold nanorods with different aspect ratios were synthesized, and their long-term stability, SERS activity, and the effect of salts on the signal were investigated. The results demonstrate that the nanorods have satisfactory stability (stored for up to 28 days) and exhibit stronger sensitivity. MgSO4 was chosen to improve...
Au‐Ag core‐shell (Au@Ag) bimetallic nanospheres synthesized by a facile seed‐growth method are proposed as a substrate for surface‐enhanced Raman spectroscopy (SERS) to detect azo‐group dyes, including Sudan I and Sudan II. Au@Ag nanospheres with a series of particle sizes (diameter: 30–120 nm) and silver shell thicknesses (6–51 nm) were compared in terms of their morphological and optical properties to obtain the optimum enhancement effect. Normal Raman, SERS, infrared, and ultraviolet‐visible spectroscopy were used to investigate the absorption of Sudan I and II as well as the mechanism...
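For orientation only, since the excerpt does not state which metric the paper uses to rank the substrates, SERS performance is commonly quantified with the analytical enhancement factor, where I and c denote the band intensity and analyte concentration under SERS and normal Raman conditions:

```latex
% Standard analytical enhancement factor (reference definition, not
% necessarily the exact metric used in this paper):
\mathrm{AEF} = \frac{I_{\mathrm{SERS}} / c_{\mathrm{SERS}}}{I_{\mathrm{RS}} / c_{\mathrm{RS}}}
```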
A fixed-wing aircraft can find itself in the final phase of a potential collision with a non-cooperative dynamic obstacle (e.g., a drone) because of its limited sensing range. In such a collision phase, the performance of existing avoidance approaches that do not take the aircraft's bounded, non-isotropic maneuver capability and aerodynamic characteristics into account is limited. To address this, this study develops a hierarchical Reinforcement Learning (RL)-based strategy. The RL-based strategy learns a high-level navigator that provides a velocity vector to...
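The excerpt is truncated, but to illustrate the hierarchical split it describes (a learned high-level navigator issuing a velocity-vector command that a low-level controller tracks), a minimal Python sketch follows. All class names, the policy interface, and the gains are hypothetical and are not taken from the paper.

```python
import numpy as np

class HighLevelNavigator:
    """Hypothetical stand-in for the learned high-level policy: maps the
    relative state of the intruder to a commanded velocity vector."""

    def __init__(self, policy_fn):
        # policy_fn: trained policy, observation -> desired velocity (3,)
        self.policy_fn = policy_fn

    def command(self, own_state, obstacle_state):
        observation = np.concatenate([own_state, obstacle_state])
        return self.policy_fn(observation)


class LowLevelController:
    """Hypothetical low-level tracker: turns the commanded velocity vector
    into bank/pitch/throttle setpoints within fixed maneuver limits."""

    def __init__(self, max_bank_deg=45.0, max_pitch_deg=15.0):
        self.max_bank_deg = max_bank_deg
        self.max_pitch_deg = max_pitch_deg

    def track(self, current_velocity, commanded_velocity):
        err = commanded_velocity - current_velocity
        # Placeholder proportional mapping; a real controller would respect
        # the aircraft's bounded, non-isotropic maneuver envelope.
        bank = float(np.clip(2.0 * err[1], -self.max_bank_deg, self.max_bank_deg))
        pitch = float(np.clip(2.0 * err[2], -self.max_pitch_deg, self.max_pitch_deg))
        throttle = float(np.clip(0.5 + 0.1 * err[0], 0.0, 1.0))
        return bank, pitch, throttle


# Example with a dummy policy that always commands a climbing right turn.
navigator = HighLevelNavigator(lambda obs: np.array([60.0, 5.0, 2.0]))
controller = LowLevelController()
cmd = navigator.command(np.zeros(6), np.zeros(6))
print(controller.track(np.array([55.0, 0.0, 0.0]), cmd))
```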
Researchers have made many attempts to apply reinforcement learning (RL) to learn to fly aircraft in recent years. However, existing RL strategies are usually not safe (e.g., they can lead to a crash) in the initial stage of training an RL-based policy. For increasingly complex piloting tasks whose representative models are hard to establish, it is necessary to train the policy by interacting with the aircraft. To enhance the safety and feasibility of applying RL to aircraft, this study develops an offline–online strategy. The strategy learns an effective...
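As a rough sketch of the offline–online idea the excerpt describes (pretrain on logged data so the initial policy is not dangerously random, then fine-tune by interaction), the skeleton below uses entirely hypothetical `policy` and `env` interfaces; it is not the paper's algorithm.

```python
def train_offline(policy, logged_batches, epochs=50):
    """Pretrain on previously collected flight data so the initial online
    policy is not dangerously random (hypothetical interface)."""
    for _ in range(epochs):
        for batch in logged_batches:
            policy.update_offline(batch)  # e.g., behavior cloning / offline RL
    return policy


def fine_tune_online(policy, env, episodes=100):
    """Continue learning by interacting with the (simulated) aircraft,
    starting from the safer offline-pretrained policy."""
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            action = policy.act(obs)
            next_obs, reward, done, info = env.step(action)
            policy.update_online(obs, action, reward, next_obs, done)
            obs = next_obs
    return policy
```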
Obstacle avoidance is a crucial issue for enhancing the safety of aircraft. An aircraft usually needs to avoid a drop in altitude and keep a set course while avoiding an obstacle. In this paper, a hierarchical obstacle avoidance strategy is proposed to address obstacle avoidance, altitude keeping, and course keeping simultaneously. The strategy integrates a high-level reinforcement learning-based navigator with a low-level attitude controller. Experiments are conducted in X-Plane, a flight simulator, to evaluate the strategy.
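To make the three simultaneous objectives concrete, here is a hypothetical per-step reward that penalizes obstacle proximity, altitude loss, and course deviation. The weights, units, and safety threshold are illustrative assumptions, not values from the paper.

```python
def shaped_reward(obstacle_distance_m, altitude_drop_m, course_error_deg,
                  w_obstacle=1.0, w_altitude=0.5, w_course=0.2,
                  safe_distance_m=100.0):
    """Hypothetical reward combining the abstract's three goals: stay clear
    of the obstacle, avoid losing altitude, and hold the set course."""
    r_obstacle = -w_obstacle if obstacle_distance_m < safe_distance_m else 0.0
    r_altitude = -w_altitude * max(altitude_drop_m, 0.0)   # penalize altitude loss
    r_course = -w_course * abs(course_error_deg)           # penalize heading error
    return r_obstacle + r_altitude + r_course
```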