- Cloud Computing and Resource Management
- IoT and Edge/Fog Computing
- Distributed and Parallel Computing Systems
- 3D Surveying and Cultural Heritage
- 3D Shape Modeling and Analysis
- Energy Efficient Wireless Sensor Networks
- Software-Defined Networks and 5G
- Software System Performance and Reliability
- Speech Recognition and Synthesis
- Caching and Content Delivery
- Remote Sensing and LiDAR Applications
- Advanced Multi-Objective Optimization Algorithms
- Indoor and Outdoor Localization Technologies
- Misinformation and Its Impacts
- Spam and Phishing Detection
- Maritime Navigation and Safety
- UAV Applications and Optimization
- Artificial Intelligence in Games
- Green IT and Sustainability
- Sports Analytics and Performance
- Age of Information Optimization
- Distributed Control Multi-Agent Systems
- Data Management and Algorithms
- Parallel Computing and Optimization Techniques
- Satellite Communication Systems
Tianjin University
2024-2025
Northwest University
2024
Tianjin University of Technology
2023-2024
Princeton University
2024
University of Virginia
2017-2023
University of Science and Technology Beijing
2019-2023
Guilin University of Technology
2023
Shenyang Aerospace University
2021
Yunnan University
2021
Purdue University West Lafayette
2021
A large-scale cloud data center needs to provide high service reliability and availability with a low failure occurrence probability. However, current data centers still face high failure rates due to many reasons, such as hardware and software failures, which often result in task and job failures. Such failures can severely reduce the reliability of cloud services and also occupy a huge amount of resources to recover from. Therefore, it is important to predict task or job failures before they occur with high accuracy, to avoid unexpected resource wastage. Many machine learning and deep learning based methods have been...
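As an illustration of failure prediction framed as binary classification, the sketch below scores tasks with a toy logistic model over resource metrics; the feature names and weights are assumptions for demonstration, not the trained model from the paper.

```python
import math

def failure_score(cpu_util, mem_util, retries):
    """Toy logistic scorer for task-failure risk (hand-set illustrative
    weights, not the paper's trained model)."""
    z = 3.0 * cpu_util + 2.0 * mem_util + 0.8 * retries - 4.0
    return 1.0 / (1.0 + math.exp(-z))

def predict_failures(tasks, threshold=0.5):
    # Flag tasks whose risk score exceeds the threshold so they can be
    # handled before resources are wasted on a doomed run.
    return [t["id"] for t in tasks
            if failure_score(t["cpu"], t["mem"], t["retries"]) > threshold]

tasks = [
    {"id": "t1", "cpu": 0.95, "mem": 0.90, "retries": 2},  # high risk
    {"id": "t2", "cpu": 0.30, "mem": 0.20, "retries": 0},  # low risk
]
print(predict_failures(tasks))  # → ['t1']
```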
Fog computing makes up for the shortcomings of cloud computing. It brings many advantages, but various peculiarities must be addressed at the same time, such as security, resource management, and storage. This paper investigates the contribution model between fog nodes and users when blockchain is introduced. The proposed approach uses a reward and punishment mechanism on the blockchain to encourage nodes to contribute resources actively. Nodes' contribution behavior and the completion degree of tasks are also packaged into...
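A minimal sketch of the idea, assuming a simplified chain and illustrative field names and credit values (none of which come from the paper): each contribution record is hashed into a block, and a node's credit rises or falls with its task completion.

```python
import hashlib
import json

def make_block(prev_hash, record):
    """Package a node's contribution record into a hash-chained block
    (simplified chain; field names are illustrative)."""
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"hash": hashlib.sha256(body.encode()).hexdigest(),
            "prev": prev_hash,
            "record": record}

def update_credit(credit, completion):
    # Reward nodes that complete their tasks, punish those that do not
    # (the +5/-3 magnitudes are arbitrary for the sketch).
    return credit + 5 if completion >= 1.0 else credit - 3

chain = [make_block("0" * 64, {"node": "fog-1", "completion": 1.0})]
credit = update_credit(10, chain[-1]["record"]["completion"])
print(credit)  # → 15
```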
The San Andreas Fault system, known for its frequent seismic activity, provides an extensive dataset for earthquake studies. The region's well-instrumented seismic networks have been crucial in advancing research on earthquake statistics, earthquake physics, and subsurface Earth structures. In recent years, data from California has become increasingly valuable for deep learning applications, such as Generalized Phase Detection (GPD) for phase detection and polarity determination, and PhaseNet for arrival-time picking. The continuous accumulation of...
In a modern cloud datacenter, cascading failure will cause many Service Level Objective (SLO) violations. In a cascading failure, when a set of physical machines (PMs) in one domain fail, their workloads are transferred to the PMs of another domain to continue. However, the new receiving PMs, taking on the additional workload, may become overloaded due to the resource oversubscription feature of the cloud, which easily leads to further failures and subsequent workload transfers to other domains. This process repeats until a cascading failure is finally created. Few previous methods can effectively handle...
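The cascade described above can be sketched as a toy simulation, under the assumption that each failed domain's entire load shifts to the next domain in line (a deliberate simplification of the oversubscription behavior):

```python
def simulate_cascade(domains, capacity):
    """Toy cascade: a domain whose load exceeds capacity fails, and its
    workload is transferred onward to the next domain."""
    loads = dict(domains)
    order = list(loads)
    failed = []
    for i, d in enumerate(order):
        if loads[d] > capacity:
            failed.append(d)
            if i + 1 < len(order):           # transfer workload onward
                loads[order[i + 1]] += loads[d]
            loads[d] = 0
    return failed

# Domain A is overloaded; its transferred load then overloads B, then C.
print(simulate_cascade({"A": 120, "B": 60, "C": 50}, capacity=100))
# → ['A', 'B', 'C']
```

The point of the sketch is that B and C were individually healthy; only the transferred load pushes them over capacity, which is exactly the repeating process the abstract describes.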
With the rapid development of web applications in datacenters, network latency becomes more important to user experience. Latency will be greatly increased by incast congestion, in which a huge number of responses to requests arrive at a front-end server simultaneously. Previous solutions to this problem usually handle data transmission between servers directly, so they are not sufficiently effective at proactively avoiding congestion. To further improve effectiveness, in this paper, we propose a Proactive Incast Congestion...
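One generic proactive strategy (offered here as an assumption for illustration, not as the proposed system itself) is for the front-end to stagger its requests into rounds so that only a bounded number of servers respond at once:

```python
def schedule_requests(servers, max_concurrent):
    """Stagger requests into rounds so that no more than max_concurrent
    servers transmit to the front-end simultaneously."""
    return [servers[i:i + max_concurrent]
            for i in range(0, len(servers), max_concurrent)]

rounds = schedule_requests([f"s{i}" for i in range(7)], max_concurrent=3)
print(rounds)  # → [['s0', 's1', 's2'], ['s3', 's4', 's5'], ['s6']]
```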
Renewable energy supply is a promising solution for datacenters' increasing electricity monetary cost, energy consumption, and harmful gas emissions. However, due to the instability of renewable energy, an insufficient supply may lead to the use of stored or brown energy. To handle this problem, in this paper, we propose an instability-resilient energy allocation system. We define a job's service-level-objective (SLO) as its probability of running successfully by only using the supplied renewable energy. The system allocates jobs with the same SLO level to physical...
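The grouping step can be sketched as follows; the subsequent placement of each group onto PMs with matching renewable supply is omitted, and the job/SLO values are made up for the example:

```python
from collections import defaultdict

def allocate_by_slo(jobs):
    """Group jobs by SLO level (success probability on renewable supply
    alone) so each group can be placed on PMs that meet that level."""
    groups = defaultdict(list)
    for job, slo in jobs:
        groups[slo].append(job)
    return dict(groups)

jobs = [("j1", 0.99), ("j2", 0.90), ("j3", 0.99)]
print(allocate_by_slo(jobs))  # → {0.99: ['j1', 'j3'], 0.9: ['j2']}
```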
With the rapid proliferation of Machine Learning (ML) and Deep Learning (DL) applications running on modern platforms, it is crucial to satisfy application performance requirements such as meeting deadlines and ensuring accuracy. To this end, researchers have proposed several job schedulers for ML clusters. However, none of the previous schedulers consider model parallelism, though it has been an effective approach to increase the efficiency of large-scale DL jobs. Thus, in this paper, we propose an ML Feature based Scheduling system (MLFS)...
Fake news travels at unprecedented speeds, reaches global audiences, and puts users and communities at great risk via social media platforms. Deep learning based models show good performance when trained on large amounts of labeled data for events of interest, whereas the performance tends to degrade on other events due to domain shift. Therefore, significant challenges are posed for existing detection approaches in detecting fake news on emergent events, where large-scale labeled datasets are difficult to obtain. Moreover, adding knowledge from newly...
Potential inconsistencies between the goals of unsupervised representation learning and clustering within multi-stage deep clustering methods can diminish the effectiveness of these techniques. However, because the representation learning goal is inherently flexible and can be tailored to clustering, we introduce PointStaClu, a novel single-stage point cloud clustering method. This method employs stable cluster discrimination (StaClu) to tackle the inherent instability present in single-stage training. It achieves this by constraining the gradient descent updates for negative instances...
In Web applications served by datacenters nowadays, incast congestion at the front-end server seriously degrades data request latency performance, due to vast simultaneous transmissions from a large number of servers in a short time. Previous incast control methods usually consider each server directly, which makes it difficult to control the sending speed or adjust the workloads for the transient transmission of only a few objects from each server. In this paper, we propose a Swarm-based Incast Congestion Control (SICC) system. SICC forms all the target servers of one same...
The performance of web browsers has become a major bottleneck when dealing with complex webpages. Many calculation redundancies exist when processing similar webpages; thus it is possible to cache and reuse previously calculated intermediate results to improve browser performance significantly. In this paper, we propose a similarity-based optimization approach for webpage processing in browsers. Through caching and reusing style properties calculated previously, we are able to eliminate the redundancies caused by similar webpages from the same website. We use a tree-structured...
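The caching idea can be sketched as memoization of computed style, keyed here by an element's path in the tree; the key choice and the stand-in "expensive" computation are assumptions for illustration, not the paper's actual cache design.

```python
style_cache = {}

def computed_style(tag_path, compute):
    """Memoize computed style properties keyed by the element's tree path,
    so similar pages from the same site skip redundant style calculation."""
    if tag_path not in style_cache:
        style_cache[tag_path] = compute(tag_path)
    return style_cache[tag_path]

calls = []
def expensive(path):
    calls.append(path)        # stands in for the full CSS cascade/resolution
    return {"font-size": "14px"}

computed_style("html>body>div.nav", expensive)
computed_style("html>body>div.nav", expensive)   # served from the cache
print(len(calls))  # → 1
```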
Achieving carbon neutrality within industrial operations has become increasingly imperative for sustainable development. It is both a significant challenge and a key opportunity for operational optimization in Industry 4.0. In recent years, Deep Reinforcement Learning (DRL) based methods have offered promising enhancements for sequential optimization processes and can be used to reduce emissions. However, existing DRL methods need a pre-defined reward function to assess the impact of each action on the final sustainable development goals (SDG). In many...
Self-supervised learning has made significant progress in point cloud processing. Currently, the primary tasks of self-supervised learning, which include reconstruction and representation learning, are trained separately due to their structural differences. This separation inevitably leads to increased training costs and neglects the potential for mutual assistance between the tasks. In this paper, a method named PointUR-RL is introduced, which integrates reconstruction and representation learning. The method features two key components: variable masked...
The heterogeneous information network (HIN), which contains rich semantics depicted by meta-paths, has emerged as a potent tool for mitigating data sparsity in recommender systems. Existing HIN-based recommender systems operate under the assumption of centralized storage and model training. However, real-world data is often distributed due to privacy concerns, leading to a semantic broken issue within HINs and consequent failures in recommendation. In this paper, we suggest the HIN is partitioned into private HINs stored on the client...
In data-intensive parallel computing clusters, it is important to provide deadline-guaranteed service for jobs while minimizing resource usage (e.g., network bandwidth and energy). Under the current framework (which first allocates data and then schedules jobs), in a busy cluster with many jobs, it is difficult to achieve high data locality (and hence low network consumption), deadline guarantees, and energy savings simultaneously. We model the problem of simultaneously achieving these three objectives using integer programming. Due to the NP-hardness of...
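Since the integer program is NP-hard, practical systems typically fall back on heuristics. As a baseline for the deadline-guarantee objective alone, here is an earliest-deadline-first sketch on a single machine; it is a generic scheduling heuristic offered for illustration, not the paper's formulation or algorithm:

```python
def edf_schedule(jobs):
    """Earliest-deadline-first on one machine: returns (job, start_time)
    pairs and whether every deadline is met."""
    t, plan, ok = 0, [], True
    for job, runtime, deadline in sorted(jobs, key=lambda j: j[2]):
        plan.append((job, t))
        t += runtime
        ok = ok and t <= deadline      # completion must precede the deadline
    return plan, ok

plan, ok = edf_schedule([("j1", 4, 10), ("j2", 2, 5), ("j3", 3, 12)])
print(plan, ok)  # → [('j2', 0), ('j1', 2), ('j3', 6)] True
```

EDF is optimal for meeting deadlines on one machine, but it says nothing about data locality or energy, which is precisely why the three objectives are hard to satisfy together.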
More and more organizations move their data workloads to commercial cloud storage systems. However, the multiplexing and sharing of resources in such a system present unpredictable access latency to tenants, which may make online data-intensive applications unable to satisfy their deadline requirements. Thus, it is important for cloud storage systems to provide deadline-guaranteed services. In this paper, to meet a current form of service level objective (SLO) that constrains the percentage of each tenant's requests failing to meet its required deadline below a given...
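The SLO form described above can be checked directly; the latency values and thresholds below are made up for the example:

```python
def slo_satisfied(latencies_ms, deadline_ms, max_fail_pct):
    """Check the percentage-style SLO: the share of a tenant's requests
    missing the deadline must stay at or below the given bound."""
    misses = sum(1 for lat in latencies_ms if lat > deadline_ms)
    return 100.0 * misses / len(latencies_ms) <= max_fail_pct

reqs = [8, 12, 9, 30, 7, 11, 10, 9, 8, 10]   # one request misses 25 ms
print(slo_satisfied(reqs, deadline_ms=25, max_fail_pct=10))  # → True
```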
High-precision positioning is the basis of intelligent vehicle-road coordination. The accuracy of traditional satellite positioning systems is insufficient for vehicles in scenarios such as intersection interaction. Differential GPS locators and inertial navigation can significantly improve accuracy, but at a much higher cost. In this paper, an aided positioning method using machine vision to identify road markers through an on-board camera is proposed, which can supplement satellite positioning by letting terminals obtain vector changes of the landscape relative to vehicles. Firstly, based on longitudinal...
As the nodes of an AWSN (Aerial Wireless Sensor Network) fly around, the network topology changes frequently, with high energy consumption and cluster head mortality, and some sensor nodes may move away from their original clusters and interrupt communication. To ensure normal communication in the network, this paper proposes an improved LEACH-M protocol for aerial wireless sensor networks. The protocol is based on the traditional MCR protocol. A cluster head selection method based on maximum energy efficiency and a solution for outlier nodes are proposed, so that cluster heads can be replaced prior to their...
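A minimal sketch of the "replace the head before it dies" idea: when the current head's residual energy drops below a floor, the role passes to the highest-energy member. The energy floor and node values are assumptions for illustration; this is a simplified stand-in for the improved selection rule, not the protocol itself.

```python
def pick_cluster_head(nodes, current_head, energy_floor=0.2):
    """Keep the current head while it has energy to spare; otherwise hand
    over to the cluster member with the highest residual energy."""
    if nodes[current_head] >= energy_floor:
        return current_head
    return max(nodes, key=nodes.get)

nodes = {"n1": 0.15, "n2": 0.80, "n3": 0.55}   # n1 is the depleted head
print(pick_cluster_head(nodes, "n1"))  # → n2
```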