- Advanced Data Storage Technologies
- Cloud Data Security Solutions
- Caching and Content Delivery
- Cloud Computing and Resource Management
- Parallel Computing and Optimization Techniques
- Opportunistic and Delay-Tolerant Networks
- Cryptography and Data Security
- Satellite Communication Systems
- Privacy-Preserving Technologies in Data
- Mobile Agent-Based Network Management
- Distributed and Parallel Computing Systems
- Network Traffic and Congestion Control
- Mobile Ad Hoc Networks
- Network Security and Intrusion Detection
- Advanced Optical Network Technologies
- IoT and Edge/Fog Computing
- Peer-to-Peer Network Technologies
- Advanced Database Systems and Queries
- Internet Traffic Analysis and Secure E-voting
- Distributed Systems and Fault Tolerance
- Technology and Security Systems
- Cryptographic Implementations and Security
- Chaos-based Image/Signal Encryption
- Gene Expression and Cancer Classification
- Software-Defined Networks and 5G
Sun Yat-sen University
2021-2024
PLA Army Engineering University
2015-2021
Peng Cheng Laboratory
2021
National University of Defense Technology
2008-2018
University of Nebraska–Lincoln
2011-2012
The rapid development of proteomics studies has resulted in large volumes of experimental data. The emergence of big data platforms provides the opportunity to handle these large amounts of data. An integrated proteome resource, iProX (https://www.iprox.cn), which was initiated in 2017, has been greatly improved with an up-to-date implementation in 2021. Here, we describe the main developments of iProX since its first publication in Nucleic Acids Research in 2019. First, a hyper-converged architecture with high scalability supports the submission process. A...
This letter proposes a graph-based satellite handover framework for low earth orbit (LEO) satellite communication networks. In order to maintain the connection with its communicating counterpart, a user has to switch between the consecutive satellites covering that very user. A directed graph, with a coverage period of a satellite as its node and a link representing a possible handover between two overlapping periods, is calculated in advance; the handover process can then be viewed as finding a path in the graph. By setting the link weights according to different criteria, the framework can support a variety of handover strategies,...
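Since this abstract reduces handover planning to path-finding over a graph of coverage periods, a minimal sketch of that idea follows. All coverage intervals, the unit edge weight (which minimizes the number of handovers), and the identifiers are illustrative assumptions, not values from the letter.

```python
import heapq

# Hypothetical coverage periods (satellite_id, start_s, end_s); a directed
# edge u -> v exists when v's window overlaps u's tail and extends past it,
# i.e. a handover from u to v is feasible.
periods = [
    ("S1", 0, 420), ("S2", 300, 780), ("S3", 350, 700),
    ("S4", 650, 1100), ("S5", 1000, 1500),
]

def build_graph(periods):
    graph = {i: [] for i in range(len(periods))}
    for i, (_, _, e1) in enumerate(periods):
        for j, (_, s2, e2) in enumerate(periods):
            if i != j and s2 < e1 < e2:
                graph[i].append((j, 1.0))   # unit weight: minimize handovers
    return graph

def shortest_handover_path(graph, src, dst):
    """Plain Dijkstra over the handover graph."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return [periods[n][0] for n in reversed(path)]

print(shortest_handover_path(build_graph(periods), 0, 4))
# ['S1', 'S2', 'S4', 'S5']
```

Swapping the unit weight for, say, remaining coverage time or elevation angle would realize the other handover criteria the framework supports.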
The market for cloud backup services in the personal computing environment is growing due to large volumes of valuable personal and corporate data being stored on desktops, laptops and smart phones. Source deduplication has become a mainstay of this market because it saves network bandwidth and reduces storage space. However, there are two challenges facing this service for its clients: (1) low deduplication efficiency due to a combination of the resource-intensive nature of deduplication and the limited system resources at the PC-based client site, and (2) low data transfer efficiency, since post-deduplication transfers from...
Data deduplication has been demonstrated to be an effective technique for reducing the total data transferred over the network and the storage space required in cloud backup, archiving, and primary storage systems, such as VM (virtual machine) platforms. However, the performance of restore operations from a deduplicated backup can be significantly lower than that without deduplication. The main reason lies in the fact that a file or block is split into multiple small chunks that are often located on different disks after deduplication, which can cause...
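To make the fragmentation argument concrete, here is a toy model of a sequential restore: the file recipe references chunks scattered across storage containers, and the number of container fetches depends on how much can be cached. The recipe contents, the LRU policy, and the numbers are assumptions for illustration only.

```python
from collections import OrderedDict

# File recipe after dedup: ordered (chunk_id, container_id) pairs. Shared
# chunks point into containers written by earlier backups, so a sequential
# read of the recipe jumps between containers.
recipe = [(0, "C1"), (1, "C1"), (2, "C7"), (3, "C2"), (4, "C7"), (5, "C1")]

def container_reads(recipe, cache_size):
    """Count container fetches during a sequential restore with an LRU
    container cache of the given size."""
    cache, fetches = OrderedDict(), 0
    for _, cid in recipe:
        if cid in cache:
            cache.move_to_end(cid)          # cache hit, no disk access
        else:
            fetches += 1                    # fragmented read: fetch container
            cache[cid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return fetches

print(container_reads(recipe, cache_size=1))  # 5 fetches: heavy fragmentation
print(container_reads(recipe, cache_size=3))  # 3 fetches: cache absorbs re-reads
```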
In personal computing devices that rely on a cloud storage environment for data backup, an imminent challenge facing source deduplication backup services is the low deduplication efficiency due to the combination of its resource-intensive nature and the limited system resources. In this paper, we present ALG-Dedupe, an Application-aware Local-Global deduplication scheme that improves efficiency by exploiting application awareness, and further combines local and global duplicate detection to strike a good balance between capacity saving and deduplication time reduction. We perform...
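A minimal sketch of local-then-global duplicate detection with application awareness follows; the per-filetype grouping, the index layout, and all names here are assumptions rather than ALG-Dedupe's actual design.

```python
import hashlib

local_index = {}    # small client-side index, one entry set per app group
global_index = {}   # stand-in for the cloud-side fingerprint index

def app_type(filename):
    # Application awareness: fingerprints are only compared within the
    # same application group (approximated here by file extension).
    return filename.rsplit(".", 1)[-1].lower()

def dedupe_chunk(filename, chunk: bytes):
    fp = hashlib.sha1(chunk).hexdigest()
    group = app_type(filename)
    local = local_index.setdefault(group, set())
    if fp in local:                        # cheap local hit, no network trip
        return "local-duplicate"
    if fp in global_index.setdefault(group, set()):
        local.add(fp)                      # cache the global hit locally
        return "global-duplicate"
    local.add(fp)                          # unique: upload chunk and
    global_index[group].add(fp)            # register its fingerprint
    return "unique"

print(dedupe_chunk("a.docx", b"hello"))    # unique
print(dedupe_chunk("b.docx", b"hello"))    # local-duplicate
```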
Deduplication has become a widely deployed technology in cloud data centers to improve the efficiency of IT resources. However, traditional deduplication techniques face a great challenge in big data deduplication: striking a sensible tradeoff between the conflicting goals of scalable throughput and a high duplicate elimination ratio. We propose AppDedupe, an application-aware inline distributed deduplication framework for the cloud environment, to meet this challenge by exploiting application awareness, similarity and locality to optimize distributed deduplication with inter-node two-tiered routing...
The past few years have witnessed the prosperity of the mobile Internet. The increased traffic with diverse QoS requirements calls for a more capable and flexible network infrastructure. Satellite systems with global coverage capability can be used as a backbone network to interconnect autonomous systems worldwide. In this context, routing data over satellite networks is of great importance considering the integration with terrestrial Internet protocol networks. This paper provides a systematic design of a geostationary orbit/low earth orbit (LEO)...
The explosive growth of digital content results in enormous strains on the storage systems in the cloud environment. Data deduplication technology has been demonstrated to be very effective in shortening the backup window and saving network bandwidth and storage space in cloud backup, archiving, and primary storage systems, such as VM platforms. However, the delay and power consumption of restore operations from a deduplicated backup can be significantly higher than those without deduplication. The main reason lies in the fact that a file or block is split into multiple small...
Existing primary storage deduplication techniques either use inline caching to exploit locality in workloads or use post-processing deduplication to avoid a negative impact on I/O performance. However, neither of them works well in cloud servers running multiple services, for the following two reasons: First, the temporal locality of duplicate data writes varies among storage workloads, which makes it challenging to efficiently allocate cache space and achieve a good deduplication ratio. Second, post-processing deduplication does not eliminate write operations that write the same logical block...
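The contrast between the two baselines can be seen in a toy model: inline deduplication consults a bounded fingerprint cache before each write, while post-processing writes everything first and reclaims duplicates later. The workload, cache policy, and sizes below are hypothetical.

```python
import hashlib

def inline_dedup(writes, cache_capacity):
    cache, stored = [], 0                       # list doubles as a tiny LRU
    for block in writes:
        fp = hashlib.sha1(block).hexdigest()
        if fp in cache:
            cache.remove(fp)
            cache.append(fp)                    # refresh, duplicate skipped
            continue
        stored += 1                             # miss: the block hits the disk
        cache.append(fp)
        if len(cache) > cache_capacity:
            cache.pop(0)                        # evict least recently used
    return stored

def post_process_dedup(writes):
    written = len(writes)                       # every write lands on disk first
    unique = len({hashlib.sha1(b).hexdigest() for b in writes})
    return written, unique                      # duplicates reclaimed off-line

workload = [b"A", b"B", b"A", b"C", b"A", b"B"]
print(inline_dedup(workload, cache_capacity=2))  # 4: recurring B still misses
print(post_process_dedup(workload))              # (6, 3): extra on-disk churn
```

When the workload has weak temporal locality, the inline cache misses even for blocks that are duplicates, which is exactly the first problem the abstract names.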
The healthcare industry faces challenges regarding the security and efficiency of data management for patient health records (PHRs). We propose SafePHR: a secure and efficient medical service with application-aware deduplication in fog-to-multicloud encrypted storage. It introduces a cooperative storage model, which combines a low-latency, high-safety local fog with unlimited-capacity, built-in-disaster-recovery remote multi-cloud storage to enhance eHealth data management. It also provides an application-aware data reduction scheme to improve traffic and space...
In large distributed storage systems, metadata is usually managed separately by a metadata server (MDS) cluster. The partitioning of the metadata among the servers is of critical importance for maintaining efficient MDS operation and a desirable load distribution across the cluster. We present a dynamic directory partitioning (DDP) management scheme, in which files are managed in different ways and a dynamically changing workload can be balanced by adjusting the partitioning on the servers. Our simulation results show that our approach, compared with other strategies, has advantages in performance,...
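A rough sketch of directory-granularity metadata partitioning with load-driven migration is shown below; the placement policy, the imbalance threshold, and the class layout are assumptions, not DDP's actual mechanics.

```python
from collections import defaultdict

class MetadataCluster:
    def __init__(self, servers):
        self.servers = list(servers)
        self.dir_map = {}                       # directory -> owning MDS
        self.load = defaultdict(int)            # MDS -> ops handled

    def server_for(self, path):
        d = path.rsplit("/", 1)[0] or "/"
        if d not in self.dir_map:               # place a new directory on
            self.dir_map[d] = min(self.servers, # the least-loaded MDS
                                  key=self.load.__getitem__)
        return self.dir_map[d]

    def record_op(self, path):
        self.load[self.server_for(path)] += 1

    def rebalance(self, threshold=2.0):
        """Migrate one hot directory if imbalance exceeds the threshold."""
        hot = max(self.servers, key=self.load.__getitem__)
        cold = min(self.servers, key=self.load.__getitem__)
        if self.load[cold] and self.load[hot] / self.load[cold] > threshold:
            victim = next(d for d, s in self.dir_map.items() if s == hot)
            self.dir_map[victim] = cold         # directory moves to cold MDS

cluster = MetadataCluster(["mds0", "mds1"])
for _ in range(8):
    cluster.record_op("/home/alice/paper.tex")  # hot directory piles on mds0
cluster.record_op("/var/log/syslog")            # new dir placed on idle mds1
cluster.rebalance()                             # hot /home/alice may migrate
```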
This paper proposes and evaluates agent-based dynamic routing (ADR) in packet-switched low earth orbit (LEO) satellite networks. There are two kinds of agents: roaming agents and fixed agents. Roaming agents walk randomly around the network to gather the latest link status information of the traversed satellites and at the same time transfer it to the satellites they pass. Meanwhile, fixed agents located on each satellite estimate link costs based on the information obtained from roaming agents, take advantage of the predictable property of satellite networks, and update routing tables using the HALO algorithm already proposed for...
Public cloud storage can provide customers with unlimited capacity and built-in disaster recovery by exploiting IT resources in the data center, but it suffers from high latency and security risk. The emergence of fog computing can mitigate this risk by leveraging a local private server with low latency. We introduce F2MC: a Fog-to-MultiCloud hybrid service that combines local fog and remote cloud storage to enhance the quality of service (QoS) of data management. It provides an application-aware data reduction scheme to improve traffic and space efficiency with deduplication and compression...
The exponential growth of data has brought a tremendous challenge to the storage system in the data center. Data deduplication technology, which detects and eliminates redundant data in a dataset, can greatly reduce the data quantity and optimize the utilization of storage space. This paper presents a scalable and reliable deduplication cluster, Halodedu, built over a Hadoop-based cloud computing platform. It uses MapReduce and HDFS to realize parallel processing and manage storage, respectively. An intra-node local database is built up for a fast distributed chunk fingerprint index...
Phase Change Memory (PCM) can directly connect persistent memory to the main memory bus; while it achieves high read throughput and low standby power, the critical concerns are its poor write performance and limited durability. A naturally inspired design is a hybrid architecture that fuses DRAM and PCM, so as to exploit the positive aspects of both types of memory. Unfortunately, existing solutions are seriously challenged by the limited DRAM size, which is the primary bottleneck of in-memory computing. In this paper, we introduce a novel Content...
Eliminating duplicate data in the primary storage of clouds increases the cost-efficiency of cloud service providers as well as reduces the cost to users for using cloud services. Existing primary deduplication techniques either use inline caching to exploit locality in workloads or use post-processing deduplication running in system idle time to avoid a negative impact on I/O performance. However, neither of them works well in cloud servers running multiple services or applications, for the following two reasons: Firstly, the temporal locality of duplicate data writes may not exist in some workloads, and thus inline caching often fails to achieve a good...
With the development of big data and multi-core processor technology, DRAM-only main memory cannot satisfy the requirements of in-memory computing for high capacity and low energy consumption. The emerging memory technology, phase change memory (PCM), has been proposed to break the bottleneck of the current memory system. However, its weaknesses of limited write endurance and long access latency prevent it from fully replacing DRAM. Consequently, researchers have presented architectural designs aimed at DRAM/PCM hybrids and corresponding page migration schemes to give full...
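The page-migration idea reads roughly as follows in code: write-hot pages are promoted from PCM to a small DRAM tier, and the coldest DRAM page is demoted when space runs out. The threshold and the bookkeeping are illustrative assumptions, not any specific paper's scheme.

```python
WRITE_THRESHOLD = 4   # assumed promotion threshold

class HybridMemory:
    def __init__(self, dram_pages):
        self.dram_capacity = dram_pages
        self.in_dram = set()                   # pages currently served by DRAM
        self.writes = {}                       # page -> write count

    def write(self, page):
        self.writes[page] = self.writes.get(page, 0) + 1
        if page not in self.in_dram and self.writes[page] >= WRITE_THRESHOLD:
            self._migrate_to_dram(page)        # shield PCM from hot writes

    def _migrate_to_dram(self, page):
        if len(self.in_dram) >= self.dram_capacity:
            coldest = min(self.in_dram, key=lambda p: self.writes.get(p, 0))
            self.in_dram.remove(coldest)       # demote coldest page to PCM
        self.in_dram.add(page)

mem = HybridMemory(dram_pages=2)
for p in [1, 1, 1, 1, 2, 3, 3, 3, 3]:
    mem.write(p)
print(sorted(mem.in_dram))                     # [1, 3]: the write-hot pages
```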
With the further research of big data management, plenty of components for data management have been developed. Based on the Hadoop platform, these components provide solutions at different levels, and the Hadoop ecosystem has gradually taken its shape. However, users usually lack knowledge about the features of these components, such as I/O patterns, capability, application scenarios and so on. When dealing with some problems, components are often chosen by the user's experience, and this will definitely lead to a mismatch between demands and tools. Thus, a platform...
The inline cluster deduplication technique has been widely used in data centers to improve storage efficiency. The data routing algorithm has a crucial impact on the deduplication factor, throughput and scalability of the system. In this paper, we propose a stateful data routing algorithm called DS-Dedupe. To make full use of the similarity of data streams, DS-Dedupe builds up a super-chunk granularity index for each client to trace the super-chunks that have been routed. It then calculates a similarity coefficient accordingly to determine whether a new super-chunk should be assigned directly or by consistent...
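A compact sketch of the routing decision described here: send a super-chunk to the most similar node when the similarity coefficient clears a threshold, otherwise fall back to consistent hashing. The ring construction, the representative-fingerprint choice, and the threshold are assumptions, not DS-Dedupe's published details.

```python
import bisect
import hashlib

class Ring:
    """Plain consistent-hash ring over node names with virtual nodes."""
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (int(hashlib.md5(f"{n}#{i}".encode()).hexdigest(), 16), n)
            for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def route(self, fingerprint):
        i = bisect.bisect(self.keys, int(fingerprint, 16)) % len(self.ring)
        return self.ring[i][1]

def route_super_chunk(fps, node_indices, ring, threshold=0.5):
    """fps: chunk fingerprints in the super-chunk (hex strings);
    node_indices: node -> set of fingerprints already routed there."""
    best, best_sim = None, 0.0
    for node, seen in node_indices.items():
        sim = len(set(fps) & seen) / len(fps)   # similarity coefficient
        if sim > best_sim:
            best, best_sim = node, sim
    if best_sim >= threshold:
        return best                             # stateful, similarity-directed
    return ring.route(min(fps))                 # stateless fallback

ring = Ring(["node-a", "node-b", "node-c"])
index = {"node-a": {"aa12", "bb34"}, "node-b": {"ff00"}}
print(route_super_chunk(["aa12", "bb34", "cc56"], index, ring))  # node-a
```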
The hop-limited adaptive routing (HLAR) mechanism and its enhancement (EHLAR), both tailored for packet-switched non-geostationary (NGEO) satellite networks, are proposed and evaluated. The mechanisms exploit the predictable topology and inherent multi-path property of NGEO networks to adaptively distribute traffic via all feasible neighboring satellites. Specifically, they assume that a satellite can send packets toward their destinations via any neighboring satellites; thus each link toward the destination is assigned a probability proportional to its effective...
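The probability-proportional splitting can be sketched as a weighted random choice over feasible neighbors; the "effective capacity" metric and the numbers below are stand-ins for whatever per-link metric HLAR actually assigns.

```python
import random

def pick_next_hop(neighbors):
    """neighbors: list of (satellite_id, link_metric) pairs; the next hop is
    drawn with probability proportional to the metric."""
    total = sum(metric for _, metric in neighbors)
    r = random.uniform(0, total)
    acc = 0.0
    for sat, metric in neighbors:
        acc += metric
        if r <= acc:
            return sat
    return neighbors[-1][0]   # guard against floating-point rounding

# Traffic splits roughly 3:1 between the two feasible neighbors.
choices = [pick_next_hop([("up", 30.0), ("right", 10.0)]) for _ in range(10000)]
print(choices.count("up") / len(choices))   # ~0.75
```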