- Advanced Data Storage Technologies
- Distributed and Parallel Computing Systems
- Cloud Computing and Resource Management
- Caching and Content Delivery
- Parallel Computing and Optimization Techniques
- Distributed Systems and Fault Tolerance
- IoT and Edge/Fog Computing
- Advanced Computational Techniques and Applications
- Photochemistry and Electron Transfer Studies
- Data Stream Mining Techniques
- Software System Performance and Reliability
- Advanced Image and Video Retrieval Techniques
- Supramolecular Self-Assembly in Materials
- Scientific Computing and Data Management
- Artificial Intelligence in Healthcare
- Genomic Variations and Chromosomal Abnormalities
- Safety and Risk Management
- Cloud Data Security Solutions
- GNSS Positioning and Interference
- Genomics and Phylogenetic Studies
- Software Reliability and Analysis Research
- Power Systems and Renewable Energy
- RNA and Protein Synthesis Mechanisms
- Neural Networks and Applications
- Ferroelectric and Negative Capacitance Devices
Hefei National Center for Physical Sciences at Nanoscale
2012-2024
University of Science and Technology of China
2009-2024
National University of Defense Technology
2010-2020
National Supercomputing Center of Tianjin
2012-2018
North Minzu University
2014
There is a rapidly increasing amount of de novo genome assembly using next-generation sequencing (NGS) short reads; however, several big challenges remain to be overcome in order for this to be efficient and accurate. SOAPdenovo has been successfully applied to assemble many published genomes, but it still needs improvement in continuity, accuracy, and coverage, especially in repeat regions. To overcome these challenges, we have developed its successor, SOAPdenovo2, which has the advantage of a new algorithm design that reduces...
When users flood into cloud data centers, how to efficiently manage the hardware resources and virtual machines (VMs) of a data center to both lower economic cost and ensure high service quality becomes unavoidable work for providers. VM migration is the cornerstone technology for the majority of management tasks. It frees a VM from the underlying hardware. This feature brings plenty of benefits to providers and users. Many researchers are focusing on pushing its cutting edge. In this paper, we first give an overview and discuss the challenges....
Abstract Directional and dynamic hydrogen bonds are of vital importance for both nucleic acids and proteins, but they naturally appear as strong multiple bonds in pendant groups and weak single bonds in the backbone. The hierarchy and orthogonality of biological systems inspire us to elegantly tailor supramolecular polymeric materials with robust mechanical properties. Herein, this work fabricates ultrastrong and tough materials through bioinspired rational design. Based on quadruple hydrogen-bonding ureidopyrimidinone and amide, a polymer with optimized...
Cloud providers should ensure QoS while maximizing resource utilization. One optimal strategy is to allocate resources in a timely, fine-grained mode according to an application's actual demand. The necessary precondition of this strategy is obtaining future load information in advance. We propose a multi-step-ahead load forecasting method, KSwSVR, based on statistical learning theory, which is suitable for the complex and dynamic characteristics of the cloud computing environment. It integrates an improved support vector regression...
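The abstract is truncated and does not spell out the KSwSVR formulation, so the sketch below only illustrates the general idea of multi-step-ahead load forecasting from a sliding window of history. A plain least-squares autoregressive model stands in for the paper's improved support vector regression; the window size, horizon, and synthetic trace are all illustrative assumptions.

```python
import numpy as np

def make_windows(series, window):
    """Build (X, y) pairs: each row of X is `window` past loads, y the next load."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

def fit_ar(series, window):
    """Least-squares autoregressive model (stand-in for the paper's SVR)."""
    X, y = make_windows(series, window)
    X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def forecast(series, coef, window, steps):
    """Multi-step-ahead: feed each prediction back as input (recursive strategy)."""
    hist = list(series[-window:])
    out = []
    for _ in range(steps):
        x = np.array(hist[-window:] + [1.0])
        pred = float(x @ coef)
        out.append(pred)
        hist.append(pred)
    return out

# Synthetic CPU-load trace: slow upward trend plus an oscillation.
t = np.arange(200)
load = 0.3 + 0.1 * np.sin(t / 10) + 0.001 * t
coef = fit_ar(load, window=12)
print(forecast(load, coef, window=12, steps=5))
```

The recursive strategy shown here is the simplest way to extend a one-step model to multiple steps; its errors compound with the horizon, which is one reason dedicated multi-step methods like the paper's are of interest.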
Summary With the popularity of mobile devices (such as smartphones and tablets) and the development of the Internet of Things, edge computing is envisioned as a promising approach to improving the computation capabilities and energy efficiencies of mobile devices. It deploys cloud data centers at the network edge to lower service latency. To satisfy the latency requirements of applications, virtual machines (VMs) have to be correspondingly migrated between edge data centers because of user mobility. In this paper, we try to minimize the overhead resulting from constantly...
Summary The I/O forwarding layer has now become a standard storage layer in today's HPC systems in order to scale current systems to new levels of concurrency. With the deepening storage hierarchy, I/O requests must traverse several types of nodes to access the required data, including compute nodes, forwarding nodes, and storage nodes. It becomes difficult to control the data path and apply cross-layer optimization. In this paper, we propose a well-coordinated I/O stack, which coordinates between layers for better load balancing and locality with job-level node mapping, lighter...
The performance gap between computation and storage in HPC systems has enlarged with the rapid increase of compute scale. Client-side file caching has been proven to be an effective technique to improve the I/O performance of parallel file systems and is adopted by many systems. Current cache frameworks mainly focus on local thread access frequency, which suffers from inefficiency problems for lack of job-level information. This paper presents SFDC, a caching framework that aims to provide high efficiency using job-level knowledge. SFDC collects...
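SFDC's internals are not given in this truncated abstract; as a minimal sketch of the underlying idea, the snippet below replaces purely thread-local access counts with a shared per-job frequency table used at eviction time. The class and method names are hypothetical, not SFDC's actual API.

```python
from collections import defaultdict

class JobAwareCache:
    """Minimal sketch: evict the cached block whose owning job accesses it least.

    A real job-level framework like SFDC aggregates much richer knowledge;
    here a shared (job, block) frequency table simply replaces local counts.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                       # block_id -> data
        self.job_of = {}                      # block_id -> owning job_id
        self.freq = defaultdict(int)          # (job_id, block_id) -> access count

    def access(self, job_id, block_id, load_fn):
        self.freq[(job_id, block_id)] += 1
        if block_id in self.store:
            return self.store[block_id]       # cache hit
        if len(self.store) >= self.capacity:  # miss at capacity: evict coldest
            victim = min(self.store,
                         key=lambda b: self.freq[(self.job_of[b], b)])
            del self.store[victim]
            del self.job_of[victim]
        data = load_fn(block_id)              # fetch from the parallel file system
        self.store[block_id] = data
        self.job_of[block_id] = job_id
        return data
```

For example, with capacity 2, a block accessed twice by one job survives eviction when a second job brings in a new block, whereas a pure per-thread policy might not see that history.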
As the computing capability of top HPC systems increases towards exascale, I/O traffic from a tremendous number of compute nodes has stretched the underlying storage system to its limit. I/O forwarding was proposed to address such a problem by reducing the number of clients to a much smaller set of forwarding nodes. In this paper, we study load imbalance in the forwarding layer, and find that bursty applications and the commonly existing rank-0 I/O pattern make the load on forwarding nodes highly unbalanced. Application performance is limited if some forwarding nodes become hot spots while others have little...
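The paper's balancing scheme is not detailed in this truncated abstract. As an illustrative sketch only, the snippet below contrasts the default contiguous compute-to-forwarding mapping with a greedy least-loaded assignment driven by per-rank I/O demand estimates; the demand numbers and the rank-0-heavy pattern are hypothetical.

```python
import heapq

def contiguous_mapping(n_compute, n_forward):
    """Default: each forwarding node serves a fixed contiguous slice of ranks."""
    per = n_compute // n_forward
    return [min(i // per, n_forward - 1) for i in range(n_compute)]

def balanced_mapping(demands, n_forward):
    """Greedy: assign the heaviest I/O producers first to the least-loaded forwarder."""
    heap = [(0.0, f) for f in range(n_forward)]   # (current load, forwarder id)
    heapq.heapify(heap)
    mapping = [None] * len(demands)
    for rank in sorted(range(len(demands)), key=lambda r: -demands[r]):
        load, f = heapq.heappop(heap)
        mapping[rank] = f
        heapq.heappush(heap, (load + demands[rank], f))
    return mapping

# Rank-0-heavy pattern: rank 0 performs most of the job's I/O.
demands = [100.0] + [1.0] * 15
m = balanced_mapping(demands, n_forward=4)
loads = [sum(d for r, d in enumerate(demands) if m[r] == f) for f in range(4)]
print(loads)
```

Under the contiguous mapping the forwarder serving ranks 0-3 would carry 103 units while the others carry 4 each; the greedy assignment isolates the hot rank on one forwarder and spreads the rest evenly.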
Job scheduling in HPC systems by default allocates adjacent compute nodes to jobs to lower communication overhead. However, this is no longer applicable to data-intensive jobs running on systems with an I/O forwarding layer, where each forwarding node performs I/O on behalf of a subset of compute nodes in its vicinity. Under this allocation strategy, a job's compute nodes are located close to each other and thus only use a limited number of forwarding nodes. Since I/O activities are bursty, at any moment only a minority of the system's forwarding nodes are busy processing I/O. Consequently, bursty I/O traffic is also concentrated in space, making...
The performance of the storage subsystem of supercomputers cannot meet the demands of complex applications running on them. One of the major causes is that the bandwidth of the hardware has not been utilized efficiently due to complex and changing application I/O behavior. Therefore, I/O characterization tools are vital for the development of an I/O orchestration system. This paper proposes a characterization tool called FTracer. It captures I/O traces and performs analysis at runtime. In order to provide more flexible analysis, FTracer allows users to vary instances...
A new MDS array erasure code, called DA-Code, which can tolerate double disk erasures for highly reliable data storage systems, is proposed in this paper. DA-Code requires only XOR operations and achieves optimal encoding, updating, and decoding complexity. Parity symbols are evenly distributed across the array, overcoming the bottleneck effects of repeated write operations. A detailed DA-Code algorithm for correcting double failures is provided. Analysis results show that the coding scheme has excellent performance. Thus,...
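DA-Code's actual double-erasure construction is not reproduced in this truncated abstract. As a much-simplified sketch of the XOR principle it builds on, the snippet below shows single-parity encoding and single-erasure recovery; a double-erasure code like DA-Code adds a second, independently computed parity set so that any two lost strips can be rebuilt.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length strips."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_parity(strips):
    """Row parity: XOR of all data strips (a double-erasure code adds a second parity)."""
    p = strips[0]
    for s in strips[1:]:
        p = xor_bytes(p, s)
    return p

def recover(strips_with_hole, parity, lost_index):
    """Rebuild one lost strip: XOR the parity with every surviving strip."""
    rebuilt = parity
    for i, s in enumerate(strips_with_hole):
        if i != lost_index:
            rebuilt = xor_bytes(rebuilt, s)
    return rebuilt

data = [b"disk0", b"disk1", b"disk2"]
p = encode_parity(data)
degraded = [None if i == 1 else d for i, d in enumerate(data)]
print(recover(degraded, p, 1))  # rebuilds b"disk1"
```

Because every operation is an XOR, encoding and recovery need no finite-field arithmetic, which is the property that makes XOR-only array codes attractive for storage hardware.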
With the amount of data growing constantly and exponentially, industry has encountered an unprecedented challenge in efficiently and reliably processing tremendous amounts of data. High performance computers have played a major role in the field of big data for their serious computational power and super-large storage. However, some inevitable drawbacks remain in utilizing HPC due to its relatively lower availability and usability. We propose and implement a MapReduce framework on HPC to solve the above problems and extensively expand the application of HPC. The design...
Abstract Elastomers have been widely employed in various industrial products such as tires, actuators, dampers, and sealants. While methods have been developed to strengthen elastomers, achieving continuously high energy dissipation with fast room‐temperature recovery remains challenging, prompting the need for further structural optimization. Herein, highly dissipative yet recoverable double‐network (DN) elastomers are fabricated, in which supramolecular polymers of complementary adenine and thymine serve as the first network...
To address the challenges of redundancy, multi-dimensionality, complexity, and heterogeneity in medical documents, and to solve the problem that the value hidden in huge amounts of document data cannot be mined, this paper proposes a system called MSPM based on NoSQL and MapReduce. Through key-value pair storage, complex data are summed up into a unified transaction format convenient for Apriori. Then Apriori is executed in parallel through MapReduce. Finally, with strategies of generating all candidate sets non-recursively and constraint...
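The abstract stops before describing MSPM's parallelization in detail, so the sketch below only illustrates the general pattern of counting Apriori candidates with map and reduce phases, run here sequentially in plain Python; the transactions and support threshold are made up for illustration.

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, candidates):
    """Mapper: emit (candidate, 1) for each candidate itemset contained in a transaction."""
    for t in transactions:
        tset = set(t)
        for c in candidates:
            if set(c) <= tset:
                yield c, 1

def reduce_phase(pairs):
    """Reducer: sum the emitted counts per candidate."""
    counts = Counter()
    for c, n in pairs:
        counts[c] += n
    return counts

def apriori_level(transactions, prev_frequent, k, min_support):
    """Generate size-k candidates from frequent (k-1)-itemsets, then count via map/reduce."""
    items = sorted({i for fs in prev_frequent for i in fs})
    candidates = [c for c in combinations(items, k)
                  if all(frozenset(s) in prev_frequent for s in combinations(c, k - 1))]
    counts = reduce_phase(map_phase(transactions, candidates))
    return {frozenset(c) for c, n in counts.items() if n >= min_support}

txns = [("milk", "bread"), ("milk", "bread", "eggs"),
        ("bread", "eggs"), ("milk", "eggs")]
counts1 = reduce_phase(map_phase(txns, [(i,) for i in ("milk", "bread", "eggs")]))
freq1 = {frozenset(c) for c, n in counts1.items() if n >= 2}
print(apriori_level(txns, freq1, 2, min_support=2))
```

In a real MapReduce deployment the mapper runs on partitions of the transaction data and the framework shuffles the (candidate, count) pairs to reducers; only the counting step changes, not the candidate-generation logic.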
Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables the application...
In recent years, many efforts have been made in the quest of digital city construction. In this paper, we attempt to realize our ideas in a presentation system based on some cities in China. In the system, two-dimensional topic map services and three-dimensional scenes are provided through WMS (Web Map Service) with the help of the City Maker and ArcGIS suites, and several real management systems that serve as part of e-government are integrated, along with precise and delicate models as well as special functions, making our work different from previous...
Commodity graphics processing units (GPUs) have rapidly evolved into high-performance accelerators for data-parallel computing through a large array of cores and the CUDA programming model with a C-like interface. However, optimizing an application for maximum performance based on the GPU architecture is not a trivial task given the tremendous change from conventional multi-core to many-core architectures. Besides, vendors do not disclose much detail about the characteristics of the GPU's architecture. To provide insights into...