- Radiomics and Machine Learning in Medical Imaging
- Privacy-Preserving Technologies in Data
- Artificial Intelligence in Healthcare and Education
- AI in cancer detection
- COVID-19 diagnosis using AI
- Genomics and Phylogenetic Studies
- RNA and protein synthesis mechanisms
- Genomics and Chromatin Dynamics
- Pulmonary Hypertension Research and Treatments
- Medical Imaging and Analysis
- Cryptography and Data Security
- Cutaneous Melanoma Detection and Management
- Genital Health and Disease
- Cancer Genomics and Diagnostics
- Lung Cancer Diagnosis and Treatment
- Hemodynamic Monitoring and Therapy
- Action Observation and Synchronization
- Fractal and DNA sequence analysis
- Artificial Intelligence in Healthcare
- Plant Disease Resistance and Genetics
- Advanced Image Processing Techniques
- Plant-Microbe Interactions and Immunity
- Venous Thromboembolism Diagnosis and Management
- Plant nutrient uptake and metabolism
- Cell Image Analysis Techniques
King Abdullah University of Science and Technology
2020-2025
University of Science and Technology
2021
Korea University of Science and Technology
2021
Southern University of Science and Technology
2021
Bioscience Research
2021
Institute of Vegetables and Flowers
2013
Chinese Academy of Agricultural Sciences
2013
COVID-19 has caused a global pandemic and become the most urgent threat to the entire world. Tremendous efforts and resources have been invested in developing diagnosis, prognosis and treatment strategies to combat the disease. Although nucleic acid detection has mainly been used as the gold standard to confirm this RNA virus-based disease, it has been shown that such a strategy has a high false negative rate, especially for patients at the early stage, and thus CT imaging has been applied as a major diagnostic modality in confirming positive COVID-19. Despite various,...
Abstract Large language models (LLMs) have recently been seen to have tremendous potential in advancing medical diagnosis, particularly dermatological diagnosis, which is a very important task as skin and subcutaneous diseases rank high among the leading contributors to the global burden of nonfatal diseases. Here we present SkinGPT-4, an interactive dermatology diagnostic system based on multimodal large language models. We aligned a pre-trained vision transformer with an LLM named Llama-2-13b-chat by collecting...
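The alignment step mentioned above (pairing a pre-trained vision transformer with a frozen LLM) is commonly realized with a small trainable projection that maps image-patch embeddings into the LLM's token-embedding space, so that "visual tokens" can be prepended to the text prompt. The sketch below illustrates that general pattern in PyTorch; the module, dimensions, and names are hypothetical and this is not the SkinGPT-4 implementation.

```python
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    """Minimal sketch: project frozen ViT patch embeddings into the token-embedding
    space of a frozen LLM. Dimensions are illustrative, not SkinGPT-4's configuration."""

    def __init__(self, vit_dim: int = 1024, llm_dim: int = 5120):
        super().__init__()
        # The only trainable part during alignment: a linear projection.
        self.proj = nn.Linear(vit_dim, llm_dim)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, num_patches, vit_dim) from a frozen ViT
        return self.proj(patch_embeddings)  # (batch, num_patches, llm_dim)


if __name__ == "__main__":
    projector = VisionToLLMProjector()
    fake_patches = torch.randn(2, 257, 1024)   # e.g. CLS token + 256 patches
    image_tokens = projector(fake_patches)      # pseudo "visual tokens"
    text_tokens = torch.randn(2, 32, 5120)      # embedded text prompt
    llm_input = torch.cat([image_tokens, text_tokens], dim=1)
    print(llm_input.shape)                      # (2, 289, 5120)
```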
Modern machine learning models for various tasks in omic data analysis give rise to threats of privacy leakage for the patients involved in those datasets. Here, we proposed a secure and privacy-preserving machine learning method (PPML-Omics) by designing a decentralized differentially private federated learning algorithm. We applied PPML-Omics to analyze data from three sequencing technologies and addressed the privacy concern in major tasks under representative deep learning models. We examined privacy breaches in depth through privacy attack experiments and demonstrated that PPML-Omics could...
Abstract Tremendous efforts have been made to improve the diagnosis and treatment of COVID-19, but knowledge on long-term complications is limited. In particular, a large portion of survivors has respiratory complications, but currently, even experienced radiologists and state-of-the-art artificial intelligence systems are not able to detect many abnormalities from follow-up computerized tomography (CT) scans of COVID-19 survivors. Here we propose Deep-LungParenchyma-Enhancing (DLPE), a computer-aided detection (CAD)...
Abstract Large language models (LLMs) have recently been seen to have tremendous potential in advancing medical diagnosis. However, it is important to note that most current LLMs are limited to text interaction alone. Meanwhile, the development of multimodal large language models for medical diagnosis is still in its early stages, particularly considering the prevalence of image-based data in the field of diagnosis, among which dermatological diagnosis is a very important task as skin and subcutaneous diseases rank high among the leading contributors to the global burden of nonfatal diseases. Inspired by...
Pulmonary artery-vein segmentation is critical for disease diagnosis and surgical planning. Traditional methods rely on Computed Tomography Angiography (CTPA), which requires contrast agents with potential health risks. Non-contrast CT, a safer and more widely available approach, however, has long been considered impossible for this task. Here we propose High-abundant Artery-vein Segmentation (HiPaS), enabling accurate segmentation across both non-contrast CT and CTPA at multiple resolutions. HiPaS integrates...
Foundation models have attracted significant attention for their impressive generalizability across diverse downstream tasks. However, they have been demonstrated to exhibit great limitations in representing high-frequency components and fine-grained details. In many medical imaging tasks, precise representation of such information is crucial due to the inherently intricate anatomical structures, sub-visual features, and complex boundaries involved. Consequently, the limited representation of prevalent foundation models can result...
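To make the notion of "high-frequency components" concrete, the snippet below measures how much of an image's spectral energy lies above a chosen cut-off frequency with a plain 2D FFT. It is a generic illustration of the concept, not part of the cited work; the cut-off value and toy images are arbitrary.

```python
import numpy as np

def high_frequency_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above `cutoff` (as a fraction of Nyquist).
    Generic illustration of 'high-frequency content'; not from the paper."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(spectrum) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return float(power[radius > cutoff].sum() / power.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.outer(np.sin(np.linspace(0, np.pi, 128)),
                      np.sin(np.linspace(0, np.pi, 128)))
    detailed = smooth + 0.3 * rng.standard_normal((128, 128))
    print(high_frequency_ratio(smooth), high_frequency_ratio(detailed))
```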
Abstract Revoking personal private data is one of the basic human rights. However, such a right is often overlooked or infringed upon due to the increasing collection and use of patient data for model training. In order to secure patients' right to be forgotten, we proposed a solution by using auditing to guide the forgetting process, where auditing means determining whether a dataset has been used to train a model and forgetting requires the information of a query dataset to be forgotten from the target model. We unified these two tasks by introducing an approach called knowledge...
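An "audit then forget" pipeline of this general kind can be pictured in two steps: audit whether the query data was likely part of training (e.g., via a membership-inference-style check on the model's loss), then erase its influence (e.g., via a few gradient-ascent steps on that data). The code below sketches those two generic steps only; it is not the specific method proposed in the paper, and the threshold and model are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def audit_membership(model, x, y, threshold=0.5):
    """Crude audit: unusually low loss on the query set suggests it was in the
    training data. A generic membership-inference heuristic, not the paper's."""
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(x), y).item()
    return loss < threshold, loss

def forget(model, x, y, steps=5, lr=1e-3):
    """Crude forgetting: a few gradient-ascent steps on the data to be forgotten."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = -F.cross_entropy(model(x), y)   # ascend the loss on the query data
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    model = nn.Linear(16, 3)
    x, y = torch.randn(32, 16), torch.randint(0, 3, (32,))
    print("member before forgetting?", audit_membership(model, x, y))
    forget(model, x, y)
    print("member after forgetting? ", audit_membership(model, x, y))
```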
Abstract Modern machine learning models for various tasks in omic data analysis give rise to threats of privacy leakage for the patients involved in those datasets. Despite the advances of different privacy technologies, existing methods tend to introduce too much computational cost (e.g., cryptographic methods) or too much noise (e.g., differential privacy), which hampers either model usefulness or accuracy in protecting biological data. Here, we proposed a secure and privacy-preserving machine learning method (PPML-Omics) by designing...
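The core mechanism named in these two entries, a decentralized, differentially private federated algorithm, typically amounts to each site clipping its local update and adding calibrated Gaussian noise before updates are exchanged and averaged without a central server. The sketch below shows that standard DP-SGD-style pattern; it is a schematic under those assumptions, not the PPML-Omics implementation, and all names and noise levels are illustrative.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_private_update(model, data, labels, clip=1.0, noise_std=0.5, lr=0.1):
    """One site's update: compute gradients, clip their norm, add Gaussian noise.
    Schematic DP-SGD-style step; not the PPML-Omics algorithm itself."""
    model = copy.deepcopy(model)
    loss = F.cross_entropy(model(data), labels)
    grads = torch.autograd.grad(loss, model.parameters())
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, clip / (total_norm + 1e-12))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            noisy_g = g * scale + noise_std * clip * torch.randn_like(g)
            p -= lr * noisy_g
    return model.state_dict()

def decentralized_average(states):
    """Average parameters across sites (shown directly here; in a truly
    decentralized setting the exchange would be peer-to-peer)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

if __name__ == "__main__":
    global_model = nn.Linear(20, 2)
    sites = [(torch.randn(64, 20), torch.randint(0, 2, (64,))) for _ in range(3)]
    for _ in range(3):  # a few federated rounds
        states = [local_private_update(global_model, x, y) for x, y in sites]
        global_model.load_state_dict(decentralized_average(states))
```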
Abstract Revoking personal private data is one of the basic human rights, which has already been sheltered by several privacy-preserving laws in many countries. However, with the development of data science, machine learning and deep learning techniques, this right is usually neglected or violated as more and more patients' data are being collected and used for model training, especially in intelligent healthcare, thus making intelligent healthcare a sector where technology must meet the law, regulations, and privacy principles to ensure that the innovation...
Natural language processing (NLP) is central to communication with machines and among ourselves, and the NLP research field has long sought to produce human-quality language. Identification of informative criteria for measuring the quality of NLP-produced language will support the development of ever-better tools. The authors hypothesize that neural activity in the mentalizing network may be used to distinguish NLP-produced language from human-produced language, even in cases where human judges cannot subjectively distinguish the source. Using the social chatbots Google...
Abstract Heterogeneous data is endemic due to the use of diverse models and settings of devices by hospitals in the field of medical imaging. However, there are few open-source frameworks for federated heterogeneous medical image analysis with personalization and privacy protection simultaneously, without the demand to modify existing model structures or to share any private data. In this paper, we proposed PPPML-HMI, an open-source learning paradigm for personalized and privacy-preserving heterogeneous medical image analysis. To the best of our knowledge, personalization and privacy protection were achieved simultaneously for the first...
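A common way to obtain the combination of personalization and privacy described above is to federate only a shared backbone while each hospital keeps its own head (and its raw data) local. The sketch below illustrates that split as a generic personalized-FedAvg pattern; it is not the PPPML-HMI framework, and every name and dimension in it is hypothetical.

```python
import copy
import torch
import torch.nn as nn

class HospitalModel(nn.Module):
    """Shared backbone (federated) + personal head (kept local per hospital)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
        self.head = nn.Linear(16, 2)          # personalized, never averaged

    def forward(self, x):
        return self.head(self.backbone(x))

def average_backbones(models):
    """FedAvg over the shared backbone only; heads and raw data stay on site."""
    avg = copy.deepcopy(models[0].backbone.state_dict())
    for key in avg:
        avg[key] = torch.stack([m.backbone.state_dict()[key] for m in models]).mean(0)
    for m in models:
        m.backbone.load_state_dict(avg)

if __name__ == "__main__":
    hospitals = [HospitalModel() for _ in range(3)]
    # ... each hospital trains locally on its own private data here ...
    average_backbones(hospitals)   # only backbone weights are exchanged
```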
Pulmonary artery-vein segmentation is crucial for diagnosing pulmonary diseases and surgical planning, and is traditionally achieved by Computed Tomography Angiography (CTPA). However, concerns regarding adverse health effects from the contrast agents used in CTPA have constrained its clinical utility. In contrast, identifying arteries and veins using non-contrast CT, a conventional, low-cost clinical examination routine, has long been considered impossible. Here we propose High-abundant Artery-vein Segmentation...
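The multi-resolution aspect mentioned in the HiPaS entries can be pictured as running a segmentation network on both a down-sampled and a full-resolution volume and fusing the two probability maps. The snippet below is only a schematic of that idea with a stand-in network; it does not reproduce the HiPaS architecture, and all shapes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in 3D segmentation network (HiPaS itself is far more elaborate).
net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 3, 1))       # 3 classes: background/artery/vein

def segment_multires(volume: torch.Tensor) -> torch.Tensor:
    """Fuse coarse (half-resolution) and fine (full-resolution) predictions."""
    coarse_in = F.interpolate(volume, scale_factor=0.5, mode="trilinear",
                              align_corners=False)
    coarse = F.interpolate(net(coarse_in), size=volume.shape[2:],
                           mode="trilinear", align_corners=False)
    fine = net(volume)
    probs = torch.softmax(0.5 * (coarse + fine), dim=1)
    return probs.argmax(dim=1)                # voxel-wise artery/vein labels

if __name__ == "__main__":
    ct = torch.randn(1, 1, 32, 64, 64)        # (batch, channel, D, H, W)
    print(segment_multires(ct).shape)          # (1, 32, 64, 64)
```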
Abstract The accurate annotation of transcription start sites (TSSs) and their usage are critical for the mechanistic understanding of gene regulation in different biological contexts. To fulfill this, specific high-throughput experimental technologies have been developed to capture TSSs in a genome-wide manner, and various computational tools have also been developed for in silico prediction solely based on genomic sequences. Most of these tools cast the problem as a binary classification task on a balanced dataset, thus resulting in drastic false...
Natural Language Processing Chatbots may be able to talk like humans, but they cannot fool our brains. Subjectively indistinguishable chatbot and human language elicits distinct patterns of brain activity in the mentalizing network regions. A promising blueprint emerges by adding neural data as an extra index to the assessment criteria of standard Turing tests. More details can be found in article number 2203990 by Xiaochu Zhang and co-workers.
Abstract The accurate annotation of transcription start sites (TSSs) and their usage is critical for the mechanistic understanding of gene regulation under different biological contexts. To fulfill this, on one hand, specific high-throughput experimental technologies have been developed to capture TSSs in a genome-wide manner. On the other hand, various computational tools have also been developed for in silico prediction solely based on genomic sequences. Most of these tools cast the problem as a binary classification task on a balanced dataset, thus...
Revoking personal private data is one of the basic human rights, which has already been sheltered by several privacy-preserving laws in many countries. However, with the development of data science, machine learning and deep learning techniques, this right is usually neglected or violated as more and more patients' data are being collected and used for model training, especially in intelligent healthcare, thus making intelligent healthcare a sector where technology must meet the law, regulations, and privacy principles to ensure that the innovation is for the common...
Heterogeneous data is endemic due to the use of diverse models and settings of devices by hospitals in the field of medical imaging. However, there are few open-source frameworks for federated heterogeneous medical image analysis with personalization and privacy protection simultaneously, without the demand to modify existing model structures or to share any private data. In this paper, we proposed PPPML-HMI, an open-source learning paradigm for personalized and privacy-preserving heterogeneous medical image analysis. To the best of our knowledge, personalization and privacy protection were achieved simultaneously for the first time under...
Abstract The accurate annotation of transcription start sites (TSSs) and their usage is critical for the mechanistic understanding of gene regulation under different biological contexts. To fulfil this, on one hand, specific high-throughput experimental technologies have been developed to capture TSSs in a genome-wide manner. On the other hand, various computational tools have also been developed for in silico prediction solely based on genomic sequences. Most of these tools cast the problem as a binary classification task on a balanced dataset, thus...
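The pitfall described in the TSS entries (training a binary classifier on an artificially balanced dataset and then scanning a genome where true TSSs are rare) can be illustrated with a tiny simulation: a model with high recall on balanced data still produces mostly false positives at genome-scale class ratios. The features, model, and class ratios below are purely illustrative, not taken from the cited work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

def make_data(n_pos, n_neg, dim=50):
    """Toy 'sequence features': positives shifted slightly from negatives."""
    pos = rng.normal(0.4, 1.0, (n_pos, dim))
    neg = rng.normal(0.0, 1.0, (n_neg, dim))
    x = np.vstack([pos, neg])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    return x, y

# Train on a balanced set, as many in silico TSS predictors do.
x_train, y_train = make_data(2000, 2000)
clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)

# Evaluate at a genome-like ratio: true TSS windows are ~1 in 500.
x_test, y_test = make_data(200, 100000)
pred = clf.predict(x_test)
print("recall:   ", recall_score(y_test, pred))
print("precision:", precision_score(y_test, pred))  # collapses: mostly false positives
```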