- Bioinformatics and Genomic Networks
- Geochemistry and Geologic Mapping
- Gene expression and cancer classification
- Metabolomics and Mass Spectrometry Studies
- Gut microbiota and health
- Cancer-related molecular mechanisms research
- RNA modifications and cancer
- Hydrocarbon exploration and reservoir analysis
- Molecular Biology Techniques and Applications
- Autism Spectrum Disorder Research
- Artificial Intelligence in Healthcare and Education
- Machine Learning in Healthcare
- Single-cell and spatial transcriptomics
- Explainable Artificial Intelligence (XAI)
- Genomics and Phylogenetic Studies
- Genetics and Neurodevelopmental Disorders
- Computational Drug Discovery Methods
- Biomedical Text Mining and Ontologies
- Mental Health Research Topics
- Machine Learning and Data Classification
- Ethics in Clinical Research
- CRISPR and Genetic Engineering
- MicroRNA in disease regulation
- Gene Regulatory Network Analysis
- Oral microbiology and periodontitis research
Deakin University, 2016-2023
Australian Research Council, 2023
The University of Melbourne, 2023
University of Washington, 2022
Archarithms (United States), 2020
Allen Institute for Artificial Intelligence, 2019
SUNY Upstate Medical University, 2014-2016
Marist College, 2015
SUNY New Paltz, 2011
University of Missouri, 1994
Abstract Summary: The development of new drugs is costly, time consuming, and often accompanied by safety issues. Drug repurposing can avoid the expensive and lengthy process of drug development by finding new uses for already approved drugs. In order to repurpose drugs effectively, it is useful to know which proteins are targeted by which drugs. Computational models that estimate the interaction strength of drug–target pairs have the potential to expedite repurposing. Several such models have been proposed for this task. However, these models represent drugs as strings, which is not a natural way...
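The abstract is truncated here, but the contrast it draws between string representations and a more natural representation of molecules is straightforward to illustrate. Below is a minimal sketch, assuming RDKit is available, that converts a SMILES string into a node-list/edge-list graph of the kind a graph neural network would consume; the atom features are illustrative choices, not those of any particular model described above.

```python
# A minimal sketch: turning a SMILES string into a graph representation.
# Assumes RDKit is installed; the atom features chosen here are illustrative only.
from rdkit import Chem

def smiles_to_graph(smiles: str):
    """Return (node_features, edge_list) for a molecule given as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    # One feature tuple per atom: atomic number and degree (illustrative choices).
    nodes = [(atom.GetAtomicNum(), atom.GetDegree()) for atom in mol.GetAtoms()]
    # Undirected edges, listed once per bond.
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
    return nodes, edges

nodes, edges = smiles_to_graph("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an example
print(len(nodes), "atoms,", len(edges), "bonds")
```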
Next-generation sequencing (NGS) has made it possible to determine the sequence and relative abundance of all nucleotides in a biological or environmental sample. A cornerstone of NGS is the quantification of RNA or DNA presence as counts. However, these counts are not counts per se: their magnitude is determined arbitrarily by sequencing depth, not by the input material. Consequently, the counts must undergo normalization prior to use. Conventional normalization methods require a set of assumptions: they assume that the majority of features are unchanged and that all environments under...
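A small numeric illustration (NumPy, with invented values) of why such counts are relative rather than absolute: scaling the sequencing depth changes every count, but the proportions and ratios between features are unchanged.

```python
import numpy as np

# Toy counts for 4 features from the same underlying sample,
# "sequenced" at two different depths (values invented for illustration).
shallow = np.array([10, 20, 40, 30], dtype=float)       # total depth = 100 reads
deep    = np.array([100, 200, 400, 300], dtype=float)   # total depth = 1000 reads

# The counts differ by an arbitrary factor set by sequencing depth...
print(deep / shallow)            # [10. 10. 10. 10.]

# ...but the information that survives is relative: proportions and ratios agree.
print(shallow / shallow.sum())   # [0.1 0.2 0.4 0.3]
print(deep / deep.sum())         # [0.1 0.2 0.4 0.3]
print(shallow[2] / shallow[0], deep[2] / deep[0])  # 4.0 4.0
```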
In the life sciences, many assays measure only the relative abundances of components in each sample. Such data, called compositional data, require special treatment to avoid misleading conclusions. Awareness of the need for caution when analyzing compositional data is growing, including the understanding that correlation is not appropriate for these data. Recently, researchers have proposed proportionality as a valid alternative for calculating pairwise association. Although the question of how best to do so remains open, we present here a computationally efficient...
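As a rough sketch of the idea behind proportionality (NumPy, toy data): two features can be treated as associated when the variance of their log-ratio across samples is small, since var(log(x/y)) is zero exactly when y is a constant multiple of x. Statistics such as phi and rho normalise this quantity in different ways; the sketch below computes only the raw log-ratio variance.

```python
import numpy as np

def log_ratio_variance(x, y):
    """var(log(x/y)) across samples; 0 iff x and y are perfectly proportional."""
    return np.var(np.log(x / y), ddof=1)

rng = np.random.default_rng(0)
samples = 50
a = rng.lognormal(mean=0.0, sigma=1.0, size=samples)
b = 3.0 * a                                        # exactly proportional to a
c = a * rng.lognormal(0.0, 0.5, size=samples)      # noisy, only loosely related

print(log_ratio_variance(a, b))  # ~0: a proportional pair
print(log_ratio_variance(a, c))  # clearly > 0: a weaker association
```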
Artificial intelligence (AI) is increasingly of tremendous interest in the medical field. However, failures of AI could have serious consequences for both clinical outcomes and the patient experience. These failures could erode public trust in AI, which would in turn undermine trust in our healthcare institutions. This article makes 2 contributions. First, it describes the major conceptual, technical, and humanistic challenges facing medical AI. Second, it proposes a solution that hinges on the education and accreditation of new expert groups who specialize...
The development of John Aitchison's approach to compositional data analysis is followed since his paper read to the Royal Statistical Society in 1982. The logratio approach, which was proposed to solve the problematic aspects of working with data under a fixed-sum constraint, is summarized and reappraised. It is maintained that the properties on which this approach was originally built, the main one being subcompositional coherence, are not required to be satisfied exactly: quasi-coherence is sufficient, that is, being near enough coherent for all practical purposes. This...
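Subcompositional coherence is easy to demonstrate numerically: conclusions drawn from ratios of parts do not change when the analysis is restricted to a subset of parts and re-closed, even though the closed proportions themselves do. A toy example in NumPy (values invented):

```python
import numpy as np

# Illustration of subcompositional coherence: ratio-based results do not change
# when we restrict attention to a subset of parts and re-close the data.
x = np.array([10., 30., 60.])              # a 3-part composition (toy numbers)

full = x / x.sum()                         # closed full composition
sub = x[:2] / x[:2].sum()                  # closed subcomposition of parts 1 and 2

print(full[:2])                            # proportions change: [0.1 0.3] ...
print(sub)                                 # ... versus [0.25 0.75]
print(full[1] / full[0], sub[1] / sub[0])  # the ratio is coherent: 3.0 and 3.0
```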
Count data generated by next-generation sequencing assays do not measure absolute transcript abundances. Instead, the data are constrained to an arbitrary “library size” by the sequencing depth of the assay, and typically must be normalized prior to statistical analysis. The compositional nature of these data means one could alternatively use a log-ratio transformation in lieu of normalization, as is often done when testing for differential abundance (DA) of operational taxonomic units (OTUs) in 16S rRNA data. Therefore, we benchmark how well ALDEx2...
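The centred log-ratio (CLR) is the simplest of the log-ratio transformations referred to here. A minimal NumPy sketch follows, using a pseudo-count as a crude way to handle zero counts (ALDEx2 itself treats zeros differently, via Monte Carlo sampling from a Dirichlet distribution):

```python
import numpy as np

def clr(counts, pseudocount=0.5):
    """Centred log-ratio transform of a (samples x features) count matrix.

    Each count is log-transformed and centred by the per-sample mean log value,
    so downstream analyses work with ratios rather than depth-dependent counts.
    """
    x = np.asarray(counts, dtype=float) + pseudocount   # crude zero handling
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

counts = np.array([[10, 0, 40, 50],
                   [100, 5, 350, 545]])   # two samples with different depths
print(clr(counts))
```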
Abstract Background: Technological advances in next-generation sequencing (NGS) and chromatographic assays [e.g., liquid chromatography mass spectrometry (LC-MS)] have made it possible to identify thousands of microbe and metabolite species and to measure their relative abundance. In this paper, we propose a sparse neural encoder-decoder network to predict metabolite abundances from microbe abundances. Results: Using paired data from a cohort of inflammatory bowel disease (IBD) patients, we show that our model outperforms linear...
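A generic sketch of this kind of model, not the paper's exact architecture: a small encoder-decoder in PyTorch mapping microbe abundances to metabolite abundances, with an L1 penalty on the weights to encourage sparsity. All dimensions and data below are invented.

```python
import torch
import torch.nn as nn

# Invented dimensions, standing in for microbe and metabolite feature counts.
n_microbes, n_metabolites, n_latent = 200, 80, 16

model = nn.Sequential(
    nn.Linear(n_microbes, n_latent),    # encoder
    nn.ReLU(),
    nn.Linear(n_latent, n_metabolites)  # decoder
)

def loss_fn(pred, target, l1_weight=1e-4):
    """Reconstruction error plus an L1 penalty that pushes weights toward sparsity."""
    mse = nn.functional.mse_loss(pred, target)
    l1 = sum(p.abs().sum() for p in model.parameters())
    return mse + l1_weight * l1

# One illustrative training step on random data standing in for paired samples.
x = torch.randn(32, n_microbes)      # microbe abundances (e.g. log-ratio transformed)
y = torch.randn(32, n_metabolites)   # metabolite abundances
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```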
A series of new chiral macrocycles containing the trans-1,2-diaminocyclohexane (DACH) subunit and arene- and oligoethylene glycol-derived spacers has been prepared in enantiomerically pure form. Four have been characterized by X-ray crystallography, which reveals a consistent mode of intramolecular N–H···N hydrogen bonding and conformational variations about the N-benzylic bonds. Most were found to differentiate the enantiomers of mandelic acid (MA) by 1H NMR spectroscopy in CDCl3; within the series tested, enantiodiscrimination was...
The automatic discovery of sparse biomarkers that are associated with an outcome of interest is a central goal of bioinformatics. In the context of high-throughput sequencing (HTS) data, and compositional data (CoDa) more generally, an important class of biomarkers are the log-ratios between input variables. However, identifying predictive log-ratio biomarkers from HTS data is a combinatorial optimization problem, which is computationally challenging. Existing methods are slow to run and scale poorly with the dimension of the input, which has limited their application to low-...
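The combinatorial difficulty is visible even in the simplest setting: restricting to ratios of two features already gives p(p-1)/2 candidates to score. The sketch below (NumPy and scikit-learn assumed available, toy data) exhaustively scores every pairwise log-ratio by AUC against a binary outcome; with thousands of features, or with ratios built from larger groups of features, this brute-force search becomes infeasible, which is the bottleneck referred to above.

```python
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_samples, n_features = 60, 20
X = rng.lognormal(size=(n_samples, n_features))   # toy relative abundances
y = rng.integers(0, 2, size=n_samples)            # toy binary outcome

# Brute force: score every pairwise log-ratio as a one-variable predictor.
best = None
for i, j in itertools.combinations(range(n_features), 2):
    score = roc_auc_score(y, np.log(X[:, i] / X[:, j]))
    score = max(score, 1 - score)   # the direction of the ratio is arbitrary
    if best is None or score > best[0]:
        best = (score, i, j)

print("best pairwise log-ratio:", best)
```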
Abstract: This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals, and differences of opinion about responsibilities to animal patients and clients....
Introduction: Meta-analytical evidence confirms that a range of interventions, including mindfulness, physical activity and sleep hygiene, can reduce psychological distress in university students. However, it is unclear which intervention is most effective. Artificial intelligence (AI)-driven adaptive trials may be an efficient method to determine what works best for whom. The primary purpose of the study is to rank the effectiveness of mindfulness, physical activity, sleep hygiene and an active control on reducing distress, using a multiarm contextual...
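The registered protocol specifies the trial's actual algorithm; purely as an illustration of how a bandit-style adaptive trial shifts allocation toward better-performing arms, the sketch below simulates Thompson sampling with Bernoulli outcomes (improved / not improved) over four arms. The improvement probabilities are invented for the simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
arms = ["mindfulness", "physical activity", "sleep hygiene", "active control"]
# Hidden "true" probabilities of improvement, invented purely for this simulation.
true_p = [0.55, 0.50, 0.45, 0.35]

successes = np.ones(len(arms))   # Beta(1, 1) priors on each arm
failures = np.ones(len(arms))

for _ in range(500):                       # each loop = one simulated participant
    theta = rng.beta(successes, failures)  # Thompson sampling draw per arm
    arm = int(np.argmax(theta))            # allocate to the most promising arm
    improved = rng.random() < true_p[arm]  # simulated outcome
    successes[arm] += improved
    failures[arm] += not improved

for name, s, f in zip(arms, successes, failures):
    print(f"{name}: allocated {int(s + f - 2)} participants, "
          f"posterior mean {s / (s + f):.2f}")
```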
Background: Preclinical studies have shown that the maternal gut microbiota during pregnancy plays a key role in prenatal immune development, but the relevance of these findings to humans is unknown. The aim of this prebirth cohort study was to investigate the association between the maternal gut microbiota and the composition of the infant’s cord and peripheral blood cells over the first year of life. Methods: The Barwon Infant Study (n = 1074 infants) recruited infants using an unselected sampling frame. Maternal fecal samples were collected at 36 weeks of pregnancy; flow cytometry...
Abstract: Many next-generation sequencing datasets contain only relative information because of biological and technical factors that limit the total number of transcripts observed for a given sample. It is not possible to interpret any one component in isolation. The field of compositional data analysis has emerged with alternative methods based on log-ratio transforms. However, these datasets often have many more features than samples, and thus require creative new ways to reduce the dimensionality of the data. The summation of parts,...
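The summation of parts, or amalgamation, simply adds related features together before any log-ratio analysis, shrinking the feature space while keeping the data compositional. A toy NumPy sketch with a hand-picked grouping follows (learning the grouping from the data, which is the contribution described here, is not attempted):

```python
import numpy as np

def amalgamate(counts, groups):
    """Sum columns of a (samples x features) matrix within each feature group."""
    return np.column_stack([counts[:, idx].sum(axis=1) for idx in groups])

counts = np.array([[10., 5., 20., 40., 25.],
                   [ 2., 8., 30., 35., 25.]])   # 2 samples x 5 features (toy data)

# Hand-picked grouping for illustration; in practice the grouping is learned.
groups = [[0, 1], [2], [3, 4]]
amalgams = amalgamate(counts, groups)
print(amalgams)                                        # 2 samples x 3 amalgamated parts
print(amalgams / amalgams.sum(axis=1, keepdims=True))  # still compositional
```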
Breast cancer is a collection of multiple tissue pathologies, each with a distinct molecular signature that correlates with patient prognosis and response to therapy. Accurately differentiating between breast cancer sub-types is an important part of clinical decision-making. Although this problem has been addressed using machine learning methods in the past, there remains unexplained heterogeneity within the established sub-types that cannot be resolved by the commonly used classification algorithms. In this paper, we propose a novel deep...
Since the turn of the century, technological advances have made it possible to obtain a molecular profile of any tissue in a cost-effective manner. Among these advances are sophisticated high-throughput assays that measure the relative abundances of microorganisms, RNA molecules, and metabolites. While these data are most often collected to gain new insights into biological systems, they can also be used as biomarkers to create clinically useful diagnostic classifiers. How best to classify high-dimensional -omics data remains an area...