Nooshin Ghavami

ORCID: 0000-0002-4310-1245
Research Areas
  • Medical Image Segmentation Techniques
  • Advanced Neural Network Applications
  • AI in cancer detection
  • Prostate Cancer Diagnosis and Treatment
  • Fetal and Pediatric Neurological Disorders
  • Generative Adversarial Networks and Image Synthesis
  • Cleft Lip and Palate Research
  • Radiomics and Machine Learning in Medical Imaging
  • Medical Imaging and Analysis
  • Prenatal Screening and Diagnostics
  • Autopsy Techniques and Outcomes
  • MRI in cancer diagnosis
  • Artificial Intelligence in Healthcare and Education
  • Advanced MRI Techniques and Applications
  • Human Motion and Animation
  • Computer Graphics and Visualization Techniques
  • Advanced X-ray and CT Imaging
  • Atrial Fibrillation Management and Outcomes
  • Advanced Neuroimaging Techniques and Applications
  • Cardiac electrophysiology and arrhythmias
  • Cardiac Arrhythmias and Treatments

King's College London
2014-2022

Wellcome / EPSRC Centre for Interventional and Surgical Sciences
2019-2020

University College London
2018-2020

St Thomas' Hospital
2014

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth voxel-level spatial correspondence. This work describes a method to infer voxel-level transformation from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The...

10.1016/j.media.2018.07.002 article EN cc-by Medical Image Analysis 2018-07-04
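
The label-driven idea above replaces voxel-level supervision with overlap between corresponding anatomical labels. A minimal sketch of such a label-similarity training loss is shown below, assuming binary label volumes and a PyTorch setting; the function names and framing are illustrative, not the authors' code.

```python
import torch

def soft_dice(warped_labels: torch.Tensor,
              fixed_labels: torch.Tensor,
              eps: float = 1e-6) -> torch.Tensor:
    """Mean soft Dice over (N, C, D, H, W) label volumes in [0, 1],
    one channel per anatomical label (organ, vessel, duct, landmark, ...)."""
    dims = (2, 3, 4)                                   # spatial axes
    intersection = (warped_labels * fixed_labels).sum(dims)
    denom = warped_labels.sum(dims) + fixed_labels.sum(dims)
    return ((2 * intersection + eps) / (denom + eps)).mean()

def label_driven_loss(warped_labels, fixed_labels):
    # Weak supervision: maximise overlap of warped moving labels with fixed labels.
    return 1.0 - soft_dice(warped_labels, fixed_labels)
```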

Spatially aligning medical images from different modalities remains a challenging task, especially for intraoperative applications that require fast and robust algorithms. We propose a weakly-supervised, label-driven formulation for learning 3D voxel correspondence from higher-level label correspondence, thereby bypassing classical intensity-based image similarity measures. During training, a convolutional neural network is optimised by outputting a dense displacement field (DDF) that warps a set of available...

10.1109/isbi.2018.8363756 article EN IEEE International Symposium on Biomedical Imaging (ISBI) 2018-04-01
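
The formulation above trains a network to output a dense displacement field (DDF) that warps the moving image's labels onto the fixed image. A minimal sketch of that warping step is below; the DDF is assumed to be expressed in voxel units, and the conversion to grid_sample's normalised coordinates is my implementation choice rather than a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def warp_with_ddf(moving: torch.Tensor, ddf: torch.Tensor) -> torch.Tensor:
    """Warp a volume (e.g. a label map) with a dense displacement field.

    moving: (N, C, D, H, W) volume to resample.
    ddf:    (N, 3, D, H, W) displacements in voxels, ordered (dz, dy, dx).
    """
    n, _, d, h, w = moving.shape
    zs, ys, xs = torch.meshgrid(
        torch.arange(d, dtype=moving.dtype, device=moving.device),
        torch.arange(h, dtype=moving.dtype, device=moving.device),
        torch.arange(w, dtype=moving.dtype, device=moving.device),
        indexing="ij",
    )
    identity = torch.stack((zs, ys, xs)).unsqueeze(0)      # (1, 3, D, H, W)
    coords = identity + ddf                                # displaced voxel coords
    # Normalise to [-1, 1] and reorder to (x, y, z) as grid_sample expects.
    norm = lambda c, size: 2 * c / max(size - 1, 1) - 1
    grid = torch.stack((norm(coords[:, 2], w),
                        norm(coords[:, 1], h),
                        norm(coords[:, 0], d)), dim=-1)    # (N, D, H, W, 3)
    return F.grid_sample(moving, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```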

Convolutional neural networks (CNNs) have recently led to significant advances in the automatic segmentation of anatomical structures in medical images, and a wide variety of network architectures are now available to the research community. For applications such as segmentation of the prostate in magnetic resonance images (MRI), results from the PROMISE12 online algorithm evaluation platform have demonstrated differences between the best-performing algorithms in terms of numerical accuracy using standard metrics such as the Dice score and boundary...

10.1016/j.media.2019.101558 article EN cc-by Medical Image Analysis 2019-09-11
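
The evaluation above relies on standard metrics such as the Dice score and boundary distance. A minimal sketch of both for binary 3D masks, using NumPy and SciPy, is given below; the exact metric definitions used by the PROMISE12 platform may differ, so treat this as illustrative only.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def mean_boundary_distance(pred, ref, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface distance, in the units of `spacing`."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    pred_surf = pred ^ binary_erosion(pred)
    ref_surf = ref ^ binary_erosion(ref)
    # Distance from each surface voxel of one mask to the other mask's surface.
    d_to_ref = distance_transform_edt(~ref_surf, sampling=spacing)[pred_surf]
    d_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)[ref_surf]
    return float(np.concatenate([d_to_ref, d_to_pred]).mean())
```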

Advances in artificial intelligence (AI) have demonstrated the potential to improve medical diagnosis. We piloted the end-to-end automation of the mid-trimester screening ultrasound scan using AI-enabled tools. A prospective method comparison study was conducted. Participants had both standard and AI-assisted US scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured sonographers' perceptions of scanning. Twenty-three subjects were...

10.1002/pd.6059 article EN cc-by Prenatal Diagnosis 2021-10-18

Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing systems, which often require significant manual interaction and are subject to interoperator variability. Therefore, automating this step would lead...

10.1117/1.jmi.6.1.011003 article EN cc-by Journal of Medical Imaging 2018-08-21

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted quality of US images resulting in highly variable reference annotations, and (iii) the limited field-of-view prohibiting whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the task...

10.1016/j.media.2022.102639 article EN cc-by Medical Image Analysis 2022-09-28
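
A minimal sketch of the multi-task design described above: a shared encoder feeding both a placenta segmentation decoder and a placental-location classification head, trained with a weighted sum of the two losses. The layer sizes, number of location classes, and loss weighting are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskPlacentaNet(nn.Module):
    """Shared encoder + segmentation decoder + location classifier (2D sketch)."""

    def __init__(self, n_locations: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # per-pixel placenta logits
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        self.classifier = nn.Sequential(       # e.g. anterior / posterior / other
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_locations),
        )

    def forward(self, x):
        features = self.encoder(x)
        return self.decoder(features), self.classifier(features)

def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, alpha=0.5):
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    cls_loss = F.cross_entropy(cls_logits, cls_target)
    return seg_loss + alpha * cls_loss
```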

Clinically important targets for ultrasound-guided prostate biopsy and prostate cancer focal therapy can be defined on MRI. However, localizing these targets on transrectal ultrasound (TRUS) remains challenging. Automatic segmentation of intraoperative TRUS images is an...

10.1117/12.2293300 article EN 2018-03-12
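
Since the method above segments intraoperative TRUS slice by slice, a natural way to use it is to run the 2D network over every frame of the sweep and stack the results back into a 3-D prostate mask. The sketch below assumes a trained PyTorch model with a single-channel input and sigmoid output; these assumptions are mine, not details from the paper.

```python
import numpy as np
import torch

@torch.no_grad()
def segment_trus_volume(model: torch.nn.Module, volume: np.ndarray,
                        threshold: float = 0.5) -> np.ndarray:
    """Apply a 2D slice-segmentation CNN across an intraoperative TRUS sweep.

    volume: (n_slices, H, W) array of US intensities.
    Returns a binary (n_slices, H, W) prostate mask stacked back into 3-D.
    """
    model.eval()
    masks = []
    for us_slice in volume.astype(np.float32):
        x = torch.from_numpy(us_slice)[None, None]      # (1, 1, H, W)
        prob = torch.sigmoid(model(x))[0, 0].numpy()    # per-pixel probability
        masks.append(prob > threshold)
    return np.stack(masks, axis=0)
```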

Organ morphology is a key indicator for prostate disease diagnosis and prognosis. For instance, in longitudinal studies of prostate cancer patients under active surveillance, the volume, boundary smoothness and their changes are closely monitored on time-series MR image data. In this paper, we describe a new framework for forecasting such morphological changes, as the ability to detect them earlier than is currently possible may enable timely treatment or avoid unnecessary confirmatory biopsies. In this work, an efficient...

10.1109/isbi48211.2021.9433798 preprint EN IEEE International Symposium on Biomedical Imaging (ISBI) 2021-04-13
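
The framework above forecasts morphological change from time-series data; the quantities it tracks (volume, boundary smoothness) can be computed directly from binary prostate masks. Below is a purely illustrative sketch of extracting such features and extrapolating to the next visit with a straight-line fit; the paper's own forecasting model is learned, and both the smoothness proxy and the linear extrapolation here are my assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def morphology_features(mask: np.ndarray, voxel_volume_mm3: float = 1.0):
    """Volume (mm^3) and a crude boundary-smoothness proxy for a binary mask."""
    mask = mask.astype(bool)
    volume = mask.sum() * voxel_volume_mm3
    surface_voxels = np.logical_xor(mask, binary_erosion(mask)).sum()
    smoothness = volume / max(int(surface_voxels), 1)   # higher ~ smoother boundary
    return volume, smoothness

def forecast_next(times: np.ndarray, values: np.ndarray, next_time: float) -> float:
    """Extrapolate a morphology measurement to a future visit with a linear fit."""
    slope, intercept = np.polyfit(times, values, deg=1)
    return slope * next_time + intercept
```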

Background: Ultrasound (US) imaging is characterised by high levels of operator subjectivity and variability. Recent advances in artificial intelligence (AI) have demonstrated the potential to reduce both factors. This study pilots the end-to-end automation of multiple elements of the mid-trimester obstetric screening US scan using AI-enabled tools. Methods: A single-centre, prospective method comparison study was conducted. Participants had standard manual and AI-assisted scans performed independently by different...

10.2139/ssrn.3795326 article EN SSRN Electronic Journal 2021-01-01

We present a computational method for real-time, patient-specific simulation of 2D ultrasound (US) images. The method uses a large number of tracked US images to learn a function that maps the position and orientation of the transducer to the corresponding image. This is a first step towards realistic simulations that will enable improved training and retrospective examination of complex cases. Our models can simulate an image in under 4 ms (well within real-time constraints), and the simulated images preserve the content (anatomical structures and artefacts) of real...

10.48550/arxiv.2005.04931 preprint EN other-oa arXiv (Cornell University) 2020-01-01
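
A minimal sketch of the pose-to-image idea above: a network trained on tracked frames that maps the transducer's position and orientation to a simulated 2D US frame, cheap enough to query interactively. The 6-DoF pose parameterisation, image size, and fully connected decoder are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PoseToUSImage(nn.Module):
    """Map a 6-DoF transducer pose to a simulated 2D US image (sketch)."""

    def __init__(self, image_size: int = 64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(6, 256), nn.ReLU(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, image_size * image_size), nn.Sigmoid(),
        )

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        # pose: (N, 6) = (x, y, z, rx, ry, rz) from the tracked transducer.
        img = self.net(pose)
        return img.view(-1, 1, self.image_size, self.image_size)

# After training on tracked frames, a single pose query returns one frame,
# fast enough for interactive, patient-specific simulation.
model = PoseToUSImage()
simulated = model(torch.zeros(1, 6))    # (1, 1, 64, 64) simulated frame
```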

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placenta appearance, (ii) the restricted quality of US images resulting in highly variable reference annotations, and (iii) the limited field-of-view prohibiting whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the task...

10.48550/arxiv.2206.14746 preprint EN other-oa arXiv (Cornell University) 2022-01-01