- Ultrasound in Clinical Applications
- Advanced Image and Video Retrieval Techniques
- Flow Measurement and Analysis
- Radiomics and Machine Learning in Medical Imaging
- Radiology Practices and Education
- Lung Cancer Diagnosis and Treatment
- Advanced Neural Network Applications
- Phonocardiography and Auscultation Techniques
- Radiation Dose and Imaging
- Industrial Vision Systems and Defect Detection
- Coronary Interventions and Diagnostics
- Biometric Identification and Security
- Speech Recognition and Synthesis
- Optical Imaging and Spectroscopy Techniques
- Ultrasound and Hyperthermia Applications
- EEG and Brain-Computer Interfaces
- Childhood Cancer Survivors' Quality of Life
- Solar Radiation and Photovoltaics
- Energy and Environment Impacts
- Speech and Dialogue Systems
- Sugarcane Cultivation and Processing
- Prostate Cancer Treatment and Research
- Speech and Audio Processing
- Heart Failure Treatment and Management
- Diabetes Treatment and Management
University of Waterloo
2022-2024
Breathe California of Sacramento Emigrant Trails
2023
Western University
2022
Anglia Ruskin University
2022
Essex Cardiothoracic Centre
2022
Rutgers, The State University of New Jersey
2022
Los Angeles Mission College
2021-2022
Princeton University
2021
Chonnam National University
2021
Sunnybrook Health Science Centre
2016-2019
Real-time speech interaction, serving as a fundamental interface for human-machine collaboration, holds immense potential. However, current open-source models face limitations such as high costs in voice data collection, weakness in dynamic control, and limited intelligence. To address these challenges, this paper introduces Step-Audio, the first production-ready solution. Key contributions include: 1) a 130B-parameter unified speech-text multi-modal model that achieves both understanding and generation,...
Objective: We investigated the correspondence between symptom severity and bothersomeness in patients with advanced cancer. Background: Symptom severity is commonly assessed in clinical cancer settings, but the bothersomeness of these symptoms is less often measured. Methods: Participants enrolled in a cluster-randomized trial of early palliative care completed the Edmonton Symptom Assessment System (ESAS) and the Quality of Life at the End of Life (QUAL-E) measure as part of their baseline assessment. For each symptom, we examined whether being indicated as most severe on the ESAS...
Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated using lung ultrasound (LUS). Access to LUS is challenged by user dependence and a shortage of training. The use of image classification deep learning methods to automate LUS interpretation has not been thoroughly studied for lung sliding. Using a labelled dataset from 2 academic hospitals, clinical B-mode (also known as brightness or two-dimensional mode) videos featuring both the presence...
Deep learning (DL) models for medical image classification frequently struggle to generalize to data from outside institutions. Additional clinical data are also rarely collected to comprehensively assess and understand model performance amongst subgroups. Following the development of a single-center model to identify the lung sliding artifact on lung ultrasound (LUS), we pursued a validation strategy using external LUS data. As annotated LUS data are relatively scarce compared to other imaging data, we adopted a novel technique to optimize...
The spatial layout of scene images is essential to recognizing them. Without considering this information, deep convolutional neural networks cannot achieve satisfactory performance on scene recognition. In this paper, a novel network architecture, namely the randomized spatial pooling (RS-Pooling) layer, is proposed to incorporate spatial layout information into the model. By partitioning feature maps via randomized patterns, the RS-Pooling layer offers the ability to handle various image layouts. Moreover, a maxout objective function is adopted to adaptively...
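As a rough illustration of the idea, the sketch below implements a randomized spatial pooling layer in PyTorch: evenly spaced pooling regions whose interior boundaries are jittered at random during training, then max-pooled and concatenated. The grid size, jitter amount, and class name are assumptions for illustration, not the paper's exact RS-Pooling design.

```python
import torch
import torch.nn as nn


class RandomizedSpatialPooling(nn.Module):
    """Sketch of a randomized spatial pooling layer (illustrative only)."""

    def __init__(self, grid=2, jitter=0.25):
        super().__init__()
        self.grid = grid        # grid x grid pooling regions
        self.jitter = jitter    # max boundary shift as a fraction of one cell

    def _cuts(self, size, training):
        # Evenly spaced cut points; interior points are jittered in training.
        cuts = torch.linspace(0, size, self.grid + 1)
        if training and self.jitter > 0:
            cell = size / self.grid
            cuts[1:-1] += (torch.rand(self.grid - 1) - 0.5) * 2 * self.jitter * cell
        cuts = cuts.round().long().tolist()
        cuts[0], cuts[-1] = 0, size            # keep the outer borders fixed
        return cuts

    def forward(self, x):                      # x: (N, C, H, W)
        hs = self._cuts(x.shape[2], self.training)
        ws = self._cuts(x.shape[3], self.training)
        pooled = []
        for i in range(self.grid):
            for j in range(self.grid):
                region = x[:, :, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(region.amax(dim=(2, 3)))   # max over each region
        return torch.cat(pooled, dim=1)        # (N, C * grid * grid)


feats = torch.randn(4, 64, 8, 8)                # dummy feature maps
print(RandomizedSpatialPooling()(feats).shape)  # torch.Size([4, 256])
```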
There can be numerous electronic components on a given PCB, making the task of visual inspection to detect defects very time-consuming and prone to error, especially at scale. There has thus been significant interest in automatic PCB component detection, particularly leveraging deep learning. While neural networks are able to perform such detection with greater accuracy, they typically require high computational resources, limiting their feasibility in real-world use cases, which often involve...
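For a sense of what a resource-constrained detection pipeline looks like in practice, here is a minimal sketch using a compact off-the-shelf detector from torchvision as a stand-in; the actual architecture, weights, and PCB component classes from the paper are not reproduced here.

```python
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# A lightweight SSDLite/MobileNetV3 detector as a stand-in for a compact PCB
# component detector (random weights here; pretrained or fine-tuned weights
# would be loaded in a real pipeline).
model = ssdlite320_mobilenet_v3_large(weights=None)
model.eval()

image = torch.rand(3, 320, 320)               # dummy board image in [0, 1]
with torch.no_grad():
    detections = model([image])[0]

keep = detections["scores"] > 0.5             # keep confident detections only
print(detections["boxes"][keep].shape, detections["labels"][keep])
```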
OBJECTIVES: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients. DESIGN: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received a LUS examination with simultaneous DL model predictions using a portable device. Clip-level predictions were analyzed and compared with blinded expert review for A versus B...
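A minimal sketch of what such a bedside inference loop can look like is given below: frames from the probe are buffered, scored by a frame-level classifier, and averaged into a clip-level A-line versus B-line call. The tiny network, clip length, and 0.5 threshold are placeholders, not the deployed model or device pipeline.

```python
import collections
import torch
import torch.nn as nn

frame_model = nn.Sequential(               # stand-in frame-level classifier
    nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
frame_model.eval()

CLIP_LEN = 60                              # frames per clip (assumed)
buffer = collections.deque(maxlen=CLIP_LEN)

def on_new_frame(frame):                   # frame: (1, H, W) grayscale tensor
    buffer.append(frame)
    if len(buffer) < CLIP_LEN:
        return None                        # wait until a full clip is buffered
    clip = torch.stack(list(buffer))       # (T, 1, H, W)
    with torch.no_grad():
        scores = torch.sigmoid(frame_model(clip)).squeeze(1)
    prob_b_lines = scores.mean().item()    # clip-level probability
    return "B-line pattern" if prob_b_lines > 0.5 else "A-line pattern"

# Simulated stream of ultrasound frames.
for _ in range(CLIP_LEN):
    result = on_new_frame(torch.rand(1, 128, 128))
print(result)
```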
Self-supervised pretraining has been observed to improve performance in supervised learning tasks in medical imaging. This study investigates the utility of self-supervised pretraining prior to conducting fine-tuning for the downstream task of lung sliding classification in M-mode lung ultrasound images. We propose a novel pairwise relationship that couples M-mode images constructed from the same B-mode image and investigate a data augmentation procedure specific to lung ultrasound. The results indicate that self-supervised pretraining yields better performance than full supervision, most...
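To make the pairing idea concrete, the sketch below builds M-mode images by tracking single lateral columns of a B-mode clip over time and treats two such images from the same clip as a positive pair; the function names and the choice of random columns are assumptions for illustration rather than the paper's exact procedure.

```python
import torch

def mmode_from_clip(clip, column):
    """Build an M-mode image by tracking one lateral column over time.

    clip: (T, H, W) B-mode frames; column: index along the width axis.
    Returns a (H, T) depth-versus-time image.
    """
    return clip[:, :, column].transpose(0, 1)

def positive_pair(clip):
    """Sample two M-mode images from the same B-mode clip.

    Under the pairing rule described above (as understood here), M-mode
    images built from the same clip form a positive pair for contrastive
    pretraining; images from different clips serve as negatives.
    """
    t, h, w = clip.shape
    cols = torch.randperm(w)[:2]
    return mmode_from_clip(clip, cols[0]), mmode_from_clip(clip, cols[1])

clip = torch.rand(64, 128, 96)              # dummy B-mode clip (T, H, W)
x1, x2 = positive_pair(clip)
print(x1.shape, x2.shape)                   # torch.Size([128, 64]) each
```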
In this study, we investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple classification tasks in B-mode lung ultrasound analysis. When fine-tuning on three tasks, pretrained models resulted in an improvement of the average across-task area under the receiver operating characteristic curve (AUC) by 0.032 and 0.061 on local and external test sets, respectively. Compact nonlinear classifiers trained on features outputted by a single model did not...
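The "compact nonlinear classifier on frozen features" setup can be sketched as follows: one set of embeddings from a single (frozen) pretrained extractor is reused across tasks, only a small MLP head is trained per task, and the per-task AUCs are averaged. The data here is synthetic and the head size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))       # frozen-backbone embeddings
labels = {                                   # three hypothetical binary tasks
    "task_a": rng.integers(0, 2, 500),
    "task_b": rng.integers(0, 2, 500),
    "task_c": rng.integers(0, 2, 500),
}

aucs = []
for name, y in labels.items():
    head = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    head.fit(features[:400], y[:400])        # train the compact head only
    probs = head.predict_proba(features[400:])[:, 1]
    aucs.append(roc_auc_score(y[400:], probs))
print("average across-task AUC:", float(np.mean(aucs)))
```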
Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotating tasks to optimize efficiency. We trained a machine learning model to accurately distinguish between one of two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset. Partitioning the remaining dataset by view would reduce downstream labelling efforts by enabling...
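A minimal sketch of this hierarchical annotation workflow is shown below: a view classifier trained on a small labelled subset routes the remaining unlabelled clips into per-view annotation queues. The features, the two view classes, and the logistic-regression classifier are placeholders; the study's actual model and view definitions may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
labelled_x = rng.normal(size=(300, 32))      # e.g. embeddings of labelled clips
labelled_view = rng.integers(0, 2, 300)      # two assumed view classes
unlabelled_x = rng.normal(size=(2000, 32))   # the rest of the dataset

# Train the view classifier on the small labelled subset.
view_clf = LogisticRegression(max_iter=1000).fit(labelled_x, labelled_view)
predicted_view = view_clf.predict(unlabelled_x)

# Route unlabelled clips into per-view annotation queues.
queues = {view: np.where(predicted_view == view)[0] for view in (0, 1)}
print({view: len(idx) for view, idx in queues.items()})
```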
There can be numerous electronic components on a given PCB, making the task of visual inspection to detect defects very time-consuming and prone to error, especially at scale. There has thus been significant interest in automatic PCB component detection, particularly leveraging deep learning. However, neural networks typically require high computational resources, possibly limiting their feasibility in real-world use cases in manufacturing, which often involve high-volume and high-throughput detection with...
Background: Memory efficiency is influenced by the modalities of acquisition and retrieval. The recall accuracy of read or voiced material differs depending on whether recall is given verbally or in writing. The medial prefrontal cortex (mPFC) is critical for both attentional allocation and short-term memory, suggesting that different memory modalities are associated with distinct mPFC processes and activation patterns. Methods: Near-infrared spectroscopy (NIRS) was used to monitor oxygenation parameters in 30 healthy subjects during...
Audio source separation, utilized for speech denoising and music production, involves extracting individual sources from a mixed track. The project goal was to reduce the training times of separation models with limited computing resources while improving accuracy, measured through the signal-to-distortion ratio (SDR) and signal-to-interference ratio (SIR). We replaced the Bi-directional Long Short-Term Memory (BiLSTM) block with a transformer in the open-source Open-Unmix model and trained it on the MUSDB18-HQ dataset. Ultimately,...
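The architectural swap can be sketched as follows in PyTorch: a BiLSTM bottleneck over spectrogram frames, as used in Open-Unmix, is replaced by a TransformerEncoder with the same input/output shape. The layer sizes below are illustrative and not the project's exact configuration.

```python
import torch
import torch.nn as nn

hidden = 512
# BiLSTM bottleneck in the style of Open-Unmix.
bilstm_block = nn.LSTM(hidden, hidden // 2, num_layers=3,
                       bidirectional=True, batch_first=True)

# Drop-in transformer replacement with the same frame-wise feature size.
transformer_block = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                               dim_feedforward=2048, batch_first=True),
    num_layers=3,
)

frames = torch.rand(2, 255, hidden)          # (batch, time, features)
lstm_out, _ = bilstm_block(frames)           # (2, 255, 512)
transformer_out = transformer_block(frames)  # (2, 255, 512), same shape
print(lstm_out.shape, transformer_out.shape)
```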
Due to COVID-19, there has been a surge in online examinations. The main integrity safeguard so far is human proctoring, which requires trained supervisors to constantly monitor all test-takers' videos and audio through webcams. To overcome such costliness and ineffectiveness, we have designed an automated proctoring system that is effective, nonintrusive, and adaptable to different testing scenarios. Our approach presents a novel combination of (a) a gaze and view tracking module using a mathematical 3D motion formula...
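As a generic illustration of the gaze/view-tracking component (not the paper's specific 3D motion formula), the sketch below flags a test-taker as looking away when an estimated 3D gaze direction deviates from the screen normal by more than an assumed threshold.

```python
import numpy as np

SCREEN_NORMAL = np.array([0.0, 0.0, -1.0])    # camera looks along +z (assumed)
MAX_ANGLE_DEG = 25.0                          # assumed tolerance

def looking_at_screen(gaze_direction):
    # Angle between the estimated gaze vector and the screen normal.
    gaze = gaze_direction / np.linalg.norm(gaze_direction)
    cos_angle = np.clip(np.dot(gaze, SCREEN_NORMAL), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= MAX_ANGLE_DEG

print(looking_at_screen(np.array([0.05, -0.02, -1.0])))   # True: near center
print(looking_at_screen(np.array([0.9, 0.1, -0.4])))      # False: off screen
```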
As of today, the average efficiency of household solar panels is less than 20 percent, meaning less than 20 percent of the incident energy is converted into electricity. Our goal is to explore various techniques that not only enhance this efficiency but also are scalable to households. We plan to use a reinforcement learning technique that provides centralized intelligence for controlling a large set of trackers and panels. Reinforcement learning is a proven machine learning approach for acting in unknown environments while maximizing reward. This is desirable because each household's environment is different....
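A toy sketch of the reinforcement learning idea for a single tracker is given below: tabular Q-learning over discretized tilt angles, with a made-up reward that peaks when the panel aligns with the sun. The state, action, and reward definitions are assumptions for illustration only, not the project's actual controller.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ANGLES, ACTIONS = 19, (-1, 0, +1)          # tilt bins and tilt adjustments
q = np.zeros((N_ANGLES, len(ACTIONS)))       # Q-table: state x action
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(tilt_bin, sun_bin):
    # Toy power model: highest reward when the panel tracks the sun.
    return 1.0 - abs(tilt_bin - sun_bin) / N_ANGLES

tilt, sun = 0, 12                            # start flat; sun fixed in the toy
for step in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else q[tilt].argmax()
    new_tilt = int(np.clip(tilt + ACTIONS[a], 0, N_ANGLES - 1))
    r = reward(new_tilt, sun)
    q[tilt, a] += alpha * (r + gamma * q[new_tilt].max() - q[tilt, a])
    tilt = new_tilt

print("final tilt bin:", tilt, "sun bin:", sun)
```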
Figures S1-S2: LSD1 reprogramming in CRPC. Figure S3: Clinical relevance of CENPE. Figures S4-S8: Regulation and function. Figure S9: Inhibition in a xenograft model.
Introduction: Lung ultrasound (LUS) shows diagnostic superiority to chest radiography in the ICU setting; however, its widespread deployment is challenged by a limited user base and dependence on clinical expertise. Artificial intelligence models have been developed to overcome these barriers through LUS automation; however, bedside pragmatic performance has never been demonstrated. In this study, we aimed to evaluate the accuracy of a bedside, real-time deep learning model capable of distinguishing between...