- Cell Image Analysis Techniques
- Advanced Electron Microscopy Techniques and Applications
- Advanced Fluorescence Microscopy Techniques
- Advanced Image Processing Techniques
- Advanced Neural Network Applications
- Metabolomics and Mass Spectrometry Studies
- Advanced Image and Video Retrieval Techniques
- Photoacoustic and Ultrasonic Imaging
- Advanced Image Fusion Techniques
- Topological and Geometric Data Analysis
- Image Retrieval and Classification Techniques
- Visual Attention and Saliency Detection
- Multimodal Machine Learning Applications
- Electron and X-Ray Spectroscopy Techniques
- Image Processing Techniques and Applications
- Medical Imaging and Analysis
- Video Surveillance and Tracking Methods
- Biotin and Related Studies
- Domain Adaptation and Few-Shot Learning
- Medical Image Segmentation Techniques
- Tree-Ring Climate Responses
- Remote-Sensing Image Classification
- Advanced Vision and Imaging
- Forest Ecology and Biodiversity Studies
- Image Enhancement Techniques
Harvard University Press, 2019-2024
Harvard University, 2020-2024
Amazon (United States), 2023
Multi-modality (MM) image fusion aims to render fused images that maintain the merits of different modalities, e.g., functional highlights and detailed textures. To tackle the challenge of modeling cross-modality features and decomposing them into desirable modality-specific and modality-shared features, we propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network. Firstly, CDDFuse uses Restormer blocks to extract shallow features. We then introduce a dual-branch Transformer-CNN feature extractor with...
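The decomposition idea above can be illustrated with a correlation-based criterion: modality-shared ("base") features from the two inputs should correlate strongly, while modality-specific ("detail") features should not. The snippet below is a minimal PyTorch sketch under that assumption; the ratio-style loss and the tensor names are illustrative, not the paper's exact objective.

```python
# Minimal sketch of a correlation-driven decomposition loss in the spirit of CDDFuse.
import torch

def correlation_coefficient(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Mean Pearson correlation between two feature maps of shape (B, C, H, W)."""
    a = a.flatten(2) - a.flatten(2).mean(dim=-1, keepdim=True)
    b = b.flatten(2) - b.flatten(2).mean(dim=-1, keepdim=True)
    num = (a * b).sum(dim=-1)
    den = a.norm(dim=-1) * b.norm(dim=-1) + eps
    return (num / den).mean()

def decomposition_loss(base_ir, base_vis, detail_ir, detail_vis, eps: float = 1.01) -> torch.Tensor:
    # Encourage high correlation between modality-shared features and low correlation
    # between modality-specific features. The ratio form below is an assumption made
    # for illustration, not the published loss.
    cc_base = correlation_coefficient(base_ir, base_vis)
    cc_detail = correlation_coefficient(detail_ir, detail_vis)
    return (cc_detail ** 2) / (eps + cc_base)
```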
Abstract We acquired a rapidly preserved human surgical sample from the temporal lobe of the cerebral cortex. We stained a 1 mm³ volume with heavy metals, embedded it in resin, cut more than 5000 slices at ∼30 nm, and imaged these sections using a high-speed multibeam scanning electron microscope. We used computational methods to render the three-dimensional structure, containing 57,216 cells, hundreds of millions of neurites, and 133.7 million synaptic connections. The 1.4 petabyte microscopy volume, the segmented cell...
Existing leading methods for spectral reconstruction (SR) focus on designing deeper or wider convolutional neural networks (CNNs) to learn the end-to-end mapping from an RGB image to its hyperspectral image (HSI). These CNN-based methods achieve impressive restoration performance while showing limitations in capturing long-range dependencies and the self-similarity prior. To cope with this problem, we propose a novel Transformer-based method, the Multi-stage Spectral-wise Transformer (MST++), for efficient spectral reconstruction...
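The distinguishing idea is spectral-wise self-attention: tokens are spectral channels, so the attention matrix is C×C rather than spatial. The module below is a minimal sketch of that transposed-attention pattern; the class and parameter names are illustrative, not the released MST++ code.

```python
# Sketch of spectral-wise self-attention: attention is computed across the
# channel/spectral dimension instead of across spatial positions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralWiseAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.scale = nn.Parameter(torch.ones(1))  # learnable temperature
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)      # each (B, C, H, W)
        q, k, v = (t.flatten(2) for t in (q, k, v))   # each (B, C, H*W)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale # (B, C, C): spectral attention map
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)          # mix channels, keep spatial layout
        return self.proj(out) + x                     # residual connection
```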
To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, and it comprises 1.4 petabytes. Our analysis showed that glia outnumber neurons 2:1, oligodendrocytes were the most common cell, the deep layer...
Guided depth super-resolution (GDSR) is an essential topic in multi-modal image processing; it reconstructs high-resolution (HR) depth maps from low-resolution ones collected under suboptimal conditions with the help of HR RGB images of the same scene. To solve the challenges of interpreting the working mechanism, extracting cross-modal features, and avoiding texture over-transfer, we propose a novel Discrete Cosine Transform Network (DCTNet) to alleviate these problems from three aspects. First, the Discrete Cosine Transform (DCT) module reconstructs the multi-channel HR depth features by using DCT...
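For readers unfamiliar with the transform step, the snippet below shows a channel-wise 2D DCT and inverse DCT applied to feature maps with SciPy. It only illustrates the frequency-domain operation; the low-pass masking and the helper name are assumptions for illustration, not DCTNet's actual formulation.

```python
# Channel-wise 2D DCT / inverse DCT over feature maps (illustrative only).
import numpy as np
from scipy.fft import dctn, idctn

def dct_filter_features(feats: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """feats: (C, H, W). Keep only the lowest-frequency DCT coefficients per channel."""
    c, h, w = feats.shape
    coeffs = dctn(feats, axes=(1, 2), norm="ortho")
    mask = np.zeros((h, w), dtype=feats.dtype)
    kh, kw = max(1, int(h * keep_ratio)), max(1, int(w * keep_ratio))
    mask[:kh, :kw] = 1.0                  # low frequencies sit in the top-left corner
    return idctn(coeffs * mask, axes=(1, 2), norm="ortho")

# Example: smooth hypothetical RGB-guidance features before fusing with depth features.
guidance = np.random.rand(16, 64, 64).astype(np.float32)
smoothed = dct_filter_features(guidance, keep_ratio=0.25)
```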
Deep convolutional neural networks (CNNs) have pushed forward the frontier of super-resolution (SR) research. However, current CNN models exhibit a major flaw: they are biased towards learning low-frequency signals. This bias becomes more problematic for the image SR task, which targets reconstructing all fine details and textures. To tackle this challenge, we propose to improve the learning of high-frequency features both locally and globally, and we introduce two novel architectural units into existing SR models. Specifically,...
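One common way to expose local high-frequency content to a network is to subtract a low-pass (locally averaged) version of the features and re-weight the residual. The block below is a hedged sketch of that generic idea; the unit name and gating scheme are hypothetical and not the architectural units proposed in the paper.

```python
# Generic high-frequency enhancement block: amplify the residual between the
# features and their local average, while keeping an identity path.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalHighFreqBlock(nn.Module):
    def __init__(self, channels: int, pool: int = 3):
        super().__init__()
        self.pool = pool
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low = F.avg_pool2d(x, self.pool, stride=1, padding=self.pool // 2)
        high = x - low                      # local high-frequency residual
        return x + self.gate(high) * high   # re-weight fine details, keep identity path
```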
This paper reviews the third biennial challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel image. It presents the "ARAD_1K" data set: a new, larger-than-ever natural image set containing 1,000 images. Challenge participants were required to recover hyperspectral information from synthetically generated JPEG-compressed RGB images simulating capture by a known calibrated camera, operating under partially known parameters, in a setting which includes...
Image super-resolution (SR) is a fast-moving field with novel architectures attracting the spotlight. However, most SR models were optimized with dated training strategies. In this work, we revisit the popular RCAN model and examine the effect of different training options in SR. Surprisingly (or perhaps as expected), we show that RCAN can outperform or match nearly all CNN-based models published after it on standard benchmarks with a proper training strategy and minimal architecture change. Besides, although RCAN is a very large model with more than four hundred...
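The abstract is truncated before the specific recipe, but a sweep over training options for an older SR model typically covers choices like the ones below. The values are illustrative placeholders, not the settings reported in the paper.

```python
# Illustrative grid of training options one might re-examine when revisiting an
# older SR architecture; placeholders only, not the paper's reported recipe.
training_options = {
    "optimizer": ["Adam", "AdamW"],
    "learning_rate": [1e-4, 2e-4],
    "lr_schedule": ["multi-step", "cosine"],
    "patch_size": [48, 64, 96],          # LR patch size fed to the network
    "batch_size": [16, 32],
    "iterations": [300_000, 1_000_000],
    "augmentation": ["flip+rot90", "flip+rot90+channel-shuffle"],
    "precision": ["fp32", "mixed"],
}
```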
Abstract Three-dimensional (3D) reconstruction of living brain tissue down to an individual synapse level would create opportunities for decoding the dynamics and structure–function relationships of the brain’s complex and dense information processing network; however, this has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio and prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated...
Abstract Mapping neuronal networks is a central focus in neuroscience. While volume electron microscopy (vEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information needed to identify cell types or functions. We developed an approach that uses fluorescent single-chain variable fragments (scFvs) to perform multiplexed detergent-free immunolabeling and volumetric correlated light and electron microscopy on the same sample. We generated eight scFvs targeting brain markers...
In this paper, we present the results of the MitoEM challenge on mitochondria 3D instance segmentation from electron microscopy images, organized in conjunction with the IEEE-ISBI 2021 conference. Our benchmark dataset consists of two large-scale volumes, one from human and one from rat cortex tissue, which are 1,986 times larger than previously used datasets. At the time of paper submission, 257 participants had registered for the challenge, 14 teams had submitted their results, and six teams participated in the workshop. Here, eight...
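Benchmarks of this kind typically score submissions by matching predicted and ground-truth instances by IoU before computing AP-style metrics. The sketch below is a simplified NumPy illustration of that matching step, not the challenge's official evaluation code.

```python
# Simplified IoU-based matching between ground-truth and predicted 3D instance
# label volumes (0 = background), as used in AP-style instance-segmentation metrics.
import numpy as np

def instance_iou_matrix(gt: np.ndarray, pred: np.ndarray):
    """gt, pred: integer label volumes of identical shape."""
    gt_ids = np.unique(gt); gt_ids = gt_ids[gt_ids != 0]
    pr_ids = np.unique(pred); pr_ids = pr_ids[pr_ids != 0]
    iou = np.zeros((len(gt_ids), len(pr_ids)))
    for i, g in enumerate(gt_ids):
        g_mask = gt == g
        for j, p in enumerate(pr_ids):
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            iou[i, j] = inter / union if union else 0.0
    return gt_ids, pr_ids, iou

def matched_instances(iou: np.ndarray, thresh: float = 0.75) -> int:
    # Count ground-truth instances whose best-overlapping prediction exceeds the
    # IoU threshold (a simplification of strict one-to-one matching).
    return int((iou.max(axis=1) >= thresh).sum()) if iou.size else 0
```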
We present PyTorch Connectomics (PyTC), an open-source deep-learning framework for the semantic and instance segmentation of volumetric microscopy images, built upon PyTorch. We demonstrate the effectiveness of PyTC in the field of connectomics, which aims to segment and reconstruct neurons, synapses, and other organelles like mitochondria at nanometer resolution for understanding neuronal communication, metabolism, and development in animal brains. PyTC is a scalable and flexible toolbox that tackles datasets at different scales and supports...
Multimodal representation learning for images with paired raw texts can improve the usability and generality of learned semantic concepts while significantly reducing annotation costs. In this paper, we explore the design space of loss functions in visual-linguistic pretraining frameworks and propose a novel Relaxed Contrastive (ReCo) objective, which acts as a drop-in replacement for the widely used InfoNCE loss. The key insight of ReCo is to allow a relaxed negative space by not penalizing unpaired multimodal samples...
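For context, the snippet below sketches a standard symmetric InfoNCE loss for paired image/text embeddings together with one possible "relaxed" variant that stops penalizing unpaired samples whose similarity is already below a margin. Since the abstract is truncated, the relaxation rule shown is an assumption for illustration, not the ReCo objective itself.

```python
# Symmetric InfoNCE baseline plus a hypothetical relaxed variant that ignores
# "easy" negatives instead of pushing them further apart.
import torch
import torch.nn.functional as F

def info_nce(img: torch.Tensor, txt: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = img @ txt.t() / temperature               # (B, B); diagonal = positive pairs
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def relaxed_contrastive(img, txt, temperature: float = 0.07, margin: float = 0.2) -> torch.Tensor:
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    sim = img @ txt.t() / temperature
    b = sim.size(0)
    eye = torch.eye(b, dtype=torch.bool, device=sim.device)
    # Drop unpaired samples already less similar than the margin from the softmax,
    # so they receive no further penalty (illustrative relaxation, not ReCo itself).
    easy = (~eye) & (sim < margin / temperature)
    sim = sim.masked_fill(easy, float("-inf"))
    targets = torch.arange(b, device=sim.device)
    return F.cross_entropy(sim, targets)
```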
Evaluation practices for image super-resolution (SR) use a single-value metric, such as the PSNR or SSIM, to determine model performance. This provides little insight into the source of errors and model behavior. Therefore, it is beneficial to move beyond the conventional approach and reconceptualize evaluation with interpretability as our main priority. We focus on a thorough error analysis from a variety of perspectives. Our key contribution is to leverage a texture classifier, which enables us to assign patches semantic labels,...
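The core mechanic, aggregating a per-patch error by the texture label a classifier assigns to each patch, can be sketched in a few lines. The classifier below is an arbitrary callable stand-in, not the one used in the paper.

```python
# Per-texture-class PSNR: group per-patch reconstruction quality by semantic label.
import numpy as np
from collections import defaultdict

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 1.0) -> float:
    mse = float(np.mean((a - b) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def per_class_psnr(sr_patches, hr_patches, texture_classifier):
    """sr_patches, hr_patches: aligned patches in [0, 1]; texture_classifier: patch -> label."""
    scores = defaultdict(list)
    for sr, hr in zip(sr_patches, hr_patches):
        label = texture_classifier(hr)        # e.g. "grass", "text", "edges" (hypothetical labels)
        scores[label].append(psnr(sr, hr))
    return {k: float(np.mean(v)) for k, v in scores.items()}
```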
The detection of interesting patterns in large high-dimensional datasets is difficult because of their dimensionality and pattern complexity. Therefore, analysts require automated support for the extraction of relevant patterns. In this paper, we present FDive, a visual active learning system that helps to create visually explorable relevance models, assisted by pattern-based similarity. We use a small set of user-provided labels to rank similarity measures, each consisting of a feature descriptor and a distance function...
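Ranking (feature descriptor, distance function) pairs against a handful of labels can be done, for instance, with a leave-one-out nearest-neighbour score. The sketch below illustrates that idea; the scoring choice is an assumption, not necessarily the criterion FDive uses.

```python
# Rank candidate similarity measures by how well they agree with a small label set.
import numpy as np
from itertools import product

def loo_nn_accuracy(features: np.ndarray, labels: np.ndarray, dist) -> float:
    """Leave-one-out 1-NN accuracy under a given distance function."""
    hits = 0
    for i in range(len(features)):
        d = np.array([dist(features[i], features[j]) if j != i else np.inf
                      for j in range(len(features))])
        hits += labels[d.argmin()] == labels[i]
    return hits / len(features)

def rank_similarity_measures(samples, labels, descriptors, distances):
    """descriptors: {name: fn(sample)->vector}; distances: {name: fn(a, b)->float}."""
    scored = []
    for (d_name, desc), (m_name, dist) in product(descriptors.items(), distances.items()):
        feats = np.stack([desc(s) for s in samples])
        scored.append(((d_name, m_name), loo_nn_accuracy(feats, np.asarray(labels), dist)))
    return sorted(scored, key=lambda t: t[1], reverse=True)   # best-agreeing measure first
```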
Mapping neuronal networks that underlie behavior has become a central focus in neuroscience. While serial section electron microscopy (ssEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information that helps identify cell types or their functional properties. Volumetric correlated light and electron microscopy (vCLEM) combines ssEM and volumetric fluorescence microscopy to incorporate molecular labeling into ssEM datasets. We developed an approach that uses small fluorescent single-chain variable fragment (scFv)...
Abstract Ecological data are collected and shared at an increasingly rapid pace, but often in inconsistent and untraceable processed forms. Images of wood contain a wealth of information, such as colours and textures, but they are most commonly reduced to ring‐width measurements before they can be stored in various common file formats. Archiving digital images of samples in libraries, which have been developed for ecological analysis and are publicly available, remains the exception. We present the Wood Image Analysis Dataset (WIAD), an open‐source...
Abstract Complex wiring between neurons underlies the information-processing network enabling all brain functions, including cognition and memory. For understanding how this network is structured, processes information, and changes over time, comprehensive visualization of its architecture in living tissue, with its cellular and molecular components, would open up major opportunities. However, electron microscopy (EM) provides the nanometre-scale resolution required for full in-silico reconstruction1–5, yet it is limited to...
Instance segmentation for unlabeled imaging modalities is a challenging but essential task, as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying a pre-trained model optimized on diverse training data or conducting domain translation and image segmentation as two independent steps. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation jointly using a unified framework. Besides the...
3D instance segmentation for unlabeled imaging modalities is a challenging but essential task, as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying pre-trained models optimized on diverse training data or sequentially conducting image translation and segmentation with two relatively independent networks. In this work, we propose a novel Cyclic...
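At a high level, such a joint framework optimizes an adversarial translation term, a cycle-consistency term, and a supervised segmentation term on the annotated modality. The function below is a schematic combination of those standard losses; the weights and argument names are illustrative assumptions, not CySGAN's exact objective.

```python
# Schematic joint objective for cyclic image translation plus segmentation.
import torch
import torch.nn.functional as F

def joint_loss(real_src, rec_src, disc_fake_logits, seg_logits, seg_labels,
               w_adv: float = 1.0, w_cyc: float = 10.0, w_seg: float = 1.0) -> torch.Tensor:
    # Generator tries to make translated images look real to the discriminator.
    adv = F.binary_cross_entropy_with_logits(disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Translating to the target domain and back should reproduce the input.
    cyc = F.l1_loss(rec_src, real_src)
    # Segmentation is supervised only where annotations exist (the labeled source modality).
    seg = F.cross_entropy(seg_logits, seg_labels)
    return w_adv * adv + w_cyc * cyc + w_seg * seg
```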