Avinash Upadhyay

ORCID: 0000-0001-9000-8345
Research Areas
  • Advanced Image Processing Techniques
  • Advanced Image Fusion Techniques
  • Image and Signal Denoising Methods
  • Advanced Vision and Imaging
  • Remote-Sensing Image Classification
  • Generative Adversarial Networks and Image Synthesis
  • Image Enhancement Techniques
  • Image Processing Techniques and Applications
  • Infrared Target Detection Methodologies
  • AI in cancer detection
  • Smart Agriculture and AI
  • Advanced Image and Video Retrieval Techniques
  • CCD and CMOS Imaging Sensors
  • Optical measurement and interference techniques
  • Computer Graphics and Visualization Techniques
  • Advanced Neural Network Applications
  • Spectroscopy and Chemometric Analyses
  • Digital Media Forensic Detection
  • Advanced Optical Sensing Technologies
  • Remote Sensing and Land Use
  • Visual Attention and Saliency Detection
  • Digital Holography and Microscopy
  • Face Recognition and Perception
  • Image and Video Quality Assessment
  • Remote Sensing in Agriculture

Bennett University
2024

Central Electronics Engineering Research Institute
2018-2019

Seoul National University
2019

Green Circle
2019

This paper reviews the 2nd NTIRE challenge on single image super-resolution (restoration of rich details in a low-resolution image) with focus on the proposed solutions and results. The challenge had 4 tracks. Track 1 employed the standard bicubic downscaling setup, while Tracks 2, 3 and 4 used realistic unknown downgrading operators simulating the camera acquisition pipeline. The operators were learnable through provided pairs of low- and high-resolution training images. The tracks had 145, 114, 101 and 113 registered participants, resp., and 31 teams competed in the final testing...

10.1109/cvprw.2018.00130 article EN 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018-06-01
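
As a concrete illustration of the Track 1 setup described above, the sketch below generates LR/HR training pairs by bicubic downscaling. The scale factor and folder names are illustrative assumptions, not taken from the challenge code.

```python
# Minimal sketch of a bicubic-downscaling data setup (Track 1 style):
# each high-resolution image is downscaled to produce its low-resolution input.
from pathlib import Path
from PIL import Image

SCALE = 4  # assumed scale factor for illustration; each track defines its own

def make_lr_hr_pair(hr_path: Path, lr_dir: Path, scale: int = SCALE) -> None:
    """Bicubically downscale one HR image to create the matching LR input."""
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % scale, h - h % scale))  # make size divisible
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    lr_dir.mkdir(parents=True, exist_ok=True)
    lr.save(lr_dir / hr_path.name)

if __name__ == "__main__":
    # hypothetical folder names, not the official challenge layout
    for p in Path("train_HR").glob("*.png"):
        make_lr_hr_pair(p, Path("train_LR_bicubic_X4"))
```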

This paper reviews the first challenge on spectral image reconstruction from RGB images, i.e., recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. The challenge was divided into 2 tracks: the "Clean" track sought HS recovery from noiseless RGB images obtained with a known response function (representing a spectrally-calibrated camera), while the "Real World" track challenged participants to recover HS cubes from JPEG-compressed RGB images generated by an unknown response function. To facilitate the challenge, the BGU Hyperspectral Image Database [4]...

10.1109/cvprw.2018.00138 article EN 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018-06-01
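
The "Clean" track setup above can be illustrated with a short sketch that projects a hyperspectral cube onto three channels through a known camera response matrix. The array shapes and random data are placeholders, not the BGU database format.

```python
# Sketch: simulate a "Clean"-track RGB image from a hyperspectral cube
# using a known spectral response matrix.
import numpy as np

def hs_to_rgb(cube: np.ndarray, response: np.ndarray) -> np.ndarray:
    """cube: (H, W, B) radiance over B spectral bands.
    response: (B, 3) spectral sensitivity of the R, G, B channels."""
    rgb = cube @ response               # integrate every band into 3 channels
    return rgb / max(rgb.max(), 1e-8)   # normalize for display / training

if __name__ == "__main__":
    bands = 31                               # e.g. 400-700 nm at 10 nm steps (assumed)
    cube = np.random.rand(64, 64, bands)     # stand-in for a real HS image
    response = np.random.rand(bands, 3)      # stand-in for a calibrated response
    print(hs_to_rgb(cube, response).shape)   # (64, 64, 3)
```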

Hyperspectral cameras are used to preserve fine spectral details of scenes that are not captured by traditional RGB cameras, which comprehensively quantize radiance into RGB images. Spectral details provide additional information that improves the performance of numerous image-based analytic applications, but due to the high cost of hyperspectral hardware and the associated physical constraints, hyperspectral images are not easily available for further processing. Motivated by the success of deep learning in various computer vision tasks, we propose a 2D convolution neural network and a 3D...

10.1109/cvprw.2018.00129 article EN 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018-06-01
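
A minimal sketch of the kind of 2D-convolutional RGB-to-HS mapping the abstract refers to is given below; the layer widths, kernel sizes and 31-band output are illustrative assumptions rather than the architecture from the paper (the 3D-convolution variant is omitted).

```python
# Toy PyTorch sketch of a CNN that regresses a hyperspectral cube from RGB.
import torch
import torch.nn as nn

class RGB2HS2D(nn.Module):
    """Plain 2D-convolutional RGB -> HS regressor (spatial size preserved)."""
    def __init__(self, bands: int = 31):  # 31 bands assumed for illustration
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, bands, kernel_size=3, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.net(rgb)                    # (N, bands, H, W)

if __name__ == "__main__":
    out = RGB2HS2D()(torch.randn(1, 3, 64, 64))
    print(out.shape)                            # torch.Size([1, 31, 64, 64])
```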

This paper reviews the first NTIRE challenge on video super-resolution (restoration of rich details in low-resolution frames) with focus on the proposed solutions and results. A new REalistic and Diverse Scenes dataset (REDS) was employed. The challenge was divided into 2 tracks. Track 1 employed the standard bicubic downscaling setup, while Track 2 had realistic dynamic motion blurs. The competitions had 124 and 104 registered participants, respectively. There were a total of 14 teams in the final testing phase. They gauge the state-of-the-art in video super-resolution.

10.1109/cvprw.2019.00250 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019-06-01

This paper reviews the NTIRE challenge on image colorization (estimating color information from the corresponding gray image) with focus on the proposed solutions and results. It is the first challenge of its kind. The challenge had 2 tracks. Track 1 takes a single gray image as input. In Track 2, in addition to the input image, some color seeds (randomly sampled from the latent color image) are also provided for guiding the colorization process. The operators were learnable through pairs of training images. The tracks had 188 registered participants, and 8 teams competed in the final testing phase.

10.1109/cvprw.2019.00276 article EN 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2019-06-01

Recognizing text from degraded and low-resolution document images is still an open challenge in the vision community. Existing recognition systems require a certain resolution and fail if the document is of low resolution or heavily noisy. This paper presents an end-to-end trainable deep-learning based framework for the joint optimization of enhancement and recognition. We use a generative adversarial network (GAN) to perform image denoising, followed by a deep back-projection network (DBPN) for super-resolution, and use these super-resolved features...

10.1109/icdar.2019.00019 article EN 2019 International Conference on Document Analysis and Recognition (ICDAR) 2019-09-01
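
The staged design described above (GAN denoising, then DBPN-style super-resolution, then recognition) can be sketched as a single end-to-end module. The three stage classes below are placeholders standing in for the trained networks; only the wiring is meant to be illustrative.

```python
# Sketch of a denoise -> super-resolve -> recognize pipeline composed end to end.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Stand-in for the GAN generator that cleans the degraded input."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x):
        return self.conv(x)

class SuperResolver(nn.Module):
    """Stand-in for a DBPN-style upsampler (here a single 2x transposed conv)."""
    def __init__(self):
        super().__init__()
        self.up = nn.ConvTranspose2d(1, 1, 4, stride=2, padding=1)
    def forward(self, x):
        return self.up(x)

class Recognizer(nn.Module):
    """Stand-in for the recognition head producing per-position class logits."""
    def __init__(self, num_classes: int = 37):  # 36 symbols + blank, assumed
        super().__init__()
        self.head = nn.Conv2d(1, num_classes, 1)
    def forward(self, x):
        return self.head(x)

class JointPipeline(nn.Module):
    """Compose the stages so they can be optimized jointly."""
    def __init__(self):
        super().__init__()
        self.denoise, self.sr, self.ocr = Denoiser(), SuperResolver(), Recognizer()
    def forward(self, degraded):
        return self.ocr(self.sr(self.denoise(degraded)))

if __name__ == "__main__":
    logits = JointPipeline()(torch.randn(1, 1, 32, 128))  # grayscale text-line crop
    print(logits.shape)                                   # torch.Size([1, 37, 64, 256])
```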

10.1109/icip51287.2024.10648074 article EN 2024 IEEE International Conference on Image Processing (ICIP) 2024-09-27

Convolutional neural network based architectures have achieved decent perceptual-quality super-resolution on natural images for small scaling factors (2X and 4X). However, image super-resolution at large magnification (8X) is an extremely challenging problem for the computer vision community. In this paper, we propose a novel Improved Residual Gradual Up-Scaling Network (IRGUN) to improve the quality of super-resolved images at a large magnification factor. IRGUN has a Gradual Upsampling and Residue-based Enhancement Network (GUREN) which comprises...

10.1109/cvprw.2018.00128 article EN 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2018-06-01
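
The gradual up-scaling idea, reaching 8X by cascading 2X stages rather than making a single large jump, can be sketched as follows. The per-stage layers are placeholders, not the GUREN blocks from the paper.

```python
# Sketch: 8x super-resolution as three cascaded 2x residual upscaling stages.
import torch
import torch.nn as nn
import torch.nn.functional as F

def up2x_stage(channels: int = 32) -> nn.Sequential:
    """One 2x refinement stage: feature extraction + pixel-shuffle upsampling."""
    return nn.Sequential(
        nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(channels, 3 * 4, 3, padding=1),
        nn.PixelShuffle(2),                 # (N, 12, H, W) -> (N, 3, 2H, 2W)
    )

class Gradual8x(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.ModuleList(up2x_stage() for _ in range(3))  # 2*2*2 = 8x

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        x = lr
        for stage in self.stages:
            # Each stage adds a learned residual on top of plain interpolation.
            base = F.interpolate(x, scale_factor=2, mode="bilinear",
                                 align_corners=False)
            x = base + stage(x)
        return x

if __name__ == "__main__":
    print(Gradual8x()(torch.randn(1, 3, 16, 16)).shape)  # torch.Size([1, 3, 128, 128])
```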

Synthesizing high-quality saliency maps from noisy images is a challenging problem in computer vision and has many practical applications. Samples generated by existing techniques for saliency detection cannot handle the noise perturbations smoothly and fail to delineate the salient objects present in a given scene. In this paper, we present a novel end-to-end coupled Denoising based Saliency Prediction with Generative Adversarial Network (DSAL-GAN) framework to address salient object detection in noisy images. DSAL-GAN consists of two generative...

10.48550/arxiv.1904.01215 preprint EN other-oa arXiv (Cornell University) 2019-01-01
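
The coupled arrangement, a denoising generator feeding a saliency-prediction generator so both can be trained jointly, is sketched below with the adversarial discriminators omitted. The layer choices are placeholders rather than the DSAL-GAN design.

```python
# Sketch: denoising generator chained into a saliency-prediction generator.
import torch
import torch.nn as nn

class DenoiseG(nn.Module):
    """Residual cleanup of the noisy RGB input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, noisy):
        return noisy + self.body(noisy)

class SaliencyG(nn.Module):
    """Per-pixel saliency prediction on the denoised image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, clean):
        return self.body(clean)

if __name__ == "__main__":
    g_denoise, g_saliency = DenoiseG(), SaliencyG()
    saliency = g_saliency(g_denoise(torch.randn(1, 3, 64, 64)))
    print(saliency.shape)   # torch.Size([1, 1, 64, 64])
```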

This paper presents a novel mid-wave infrared (MWIR) small target detection dataset (MWIRSTD) comprising 14 video sequences containing approximately 1053 images with annotated targets of three distinct classes of objects. Captured using cooled MWIR imagers, the dataset offers a unique opportunity for researchers to develop and evaluate state-of-the-art methods for small object detection in realistic scenes. Unlike existing datasets, which primarily consist of uncooled thermal imagery or synthetic data superimposed onto a background or vice...

10.48550/arxiv.2406.08063 preprint EN arXiv (Cornell University) 2024-06-12
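
A loader sketch for a dataset of per-sequence frame folders with per-frame annotations is shown below. The directory layout, file names and annotation format are assumptions for illustration, not the actual MWIRSTD release structure.

```python
# Hypothetical loader for a small-target IR video dataset:
# <root>/<sequence>/frames/*.png and <root>/<sequence>/annotations/*.json (assumed layout).
from pathlib import Path
import json

def load_sequence(seq_dir: Path):
    """Yield (frame_path, annotations) pairs for one video sequence."""
    for frame in sorted(seq_dir.glob("frames/*.png")):
        ann_file = seq_dir / "annotations" / (frame.stem + ".json")
        boxes = json.loads(ann_file.read_text()) if ann_file.exists() else []
        yield frame, boxes              # boxes: list of target bounding boxes

if __name__ == "__main__":
    root = Path("MWIRSTD")              # assumed dataset root directory
    for seq in sorted(p for p in root.iterdir() if p.is_dir()):
        for frame, boxes in load_sequence(seq):
            print(frame.name, len(boxes))
```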

Synthetic infrared (IR) scene and target generation is an important computer vision problem, as it allows the generation of realistic IR images and targets for training and testing various applications, such as remote sensing, surveillance, and target recognition. It also helps reduce the cost and risk associated with collecting real-world data. This survey paper aims to provide a comprehensive overview of the conventional mathematical modelling-based methods and the deep learning-based methods used for generating synthetic IR scenes and targets. The survey discusses...

10.48550/arxiv.2408.06868 preprint EN arXiv (Cornell University) 2024-08-13