- Adversarial Robustness in Machine Learning
- Biometric Identification and Security
- Face recognition and analysis
- Digital Media Forensic Detection
- Anomaly Detection Techniques and Applications
- Face and Expression Recognition
- Bacillus and Francisella bacterial research
- Generative Adversarial Networks and Image Synthesis
- User Authentication and Security Systems
- Physical Unclonable Functions (PUFs) and Hardware Security
- Advanced Malware Detection Techniques
- Advanced Electron Microscopy Techniques and Applications
- Advanced Image Processing Techniques
- Domain Adaptation and Few-Shot Learning
- Integrated Circuits and Semiconductor Failure Analysis
- Electron and X-Ray Spectroscopy Techniques
- Forensic Fingerprint Detection Methods
- Advanced Steganography and Watermarking Techniques
- Advanced Neural Network Applications
- Advancements in Photolithography Techniques
- Forensic and Genetic Research
- Stock Market Forecasting Methods
- Advanced Optical Sensing Technologies
- Image and Signal Denoising Methods
- CCD and CMOS Imaging Sensors
Indian Institute of Science Education and Research, Bhopal
2022-2025
IBM (United States)
2025
Shanghai Key Laboratory of Trustworthy Computing
2025
Zimmer Biomet (United States)
2025
Indian Institute of Technology Jodhpur
2022-2023
University at Buffalo, State University of New York
2021-2023
Indraprastha Institute of Information Technology Delhi
2016-2022
Indian Institute of Technology Delhi
2015-2022
Texas A&M University – Kingsville
2020-2022
National Institute of Technology Kurukshetra
2022
Deep neural network (DNN) based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, researchers have started to design methods that exploit the drawbacks of deep learning algorithms, questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to DNNs for face recognition: (i)...
Face spoofing can be performed in a variety of ways, such as replay, print, and mask attacks, to deceive an automated recognition algorithm. To mitigate the effect of such attempts, face anti-spoofing approaches aim to distinguish between genuine samples and spoofed samples. The focus of this paper is to detect spoofing attempts via Haralick texture features. The proposed algorithm extracts block-wise texture features from redundant discrete wavelet transformed frames obtained from a video. The dimensionality of the feature vector is reduced using...
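A minimal sketch of the block-wise texture idea, assuming a one-level redundant (stationary) wavelet transform via `pywt` and GLCM-derived Haralick-style properties from scikit-image; the block size, wavelet, and chosen properties are illustrative, not the paper's exact configuration.

```python
# Sketch: block-wise GLCM (Haralick-style) features from a redundant wavelet
# transformed frame. Libraries and parameters below are illustrative choices.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def frame_features(gray_frame, wavelet="haar", block=32):
    """Feature vector for one grayscale frame (dimensions should be even)."""
    # Redundant (stationary) wavelet transform, one undecimated level.
    cA, (cH, cV, cD) = pywt.swt2(gray_frame.astype(float), wavelet, level=1)[0]
    feats = []
    for band in (cA, cH, cV, cD):
        # Quantize each subband to 8 bits before building co-occurrence matrices.
        band = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-8))
        for i in range(0, band.shape[0] - block + 1, block):
            for j in range(0, band.shape[1] - block + 1, block):
                glcm = graycomatrix(band[i:i + block, j:j + block],
                                    distances=[1], angles=[0, np.pi / 2],
                                    levels=256, symmetric=True, normed=True)
                for prop in ("contrast", "homogeneity", "energy", "correlation"):
                    feats.extend(graycoprops(glcm, prop).ravel())
    return np.asarray(feats)  # reduce (e.g., PCA) and feed to a classifier
```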
Face recognition systems are susceptible to presentation attacks such as printed photo attacks, replay attacks, and 3D mask attacks. These attacks, primarily studied in the visible spectrum, aim to obfuscate or impersonate a person's identity. This paper presents a unique multispectral video face database for presentation attack detection using latex masks. The proposed Multispectral Latex Mask based Video Presentation Attack (MLFP) database contains 1350 videos in the visible, near infrared, and thermal spectrums. Since the database consists of subjects without any...
Biometric systems can be attacked in several ways, the most common being spoofing of the input sensor. Therefore, anti-spoofing is an essential prerequisite against attacks on biometric systems. Face recognition is even more vulnerable, as the image capture is non-contact based. Several methods have been proposed in the literature for both contact and non-contact based modalities, often using video to study the temporal characteristics of a real vs. spoofed signal. This paper presents a novel multi-feature evidence aggregation...
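For illustration only, a toy score-level aggregation across feature-specific spoof detectors; the feature names, weights, and decision rule are placeholders, not the fusion scheme proposed in the paper.

```python
# Toy evidence aggregation: fuse per-feature spoof scores into one decision score.
import numpy as np

def aggregate_scores(score_dict, weights=None):
    """score_dict maps feature name -> spoof probability in [0, 1]."""
    names = sorted(score_dict)
    scores = np.array([score_dict[n] for n in names])
    w = np.full(len(scores), 1.0 / len(scores)) if weights is None else np.asarray(weights)
    return float(np.dot(w, scores))  # compare against a threshold to flag a spoof

fused = aggregate_scores({"texture": 0.83, "motion": 0.61, "quality": 0.70})
```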
Advancements in smartphone applications have empowered even non-technical users to perform sophisticated operations such as morphing faces with a few tap operations. While such enablements have positive effects, on the negative side, now anyone can digitally attack face (biometric) recognition systems. For example, the face swapping application of Snapchat can easily create "swapped" identities and circumvent a recognition system. This research presents a novel database, termed SWAPPED - Digital Attack Video Face Database, prepared using...
Face recognition algorithms have demonstrated very high performance, suggesting suitability for real world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes the different ways in which a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks, such as physical presentation attacks, disguise/makeup, and digital adversarial and morphing/tampering attacks using GANs, are discussed. We also present...
Iris recognition systems may be vulnerable to presentation attacks such as textured contact lenses, print attacks, and synthetic iris images. Increasing applications of iris recognition have raised the importance of efficient presentation attack detection algorithms. In this paper, we propose a novel algorithm for detecting presentation attacks using a combination of handcrafted and deep learning based features. The proposed algorithm combines local and global Haralick texture features in the multi-level Redundant Discrete Wavelet Transform domain with VGG features to encode...
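A hedged sketch of fusing handcrafted and deep features, assuming a torchvision VGG-16 backbone whose second fully connected layer supplies the deep descriptor; the preprocessing, layer choice, and concatenation are illustrative rather than the paper's exact pipeline.

```python
# Sketch: concatenate a handcrafted texture vector with VGG features before a classifier.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def deep_features(pil_image):
    x = preprocess(pil_image).unsqueeze(0)
    with torch.no_grad():
        f = vgg.features(x)            # convolutional feature maps
        f = vgg.avgpool(f).flatten(1)  # 25088-d vector
        f = vgg.classifier[:4](f)      # up to the second FC layer (4096-d)
    return f.squeeze(0).numpy()

def combined_features(pil_image, handcrafted_vec):
    # Fused descriptor for an SVM or shallow neural network classifier.
    return np.concatenate([handcrafted_vec, deep_features(pil_image)])
```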
The high performance of deep neural network based systems has attracted many applications in object recognition and face recognition. However, researchers have also demonstrated them to be highly sensitive to adversarial perturbations; hence, they tend to be unreliable and lack robustness. While most of the research on adversarial perturbations focuses on image specific attacks, recently, image-agnostic universal perturbations have been proposed, which learn a noise pattern over the training distribution and have a broader impact on real-world security applications. Such attacks can...
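To make the image-agnostic idea concrete, here is a simplified sketch that learns one shared perturbation by accumulating signed gradients over a training loader and clipping to an L-infinity budget; this FGSM-style variant is an assumption for illustration, not the algorithm studied in the paper.

```python
# Simplified universal (image-agnostic) perturbation: one noise pattern for all inputs.
import torch

def universal_perturbation(model, loader, eps=8 / 255, epochs=5, device="cpu"):
    model.eval().to(device)
    delta = torch.zeros(1, 3, 224, 224, device=device)   # shared noise pattern
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            d = delta.clone().requires_grad_(True)
            loss = loss_fn(model(images + d), labels)
            loss.backward()
            # Move the shared noise in the direction that increases the loss,
            # then project back into the epsilon ball.
            delta = (delta + (1 / 255) * d.grad.sign()).clamp(-eps, eps).detach()
    return delta  # apply to any input: x_adv = (x + delta).clamp(0, 1)
```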
Deep learning models are widely used for various purposes such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks. These attacks compute perturbations to generate images that decrease the performance of deep learning models. In this research, we developed a toolbox, termed SmartBox, for benchmarking adversarial attack detection and mitigation algorithms. SmartBox is a python based toolbox which provides an open source implementation of these algorithms. Extended...
Several computer vision applications such as object detection and face recognition have started to rely completely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both technical and commercial perspectives. Employment of cryptographic hash as well...
Deep learning algorithms provide state-of-the-art results on a multitude of applications. However, it is also well established that they are highly vulnerable to adversarial perturbations. It is often believed that the solution to this vulnerability of deep systems must come from deep networks only. Contrary to this common understanding, in this article, we propose a non-deep approach that searches over a set of well-known image transforms such as the Discrete Wavelet Transform and the Discrete Sine Transform, classifying the features with a support vector...
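A minimal sketch of this non-deep detection idea, assuming a one-level 2-D DWT and an n-dimensional discrete sine transform as the transform set and simple coefficient statistics as features; the wavelet, statistics, and SVM kernel are illustrative choices rather than the paper's selected configuration.

```python
# Sketch: transform-domain statistics + SVM to separate clean from adversarial images.
import numpy as np
import pywt
from scipy.fft import dstn
from sklearn.svm import SVC

def transform_features(gray_img):
    img = gray_img.astype(float)
    cA, (cH, cV, cD) = pywt.dwt2(img, "db1")        # 2-D discrete wavelet transform
    bands = [cA, cH, cV, cD, dstn(img, type=2)]     # plus a discrete sine transform
    stats = []
    for b in bands:
        stats += [b.mean(), b.std(), np.abs(b).max(), np.percentile(np.abs(b), 90)]
    return np.array(stats)

def train_detector(clean_images, adv_images):
    """Both arguments are lists of 2-D grayscale numpy arrays."""
    X = np.stack([transform_features(x) for x in clean_images + adv_images])
    y = np.array([0] * len(clean_images) + [1] * len(adv_images))
    return SVC(kernel="rbf", probability=True).fit(X, y)
```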
Blockchain has emerged as a leading technology that ensures security in a distributed framework. Recently, it has been shown that blockchain can be used to convert the traditional blocks of any deep learning model into secure systems. In this research, we model a trained biometric recognition system in an architecture which leverages the blockchain technology to provide a fault tolerant access environment. The advantage of the proposed approach is that tampering with one particular component alerts the whole system and helps in easy identification of `any' possible...
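A toy sketch of the tamper-evidence idea, assuming the model is split into named components (e.g., per-layer weights) linked by a hash chain so that modifying any one block invalidates verification from that point on; this is a simplified illustration, not the paper's full blockchain architecture.

```python
# Toy hash chain over model components for tamper detection.
import hashlib
import pickle

def build_chain(components):
    """components: ordered list of (name, weights) for the trained model."""
    chain, prev_hash = [], "genesis"
    for name, weights in components:
        digest = hashlib.sha256(
            (prev_hash + name).encode() + pickle.dumps(weights)).hexdigest()
        chain.append({"name": name, "hash": digest, "prev": prev_hash})
        prev_hash = digest
    return chain

def find_tampered(components, chain):
    """Return the first component whose block no longer verifies, else None."""
    prev_hash = "genesis"
    for (name, weights), block in zip(components, chain):
        digest = hashlib.sha256(
            (prev_hash + name).encode() + pickle.dumps(weights)).hexdigest()
        if digest != block["hash"]:
            return name
        prev_hash = digest
    return None
```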
Advancements in machine learning and deep learning techniques have led to the development of sophisticated and accurate face recognition systems. However, for the past few years, researchers have been exploring the vulnerabilities of these systems towards digital attacks. Creation of digitally altered images has become an easy task with the availability of various image editing tools and mobile applications such as Snapchat. Morphing based attacks are used to elude and gain the identity of legitimate users by fooling the networks. In this research, partial...
Driven by the advances in deep learning, highly photo-realistic techniques capable of switching the identity and expression of faces have emerged. Cheap access to computing has brought such technology within the reach of anyone with a computer and the Internet, including people with sinister motives. To detect these forgeries, we present a novel compression resilient approach for deepfake detection in videos. The proposed approach employs motion magnification as a pre-processing step to amplify the temporal inconsistencies common in forged...
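A simplified sketch of Eulerian-style motion magnification as a pre-processing step: band-pass filter each pixel's temporal signal and add it back amplified so subtle frame-to-frame inconsistencies become more visible to a downstream classifier. The filter band and amplification factor are illustrative, and the full Eulerian method additionally operates on a spatial pyramid.

```python
# Simplified temporal motion magnification for a stack of grayscale frames.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(frames, fps=30.0, low=0.4, high=3.0, alpha=10.0):
    """frames: (T, H, W) float array in [0, 1]; T should exceed the filter pad length."""
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, frames, axis=0)        # temporal band-pass per pixel
    return np.clip(frames + alpha * filtered, 0.0, 1.0)
```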
Several successful adversarial attacks have demonstrated the vulnerabilities of deep learning algorithms. These attacks are detrimental in building deep learning based dependable AI applications. Therefore, it is imperative to build a defense mechanism to protect the integrity of deep learning models. In this paper, we present a novel "defense layer" in a network which aims to block the generation of adversarial noise and prevents an attack in black-box and gray-box settings. The parameter-free layer, when applied to any convolutional network, helps in achieving protection...
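For intuition, a hedged PyTorch sketch of what a parameter-free layer can look like: a module with no learnable weights that lightly quantizes and locally smooths its input, operations that tend to disrupt gradient-based noise crafting. The specific operations are an assumption for illustration, not the defense layer proposed in this paper.

```python
# Illustrative parameter-free layer that can be prepended to a convolutional network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DefenseLayer(nn.Module):
    def __init__(self, levels=32):
        super().__init__()
        self.levels = levels  # fixed setting, no trainable parameters

    def forward(self, x):
        x = torch.round(x * self.levels) / self.levels            # feature quantization
        x = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)   # local smoothing
        return x

# Example usage (backbone choice is arbitrary):
# model = nn.Sequential(DefenseLayer(), torchvision.models.resnet18(weights=None))
```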
Deep learning solutions are vulnerable to adversarial perturbations, which can lead a "frog" image to be misclassified as a "deer" or a random pattern to be classified as a "guitar". Adversarial attack generation algorithms generally utilize the knowledge of the database and the CNN model to craft the noise. In this research, we present a novel scheme, termed Camera Inspired Perturbations, to generate adversarial noise. The proposed approach relies on the noise embedded in an image due to environmental factors and the camera, which is then incorporated into the attack. We extract these noise patterns using filtering...
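A minimal sketch of the underlying idea, assuming the noise pattern is estimated as a high-frequency residual (reference photo minus a Gaussian-smoothed copy) and then added to the target image with a small scaling factor; the filter and strength are illustrative stand-ins for the paper's filtering scheme.

```python
# Estimate a camera/environment-style noise residual and add it as a perturbation.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(reference_img, sigma=1.5):
    """reference_img: 2-D grayscale float image in [0, 1]."""
    return reference_img - gaussian_filter(reference_img, sigma=sigma)

def camera_inspired_perturbation(target_img, reference_img, strength=0.05):
    residual = noise_residual(reference_img)
    residual = residual / (np.abs(residual).max() + 1e-8)   # normalize the pattern
    return np.clip(target_img + strength * residual, 0.0, 1.0)
```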
Bacterial pathogenicity research has traditionally focused on gene-level content with experimentally confirmed functional properties. Hence, significant inferences are made based on similarity to known pathotypes and DNA-based genomic subtyping for risk assessment. Herein, we achieved de novo prediction of human virulence in Klebsiella pneumoniae by expanding gene-level analysis with spatially proximal gene discoveries linked to domain architectures across all prokaryotes. This approach identified ontology functions not...
With the advancements in technology and the growing popularity of facial photo editing in the social media landscape, tools such as face swapping and morphing have become increasingly accessible to the general public. This opens up possibilities for different kinds of presentation attacks, which can be taken advantage of by impostors to gain unauthorized access to a biometric system. Moreover, the wide availability of 3D printers has caused a shift from print attacks to mask attacks. With the increasing types of attacks, it is necessary to come up with a generic...