- Advanced Steganography and Watermarking Techniques
- Digital Media Forensic Detection
- Privacy-Preserving Technologies in Data
- Adversarial Robustness in Machine Learning
- Chaos-based Image/Signal Encryption
- Generative Adversarial Networks and Image Synthesis
- Privacy, Security, and Data Protection
- Face recognition and analysis
- Biometric Identification and Security
- Internet Traffic Analysis and Secure E-voting
- Anomaly Detection Techniques and Applications
- Cryptography and Data Security
- Topic Modeling
- User Authentication and Security Systems
- SARS-CoV-2 detection and testing
- Advanced biosensing and bioanalysis techniques
- Speech and Audio Processing
- Speech Recognition and Synthesis
- Video Analysis and Summarization
- Natural Language Processing Techniques
- Music and Audio Processing
- Video Surveillance and Tracking Methods
- Advanced Malware Detection Techniques
- Digital Rights Management and Security
- Biosensors and Analytical Detection
National Institute of Informatics
2016-2025
The University of Tokyo
2019-2025
The Graduate University for Advanced Studies, SOKENDAI
2015-2024
Nippon Soken (Japan)
2023
Tokyo University of Information Sciences
2021-2022
Research Organization of Information and Systems
2007-2019
Hitotsubashi University
2010-2019
National Institute for Japanese Language and Linguistics
2014
Ho Chi Minh City University of Science
2011
University of Electro-Communications
2004-2009
This paper presents a method to automatically and efficiently detect face tampering in videos, focusing particularly on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face. Traditional image forensics techniques are usually not well suited to videos due to the compression that strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers, to focus on the mesoscopic properties of images. We evaluate those fast networks on an existing dataset and a dataset we have...
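A minimal sketch of the kind of shallow, mesoscopic-scale classifier this abstract describes; the layer counts, channel widths, and input size here are illustrative assumptions, not the authors' published architecture:

```python
import torch
import torch.nn as nn

class ShallowMesoNet(nn.Module):
    """Illustrative low-layer CNN for real-vs-forged face classification.

    Keeping the network shallow limits the receptive field to a mesoscopic
    scale (between noise-level and semantic-level features). All
    hyperparameters below are assumptions for illustration only.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 8 * 8, 16), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(16, 1),  # single logit: forged vs. real
        )

    def forward(self, x):  # x: (N, 3, 256, 256) face crops
        return self.classifier(self.features(x))
```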
Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. State-of-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos...
Detecting manipulated images and videos is an important topic in digital media forensics. Most detection methods use binary classification to determine the probability of a query being manipulated. Another important topic is locating manipulated regions (i.e., performing segmentation), which are mostly created by three commonly used attacks: removal, copy-move, and splicing. We have designed a convolutional neural network that uses a multi-task learning approach to simultaneously detect manipulated content and locate the manipulated regions for each query. Information gained by performing one...
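A hedged sketch of a multi-task set-up of this kind: a shared encoder feeds both a binary detection head and a segmentation head for manipulated regions. The module shapes and loss weighting below are assumptions for illustration, not the paper's exact network:

```python
import torch
import torch.nn as nn

class MultiTaskForensicsNet(nn.Module):
    """Shared encoder with two heads: manipulation detection and localization.

    Channel sizes and the decoder design are illustrative assumptions.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Classification head: probability that the query is manipulated.
        self.detect_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )
        # Segmentation head: per-pixel mask of manipulated regions.
        self.segment_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        z = self.encoder(x)              # features shared by both tasks
        return self.detect_head(z), self.segment_head(z)

def multitask_loss(cls_logit, mask_logit, label, mask, alpha=0.5):
    """Weighted sum of the two task losses; alpha is an assumed weighting."""
    cls_loss = nn.functional.binary_cross_entropy_with_logits(cls_logit.squeeze(1), label)
    seg_loss = nn.functional.binary_cross_entropy_with_logits(mask_logit, mask)
    return alpha * cls_loss + (1 - alpha) * seg_loss
```

Sharing the encoder is what lets information gained on one task (say, localization) improve the other (detection), which is the point the abstract makes.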
This paper presents a deep-learning method for distinguishing computer-generated graphics from real photographic images. The proposed method uses a Convolutional Neural Network (CNN) with a custom pooling layer to optimize the feature extraction scheme of current best-performing algorithms. Local estimates of class probabilities are computed and aggregated to predict the label of the whole picture. We evaluate our work on recent photo-realistic computer graphics and show that it outperforms state-of-the-art methods for both local and full-image classification.
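The aggregation step mentioned here (local class-probability estimates combined into a whole-image label) could look roughly like the following; the patch size, stride, and simple mean rule are assumptions rather than the paper's exact procedure:

```python
import torch

def predict_full_image(model, image, patch=64, stride=64):
    """Aggregate local CG-vs-photo probabilities into a whole-image decision.

    `model` is any patch classifier returning a single logit per patch;
    the patch size, stride, and mean aggregation are illustrative choices.
    """
    _, h, w = image.shape
    probs = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            crop = image[:, top:top + patch, left:left + patch].unsqueeze(0)
            probs.append(torch.sigmoid(model(crop)).item())
    score = sum(probs) / len(probs)        # mean of the local estimates
    return score, score > 0.5              # image-level probability and label
```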
The revolution in computer hardware, especially in graphics processing units and tensor processing units, has enabled significant advances in artificial intelligence algorithms. In addition to their many beneficial applications in daily life and business, computer-generated/manipulated images and videos can be used for malicious purposes that violate security systems, privacy, and social trust. The deepfake phenomenon and its variations enable a normal user to use his or her personal computer to easily create fake videos of anybody from a short real...
Although voice conversion (VC) algorithms have achieved remarkable success along with the development of machine learning, superior performance is still difficult to achieve when using nonparallel data. In this paper, we propose using a cycle-consistent adversarial network (CycleGAN) for nonparallel data-based VC training. A CycleGAN is a generative adversarial network (GAN) originally developed for unpaired image-to-image translation. A subjective evaluation of inter-gender conversion demonstrated that the proposed method significantly outperformed a method based...
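As a rough sketch of the cycle-consistency idea applied to nonparallel voice conversion: two generators map between source-speaker and target-speaker feature spaces, and a cycle loss ties them together so no time-aligned parallel utterances are needed. Generator and discriminator internals and the loss weight below are assumptions:

```python
import torch
import torch.nn as nn

def cyclegan_vc_losses(G_xy, G_yx, D_y, mcep_x, mcep_y, lam=10.0):
    """Core CycleGAN losses for one direction (source X -> target Y).

    mcep_x / mcep_y: batches of nonparallel spectral features from each speaker.
    G_xy, G_yx: generators between the two speakers; D_y: discriminator on Y.
    lam is an assumed cycle-consistency weight.
    """
    fake_y = G_xy(mcep_x)
    # Adversarial loss: converted features should be judged as real target speech.
    adv = nn.functional.binary_cross_entropy_with_logits(
        D_y(fake_y), torch.ones_like(D_y(fake_y)))
    # Cycle-consistency loss: X -> Y -> X should reconstruct the input, which is
    # what removes the need for parallel training data.
    cyc = nn.functional.l1_loss(G_yx(fake_y), mcep_x)
    return adv + lam * cyc
```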
The social media revolution has produced a plethora of web services to which users can easily upload and share multimedia documents. Despite the popularity and convenience of such services, the sharing of inherently personal data, including speech data, raises obvious security and privacy concerns. In particular, a user's speech data may be acquired and used with speech synthesis systems to produce high-quality utterances that reflect the same speaker identity. These utterances may then be used to attack speaker verification systems. One solution to mitigate these concerns involves...
Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming more capable, with error rates close to zero being reached for the ASVspoof2015 database. However, speech synthesis and voice conversion paradigms that were not considered in that database are appearing. Such examples include direct waveform modelling and generative adversarial networks. We also need to investigate the feasibility of training using only low-quality found data. For that purpose, we developed a generative adversarial network-based...
The proliferation of deepfake media is raising concerns among the public and relevant authorities. It has become essential to develop countermeasures against forged faces in social media. This paper presents a comprehensive study on two new countermeasure tasks: multi-face forgery detection and segmentation in-the-wild. Localizing multiple human faces in unrestricted natural scenes is far more challenging than the traditional face recognition task. To promote these tasks, we have created the first large-scale dataset...
Deep neural networks are vulnerable to adversarial examples (AEs), which have transferability: AEs generated for the source model can mislead another (target) model's predictions. However, transferability has not been understood in terms of to which class the target model's predictions were misled (i.e., class-aware transferability). In this paper, we differentiate the cases in which a target model predicts the same wrong class as the source model ("same mistake") or a different wrong class ("different mistake"), and we analyze and provide an explanation of the mechanism. We find that (1) AEs tend to cause...
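A small evaluation sketch of the "same mistake" / "different mistake" split described here: run each adversarial example through both models and compare the wrong predictions. Model and input handling are placeholders, not the paper's evaluation code:

```python
import torch

def classify_transfer(source_model, target_model, adv_x, true_y):
    """Label each adversarial example by class-aware transfer outcome.

    Returns one of: 'not transferred' (target still correct),
    'same mistake' (target predicts the same wrong class as the source),
    or 'different mistake'. Models and inputs are placeholders.
    """
    with torch.no_grad():
        src_pred = source_model(adv_x).argmax(dim=-1)
        tgt_pred = target_model(adv_x).argmax(dim=-1)
    outcomes = []
    for s, t, y in zip(src_pred, tgt_pred, true_y):
        if t == y:
            outcomes.append("not transferred")
        elif t == s:
            outcomes.append("same mistake")
        else:
            outcomes.append("different mistake")
    return outcomes
```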
Discriminating between computer-generated images (CGIs) and photographic images (PIs) is not a new problem in digital image forensics. However, with advances in rendering techniques supported by strong hardware and in generative adversarial networks, CGIs are becoming indistinguishable from PIs in both human and computer perception. This means that malicious actors can use CGIs for spoofing facial authentication systems, impersonating other people, and creating fake news to be spread on social networks. The methods...
Deep-learning-based technologies such as deepfakes have been attracting widespread attention in both society and academia, particularly those used to synthesize forged face images. These automatic, professional-skill-free manipulation technologies can be used to replace the face in an original image or video with that of any target while maintaining the expression and demeanor. Since human faces are closely related to identity characteristics, maliciously disseminated manipulated videos could trigger a crisis of public trust in media...
In the era of large AI models, the intricate architectures and vast parameter sets of models such as large language models (LLMs) present significant challenges for effective AI quality management (AIQM). This paper investigates the quality assurance of a specific LLM-based product: ChatGPT-based sentiment analysis. The study focuses on stability issues, examining both the operation and robustness of ChatGPT's underlying large-scale model. Through experimental analysis on benchmark sentiment analysis datasets, the findings highlight the ChatGPT-based sentiment analysis's...
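A minimal sketch of the kind of operational stability check this study concerns: query the same sentiment inputs repeatedly and measure how often the returned label stays the same. The `query_sentiment` callable stands in for whatever ChatGPT API wrapper is used and is purely a placeholder:

```python
from collections import Counter

def stability_rate(query_sentiment, texts, trials=5):
    """Fraction of inputs whose predicted sentiment label is identical across repeated queries.

    query_sentiment: placeholder callable text -> label (e.g. 'positive'/'negative').
    """
    stable = 0
    for text in texts:
        labels = [query_sentiment(text) for _ in range(trials)]
        if len(Counter(labels)) == 1:   # all trials agreed on one label
            stable += 1
    return stable / len(texts)
```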
Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022.
The reliability of remote identity-proofing systems (i.e., electronic Know Your Customer, or eKYC, systems) is challenged by the development of deepfake generation tools, which can be used to create fake videos that are difficult to detect using existing detection models and are indistinguishable to facial recognition systems. This poses a serious threat to eKYC systems and a danger to individuals' personal information and property. Existing datasets are not particularly appropriate for developing and evaluating such systems, which require...
Face authentication is now widely used, especially on mobile devices, rather than authentication using a personal identification number or an unlock pattern, due to its convenience. It has thus become a tempting target for attackers using a presentation attack. Traditional presentation attacks use facial images or videos of the victim. Previous work has proven the existence of master faces, i.e., faces that match multiple enrolled templates in face recognition systems, and their existence extends the ability of presentation attacks. In this paper, we report an extensive...
Deepfakes pose an evolving cybersecurity threat that calls for the development of automated countermeasures. While considerable forensic research has been devoted to the detection and localisation of deepfakes, solutions for 'fake to real' reversal are yet to be developed. In this study, we introduce the concept of cyber vaccination for conferring immunity to deepfakes. In other words, we aim to impart a self-healing ability to face media so that the original content can be recovered after being manipulated by AI-based deepfake technology....
Steganography, the art of information hiding, has continually evolved across visual, auditory, and linguistic domains, adapting to the ceaseless interplay between steganographic concealment and steganalytic revelation. This study seeks to extend the horizons of what constitutes a viable steganographic medium by introducing a paradigm in robotic motion control. Based on the observation of a robot's inherent sensitivity to changes in its environment, we propose a methodology to encode messages as environmental stimuli influencing the motions of a robotic agent...