Isao Echizen

ORCID: 0000-0003-4908-1860
Research Areas
  • Advanced Steganography and Watermarking Techniques
  • Digital Media Forensic Detection
  • Privacy-Preserving Technologies in Data
  • Adversarial Robustness in Machine Learning
  • Chaos-based Image/Signal Encryption
  • Generative Adversarial Networks and Image Synthesis
  • Privacy, Security, and Data Protection
  • Face recognition and analysis
  • Biometric Identification and Security
  • Internet Traffic Analysis and Secure E-voting
  • Anomaly Detection Techniques and Applications
  • Cryptography and Data Security
  • Topic Modeling
  • User Authentication and Security Systems
  • SARS-CoV-2 detection and testing
  • Advanced biosensing and bioanalysis techniques
  • Speech and Audio Processing
  • Speech Recognition and Synthesis
  • Video Analysis and Summarization
  • Natural Language Processing Techniques
  • Music and Audio Processing
  • Video Surveillance and Tracking Methods
  • Advanced Malware Detection Techniques
  • Digital Rights Management and Security
  • Biosensors and Analytical Detection

Affiliations

National Institute of Informatics
2016-2025

The University of Tokyo
2019-2025

The Graduate University for Advanced Studies, SOKENDAI
2015-2024

Nippon Soken (Japan)
2023

Tokyo University of Information Sciences
2021-2022

Research Organization of Information and Systems
2007-2019

Hitotsubashi University
2010-2019

National Institute for Japanese Language and Linguistics
2014

Ho Chi Minh City University of Science
2011

University of Electro-Communications
2004-2009

Publications

This paper presents a method to automatically and efficiently detect face tampering in videos, focusing particularly on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face. Traditional image forensics techniques are usually not well suited to videos due to the compression that strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers, to focus on the mesoscopic properties of images. We evaluate those fast networks on an existing dataset and a dataset we have...

10.1109/wifs.2018.8630761 preprint EN 2018-12-01

Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. State-of-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos...

10.1109/icassp.2019.8682602 article EN ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019-04-17

Detecting manipulated images and videos is an important topic in digital media forensics. Most detection methods use binary classification to determine the probability of a query being manipulated. Another important task is locating manipulated regions (i.e., performing segmentation), which are mostly created by three commonly used attacks: removal, copy-move, and splicing. We have designed a convolutional neural network that uses a multi-task learning approach to simultaneously detect manipulated images and videos and locate the manipulated regions for each query. Information gained by performing one...

10.1109/btas46853.2019.9185974 article EN 2019-09-01
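The multi-task setup described above pairs a binary "manipulated or not" objective with a per-pixel localisation objective. A minimal sketch of how the two losses might be combined into one training objective (the weighting scheme and function names are illustrative assumptions, not taken from the paper):

```python
import math

def multi_task_loss(cls_prob, cls_label, seg_probs, seg_labels,
                    w_cls=1.0, w_seg=1.0, eps=1e-7):
    """Combine a detection loss and a localisation loss for one query.

    cls_prob  : predicted probability that the query is manipulated
    cls_label : 1 if manipulated, 0 if pristine
    seg_probs : per-pixel manipulation probabilities (flattened mask)
    seg_labels: per-pixel ground-truth labels (flattened mask)
    """
    def bce(p, y):
        # binary cross-entropy, clamped by eps for numerical safety
        return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

    cls_loss = bce(cls_prob, cls_label)
    seg_loss = sum(bce(p, y) for p, y in zip(seg_probs, seg_labels)) / len(seg_probs)
    return w_cls * cls_loss + w_seg * seg_loss
```

Sharing one backbone while minimising this weighted sum is what lets information gained on one task inform the other.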

This paper presents a deep-learning method for distinguishing computer-generated graphics from real photographic images. The proposed method uses a Convolutional Neural Network (CNN) with a custom pooling layer to optimize the feature extraction scheme of current best-performing algorithms. Local estimates of class probabilities are computed and aggregated to predict the label of the whole picture. We evaluate our work on recent photo-realistic computer graphics and show that it outperforms state-of-the-art methods for both local and full-image classification.

10.1109/wifs.2017.8267647 preprint EN 2017-12-01
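The patch-wise scheme described above (local class-probability estimates aggregated into a whole-image decision) can be sketched in a few lines; the non-overlapping patching and the mean-aggregation rule here are illustrative assumptions, not the paper's exact pipeline:

```python
def predict_image(image, patch_size, patch_classifier, threshold=0.5):
    """Classify a whole image by averaging local class-probability estimates.

    image            : 2-D list of pixel values
    patch_classifier : callable mapping a patch (2-D list) to P(computer-generated)
    Returns (aggregated_probability, label) with label "CG" or "photo".
    """
    h, w = len(image), len(image[0])
    probs = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patch = [row[left:left + patch_size]
                     for row in image[top:top + patch_size]]
            probs.append(patch_classifier(patch))
    p = sum(probs) / len(probs)  # simple mean aggregation over local estimates
    return p, ("CG" if p >= threshold else "photo")
```

With a toy classifier such as mean patch brightness standing in for the CNN, the same aggregation logic applies unchanged.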

10.1016/j.ijmedinf.2010.10.001 article EN International Journal of Medical Informatics 2010-10-31

The revolution in computer hardware, especially in graphics processing units and tensor processing units, has enabled significant advances in artificial intelligence algorithms. In addition to their many beneficial applications in daily life and business, computer-generated/manipulated images and videos can be used for malicious purposes that violate security systems, privacy, and social trust. The deepfake phenomenon and its variations enable a normal user to use his or her personal data to easily create fake videos of anybody from a short real...

10.48550/arxiv.1910.12467 preprint EN other-oa arXiv (Cornell University) 2019-01-01

Although voice conversion (VC) algorithms have achieved remarkable success along with the development of machine learning, superior performance is still difficult to achieve when using nonparallel data. In this paper, we propose a cycle-consistent adversarial network (CycleGAN) for nonparallel-data-based VC training. A CycleGAN is a generative adversarial network (GAN) originally developed for unpaired image-to-image translation. A subjective evaluation of inter-gender conversion demonstrated that the proposed method significantly outperformed a method based...

10.1109/icassp.2018.8462342 preprint EN 2018-04-01
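The cycle-consistency constraint at the heart of a CycleGAN requires that mapping an utterance to the target domain and back reproduce the original, which is what removes the need for parallel data. A toy illustration with scalar stand-in "generators" (the linear maps G and F below are assumptions for illustration, not the real networks):

```python
def cycle_consistency_loss(xs, ys, G, F):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y."""
    loss_x = sum(abs(F(G(x)) - x) for x in xs) / len(xs)
    loss_y = sum(abs(G(F(y)) - y) for y in ys) / len(ys)
    return loss_x + loss_y

# Toy generators that are exact inverses, giving zero cycle loss
G = lambda x: 2 * x + 1    # "source -> target" map
F = lambda y: (y - 1) / 2  # "target -> source" map
```

In the full model this term is minimised jointly with the adversarial losses of the two discriminators.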

The social media revolution has produced a plethora of web services to which users can easily upload and share multimedia documents. Despite the popularity and convenience of such services, the sharing of inherently personal data, including speech data, raises obvious security and privacy concerns. In particular, a user's speech data may be acquired and used with speech synthesis systems to produce high-quality utterances which reflect the same speaker identity. These utterances may then be used to attack speaker verification systems. One solution to mitigate these concerns involves...

10.21437/ssw.2019-28 article EN 2019-09-14

10.1109/access.2025.3530961 article EN cc-by IEEE Access 2025-01-01

Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming more capable, with error rates close to zero being reached on the ASVspoof2015 database. However, speech synthesis and voice conversion paradigms that were not considered in that database are appearing. Such examples include direct waveform modelling and generative adversarial networks. We also need to investigate the feasibility of training using only low-quality found data. For that purpose, we developed a network-based...

10.21437/odyssey.2018-34 article EN 2018-06-06

The proliferation of deepfake media is raising concerns among the public and relevant authorities. It has thus become essential to develop countermeasures against forged faces in social media. This paper presents a comprehensive study on two new countermeasure tasks: multi-face forgery detection and segmentation in-the-wild. Localizing forged faces among multiple human faces in unrestricted natural scenes is far more challenging than the traditional face forgery recognition task. To promote these tasks, we have created the first large-scale dataset...

10.1109/iccv48922.2021.00996 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01

Deep neural networks are vulnerable to adversarial examples (AEs), which have transferability: AEs generated for the source model can mislead another (target) model's predictions. However, transferability has not been understood in terms of which class the target model's predictions were misled to (i.e., class-aware transferability). In this paper, we differentiate the cases in which a target model predicts the same wrong class as the source model ("same mistake") or a different wrong class ("different mistake") to analyze and provide an explanation of the mechanism. We find that (1) AEs tend to cause...

10.1109/wacv56688.2023.00141 article EN 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023-01-01
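The "same mistake" vs. "different mistake" distinction above can be made concrete with a small tally over per-example predictions; the labels below are synthetic, and the categorisation rule is a plain reading of the abstract rather than the paper's exact protocol:

```python
def class_aware_transfer(true_labels, source_preds, target_preds):
    """Categorise adversarial-example transfer by the class of the mistake.

    Over examples the source model gets wrong, count whether the target
    model makes the same wrong prediction, a different wrong one, or none.
    """
    stats = {"same_mistake": 0, "different_mistake": 0, "not_transferred": 0}
    for y, s, t in zip(true_labels, source_preds, target_preds):
        if s == y:
            continue  # the AE failed even on the source model; skip it
        if t == y:
            stats["not_transferred"] += 1
        elif t == s:
            stats["same_mistake"] += 1
        else:
            stats["different_mistake"] += 1
    return stats
```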

Discriminating between computer-generated images (CGIs) and photographic images (PIs) is not a new problem in digital image forensics. However, with advances in rendering techniques supported by strong hardware and in generative adversarial networks, CGIs are becoming indistinguishable from PIs in both human and computer perception. This means that malicious actors can use CGIs for spoofing facial authentication systems, impersonating other people, and creating fake news to be spread on social networks. The methods...

10.1145/3230833.3230863 article EN Proceedings of the 17th International Conference on Availability, Reliability and Security 2018-08-13

Deep-learning-based technologies such as deepfakes have been attracting widespread attention in both society and academia, particularly those used to synthesize forged face images. These automatic, professional-skill-free manipulation technologies can be used to replace the face in an original image or video with that of any target while maintaining its expression and demeanor. Since human faces are closely related to identity characteristics, maliciously disseminated manipulated videos could trigger a crisis of public trust in media...

10.1109/wacvw58289.2023.00070 article EN 2023-01-01

In the era of large AI models, the intricate architectures and vast parameter sets of models such as large language models (LLMs) present significant challenges for effective AI quality management (AIQM). This paper investigates the quality assurance of a specific LLM-based product: ChatGPT-based sentiment analysis. The study focuses on stability issues, examining both the operation and robustness of ChatGPT's underlying large-scale model. Through experimental analysis on benchmark datasets for sentiment analysis, the findings highlight the sentiment analysis's...

10.3390/electronics13245043 article EN Electronics 2024-12-22
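Operation stability of the kind examined above can be quantified by calling the model repeatedly on the same input and measuring agreement across runs. A minimal sketch, assuming a majority-agreement metric (the metric choice is an illustrative assumption, not the paper's definition):

```python
from collections import Counter

def operation_stability(runs):
    """Stability of repeated model calls on one input: the majority label
    and the share of runs that agree with it (1.0 = fully stable)."""
    counts = Counter(runs)
    label, freq = counts.most_common(1)[0]
    return label, freq / len(runs)
```

For example, four runs yielding three "positive" labels and one "negative" give a stability of 0.75.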

Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022.

10.18653/v1/2022.naacl-main.284 article EN cc-by Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 2022-01-01

The reliability of remote identity-proofing systems (i.e., electronic Know Your Customer, or eKYC, systems) is challenged by the development of deepfake generation tools, which can be used to create fake videos that are difficult to detect using existing detection models and are indistinguishable to facial recognition systems. This poses a serious threat to eKYC systems and endangers individuals' personal information and property. Existing datasets are not particularly appropriate for developing and evaluating such systems, which require...

10.1109/access.2024.3369187 article EN cc-by-nc-nd IEEE Access 2024-01-01

Face authentication is now widely used, especially on mobile devices, rather than a personal identification number or an unlock pattern, due to its convenience. It has thus become a tempting target for attackers using a presentation attack. Traditional presentation attacks use facial images or videos of the victim. Previous work has proven the existence of master faces, i.e., faces that match multiple enrolled templates in face recognition systems, and their existence extends the ability of presentation attacks. In this paper, we report an extensive...

10.1109/tbiom.2022.3166206 article EN cc-by IEEE Transactions on Biometrics Behavior and Identity Science 2022-04-15
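What "matching multiple enrolled templates" means can be illustrated with a toy coverage check on face embeddings; the Euclidean distance matcher and threshold below are illustrative assumptions, not the systems studied in the paper:

```python
def match_coverage(candidate, enrolled_templates, threshold):
    """Fraction of enrolled templates a candidate embedding falsely matches.

    A candidate with high coverage behaves like a master face: it falls
    within the acceptance threshold of many different users' templates.
    """
    def dist(a, b):
        # plain Euclidean distance between two embedding vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    matches = sum(1 for t in enrolled_templates if dist(candidate, t) <= threshold)
    return matches / len(enrolled_templates)
```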

Deepfakes pose an evolving cybersecurity threat that calls for the development of automated countermeasures. While considerable forensic research has been devoted to the detection and localisation of deepfakes, solutions for 'fake to real' reversal are yet to be developed. In this study, we introduce the concept of cyber vaccination for conferring immunity to deepfakes. In other words, we aim to impart a self-healing ability to face media so that the original content can possibly be recovered after being manipulated by AI-based deepfake technology...

10.1109/access.2023.3311461 article EN cc-by IEEE Access 2023-01-01
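The self-healing idea can be illustrated with a toy redundancy scheme: hide a coarse backup of each pixel inside the least-significant bits of a distant pixel, so a tampered region can be approximately restored from its backup. This is an assumption-laden sketch of the general principle, not the paper's actual vaccination method:

```python
def vaccinate(pixels):
    """Embed each pixel's high nibble into the LSBs of a partner pixel
    half an image away, so tampering can later be approximately undone."""
    n = len(pixels)
    out = list(pixels)
    for i, p in enumerate(pixels):
        partner = (i + n // 2) % n                # backup stored far from i
        out[partner] = (out[partner] & 0xF0) | (p >> 4)
    return out

def heal(pixels, tampered_indices):
    """Recover an approximation of each tampered pixel from its backup."""
    n = len(pixels)
    out = list(pixels)
    for i in tampered_indices:
        partner = (i + n // 2) % n
        out[i] = (pixels[partner] & 0x0F) << 4    # restore the high nibble
    return out
```

The recovery is lossy (only the high nibble survives) and fails if the backup location is also tampered, which is why real schemes spread redundancy more carefully.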

Steganography, the art of information hiding, has continually evolved across visual, auditory, and linguistic domains, adapting to the ceaseless interplay between steganographic concealment and steganalytic revelation. This study seeks to extend the horizons of what constitutes a viable steganographic medium by introducing a paradigm in robotic motion control. Based on the observation of a robot's inherent sensitivity to changes in its environment, we propose a methodology to encode messages as environmental stimuli influencing the motions of a robotic agent...

10.48550/arxiv.2501.04541 preprint EN arXiv (Cornell University) 2025-01-08
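The encoding scheme sketched above (messages as environmental stimuli that bias an agent's motion) can be illustrated with a toy one-dimensional agent; the stimulus mapping, the trivially reactive dynamics, and the decoding rule are all illustrative assumptions:

```python
def encode_bits_as_stimuli(bits, strength=1.0):
    """Map each message bit to an environmental stimulus:
    push the agent right for a 1, left for a 0."""
    return [strength if b else -strength for b in bits]

def agent_trajectory(stimuli, start=0.0):
    """A trivially reactive agent: each step it moves with the stimulus."""
    pos, path = start, []
    for s in stimuli:
        pos += s
        path.append(pos)
    return path

def decode_bits_from_motion(path, start=0.0):
    """An observer recovers the message from the direction of each step."""
    bits, prev = [], start
    for pos in path:
        bits.append(1 if pos > prev else 0)
        prev = pos
    return bits
```

To an onlooker the trajectory is just motion in an environment; only an observer who knows the stimulus convention can read the message back out.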