- Computer Graphics and Visualization Techniques
- Advanced Steganography and Watermarking Techniques
- Advanced Vision and Imaging
- Domain Adaptation and Few-Shot Learning
- Generative Adversarial Networks and Image Synthesis
- Image Retrieval and Classification Techniques
- Human Pose and Action Recognition
- Advanced Image and Video Retrieval Techniques
- Advanced Image Processing Techniques
- Multimodal Machine Learning Applications
- Digital Media Forensic Detection
- Video Surveillance and Tracking Methods
- Bayesian Methods and Mixture Models
- Chaos-based Image/Signal Encryption
- Medical Image Segmentation Techniques
- Visual Attention and Saliency Detection
- Gaussian Processes and Bayesian Inference
- Anomaly Detection Techniques and Applications
- Neural Networks and Applications
- Image and Signal Denoising Methods
- 3D Surveying and Cultural Heritage
- Robotics and Sensor-Based Localization
- 3D Shape Modeling and Analysis
- Advanced Data Compression Techniques
- Image Processing and 3D Reconstruction
University of York
2016-2025
Mohamed bin Zayed University of Artificial Intelligence
2024
Massachusetts Institute of Technology
2021
Université de Montpellier
2006
York University
2006
Tampere University
1994-2005
Aristotle University of Thessaloniki
1995-2002
Watermarking algorithms are used for image copyright protection. The proposed algorithm selects certain blocks in the image based on a Gaussian network classifier. The pixel values of the selected blocks are modified such that their discrete cosine transform (DCT) coefficients fulfil a constraint imposed by the watermark code. Two different constraints are considered. The first approach consists of embedding a linear constraint among selected DCT coefficients, and the second one defines circular detection regions in the DCT domain. A rule for generating the parameters of distinct watermarks is provided....
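The DCT-domain constraint idea can be sketched as follows — a minimal, self-contained example assuming an orthonormal 8×8 DCT and a simple two-coefficient linear constraint; the paper's block selection via a Gaussian network classifier and its exact constraints are not reproduced:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows = frequencies, columns = samples).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def embed_bit(block, bit, pos_a=(2, 1), pos_b=(1, 2), delta=8.0):
    """Embed one bit by enforcing a linear constraint between two
    mid-frequency DCT coefficients: c[a] - c[b] >= delta for bit 1,
    <= -delta for bit 0 (an illustrative constraint, not the paper's rule)."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T              # 2-D DCT-II
    a, b = coeffs[pos_a], coeffs[pos_b]
    target = delta if bit else -delta
    if (a - b) * (1 if bit else -1) < delta:
        mean = 0.5 * (a + b)
        coeffs[pos_a] = mean + target / 2
        coeffs[pos_b] = mean - target / 2
    return C.T @ coeffs @ C               # inverse DCT

def detect_bit(block, pos_a=(2, 1), pos_b=(1, 2)):
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return int(coeffs[pos_a] - coeffs[pos_b] > 0)
```

The margin `delta` trades off invisibility against robustness: a larger margin survives stronger distortion but perturbs the block more.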
Radial basis function (RBF) networks consist of a two-layer neural network, where each hidden unit implements a kernel function. Each kernel is associated with an activation region from the input space, and its output is fed to an output unit. In order to find the parameters of a network which embeds this structure, we take into consideration two different statistical approaches. The first approach uses classical estimation in the learning stage and is based on the vector quantization algorithm and its second-order statistics extension. After...
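As a rough illustration of the two-stage estimation described above, the sketch below trains an RBF network with plain k-means standing in for vector quantization, a nearest-centre width heuristic standing in for the second-order-statistics extension, and a least-squares output layer; all three are simplifying assumptions, not the paper's procedure:

```python
import numpy as np

def rbf_features(X, centres, widths):
    # Gaussian kernel activations of the hidden layer.
    d2 = ((X[:, None] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2))

def train_rbf(X, y, n_hidden=8, n_iter=20, seed=0):
    """Two-stage RBF training sketch: centres via k-means (vector
    quantization), widths from nearest-centre distances, output weights
    from least squares."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_hidden, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for k in range(n_hidden):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(0)
    d2c = ((centres[:, None] - centres[None]) ** 2).sum(-1)
    np.fill_diagonal(d2c, np.inf)
    widths = np.sqrt(d2c.min(1)) + 1e-9       # nearest-centre spacing
    W, *_ = np.linalg.lstsq(rbf_features(X, centres, widths), y, rcond=None)
    return centres, widths, W

def predict_rbf(X, centres, widths, W):
    return rbf_features(X, centres, widths) @ W
```

With centres spread by vector quantization, the output layer reduces to a linear problem, which is why the two stages can be estimated separately.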
This paper proposes a joint maximum likelihood and Bayesian methodology for estimating Gaussian mixture models. In Bayesian inference, the distributions of the parameters are modeled, characterized by hyperparameters. In the case of Gaussian mixtures, these prior distributions are considered as Gaussian for the mean, Wishart for the covariance, and Dirichlet for the mixing probability. The learning task consists of estimating the hyperparameters characterizing these distributions. The integration in the parameter space is decoupled using an unsupervised variational methodology entitled variational expectation-maximization (VEM)....
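The hierarchy of priors mentioned above (Gaussian mean, Wishart covariance, Dirichlet mixing probability) can be illustrated generatively; all hyperparameter values in this sketch are illustrative, not the paper's, and `sample_wishart` assumes an integer degree of freedom:

```python
import numpy as np

def sample_wishart(rng, dof, scale):
    # Wishart draw as the scatter of `dof` Gaussian vectors (integer dof >= dim).
    L = np.linalg.cholesky(scale)
    X = L @ rng.standard_normal((scale.shape[0], dof))
    return X @ X.T

def sample_bayesian_gmm(rng, n, K=3, dim=2):
    """Generative view of the hierarchical priors: Dirichlet mixing
    probabilities, Wishart precision matrices, Gaussian means."""
    weights = rng.dirichlet(np.ones(K))
    precisions = [sample_wishart(rng, dim + 2, np.eye(dim)) for _ in range(K)]
    means = [rng.multivariate_normal(np.zeros(dim), 4.0 * np.eye(dim))
             for _ in range(K)]
    comps = rng.choice(K, size=n, p=weights)
    data = np.stack([rng.multivariate_normal(means[k], np.linalg.inv(precisions[k]))
                     for k in comps])
    return data, comps, weights
```

Variational inference then works in the opposite direction: starting from data, it updates the hyperparameters of these conjugate priors rather than point estimates of the parameters.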
The use of clustering algorithms for decision-level data fusion is proposed. Person authentication results coming from several modalities (e.g., still image, speech) are combined by using fuzzy k-means (FKM) and fuzzy vector quantization (FVQ) algorithms, and a median radial basis function (MRBF) network. A quality measure of the modality results is used for fuzzification. Two modifications of FKM and FVQ, based on a novel distance definition, are proposed in order to handle and utilize this quality measure. Simulations show that the proposed schemes have better performance compared to the classical...
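A minimal fuzzy k-means sketch with a quality-weighted distance, loosely in the spirit of the modified FKM above; the paper's exact distance definition and the FVQ/MRBF variants are not reproduced, and the `quality` weighting here is an assumption:

```python
import numpy as np

def fuzzy_kmeans(X, n_clusters=2, m=2.0, quality=None, n_iter=50, seed=0):
    """Fuzzy k-means with soft memberships of fuzziness m; `quality`
    down-weights unreliable features/modalities in the distance."""
    rng = np.random.default_rng(seed)
    w = np.ones(X.shape[1]) if quality is None else np.asarray(quality, float)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))   # soft memberships
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(0)[:, None]         # weighted means
        d2 = ((X[:, None] - centres[None]) ** 2 * w).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1))                        # membership update
        U /= U.sum(1, keepdims=True)
    return U, centres
```

The membership update is the standard fuzzy c-means rule; only the distance inside `d2` carries the quality weighting.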
A unique cognitive capability of humans consists in their ability to acquire new knowledge and skills from a sequence of experiences. Meanwhile, artificial intelligence systems are good at learning only the last given task, without being able to remember the databases learnt in the past. We propose a novel lifelong learning methodology by employing a Teacher-Student network framework. While the Student module is trained with a new given database, the Teacher module would remind it about the information learnt in the past. The Teacher, implemented as a Generative Adversarial...
Human brains can continually acquire and learn new skills and knowledge over time from a dynamically changing environment, without forgetting previously learnt information. Such a capacity can selectively transfer some important, recently seen information to the persistent memory regions of the brain. Inspired by this intuition, we propose a memory-based approach for image reconstruction and generation in continual learning, consisting of a temporary and an evolving memory, with two different storage strategies, corresponding...
A new methodology for fingerprinting and watermarking three-dimensional (3-D) graphical objects is proposed in this paper. The 3-D objects are described by means of polygonal meshes. The information to be embedded is provided as a binary code. The methodology has two stages: embedding and detecting the information that has been given to the media. The embedding is performed through local geometrical perturbations while maintaining the mesh connectivity. A neighborhood localized measure is used for selecting the appropriate vertices for watermarking. A study is undertaken in order to verify the suitability of selecting vertices from regions...
This paper proposes a new approach to 3D watermarking by ensuring the optimal preservation of mesh surfaces. A surface preservation function metric is defined, consisting of the distance from a vertex displaced by watermarking to the original surface, to the watermarked object surface, as well as the actual vertex displacement. The proposed method is statistical, blind, and robust. Minimal distortion, according to the proposed metric, is enforced during the statistical watermark embedding stage using the Levenberg-Marquardt optimization method. A study of the watermark code crypto-security is provided for the proposed methodology. According...
Variational autoencoders (VAEs) are one of the most popular unsupervised generative models, relying on learning latent representations of the data. In this article, we extend the classical concept of Gaussian mixtures into the deep variational framework by proposing a mixture of VAEs (MVAE). Each component in the MVAE model is implemented through a variational encoder and has an associated sub-decoder. The separation between the latent spaces modeled by the different encoders is enforced using the d-variable Hilbert-Schmidt independence criterion (dHSIC)....
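The independence penalty mentioned above can be illustrated with the two-variable HSIC statistic, which dHSIC generalizes to d variables; the Gaussian kernel and its bandwidth here are assumptions of the sketch, not details from the article:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gaussian kernel Gram matrix of one sample set.
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC between two paired samples: near zero for
    independent variables, larger for dependent ones."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Used as a training penalty, minimizing this statistic pushes the latent codes produced by different encoders toward statistical independence.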
Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third is a novel approach...
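The role of the bandwidth is easy to see in a plain Gaussian kernel density estimate; the sketch below takes the bandwidth as given and uses the classical Silverman rule of thumb as a stand-in, not the Bayesian estimator proposed in the paper:

```python
import numpy as np

def gaussian_kde(x_eval, data, bandwidth):
    """Gaussian KDE of 1-D data evaluated at the points `x_eval`;
    `bandwidth` controls the smoothness of the estimate."""
    z = (x_eval[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(1) / (len(data) * bandwidth * np.sqrt(2 * np.pi))

def silverman_bandwidth(data):
    # Classical rule-of-thumb baseline (not the Bayesian estimator above).
    return 1.06 * data.std() * len(data) ** (-1 / 5)
```

Too small a bandwidth yields a spiky, overfitted density; too large a bandwidth oversmooths modes away — which is why a principled estimate of it matters for scale space, mean shift, and quantum clustering alike.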
In this article, we propose an end-to-end lifelong learning mixture of experts, where each expert is implemented by a variational autoencoder (VAE). The experts in the mixture system are jointly trained by maximizing a mixture of the individual component evidence lower bounds (MELBO) on the log-likelihood of the given training samples. The mixing coefficients in the mixture model control the contributions of each expert to the global representation. These are sampled from a Dirichlet distribution whose parameters are determined through nonparametric estimation during lifelong learning....
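A generic mixture-of-experts objective combining per-expert log-likelihoods with Dirichlet-sampled mixing coefficients can be sketched as follows; this is the plain mixture log-likelihood, not the MELBO bound itself, and the concentration value is illustrative:

```python
import numpy as np

def mixture_log_likelihood(expert_loglik, log_pi):
    """Per-sample log-likelihood under a mixture of experts, combining an
    (n_samples, n_experts) array of expert log-likelihoods with log mixing
    weights via a numerically stable log-sum-exp."""
    a = expert_loglik + log_pi
    m = a.max(axis=1, keepdims=True)
    return m[:, 0] + np.log(np.exp(a - m).sum(axis=1))

# Mixing coefficients drawn from a Dirichlet prior, as described above
# (the concentration value 2.0 is illustrative).
rng = np.random.default_rng(0)
pi = rng.dirichlet(np.full(3, 2.0))
```

Resampling the Dirichlet parameters as new tasks arrive lets the mixture shift probability mass toward the experts responsible for recent data without retraining the others.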
Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information. Although some methods have recently been proposed for TFCL, they lack theoretical guarantees. Moreover, forgetting during TFCL had not been studied theoretically before. This paper develops a new theoretical framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available...
Various approaches have been proposed for simultaneous optical flow estimation and segmentation in image sequences. In this study, the moving scene is decomposed into different regions with respect to their motion, by means of a pattern recognition scheme. The inputs of the proposed scheme are feature vectors representing still-image and motion information. Each class corresponds to a moving object. The classifier employed is the median radial basis function (MRBF) neural network. An error criterion derived from probability theory...
This paper describes a new statistical approach for watermarking mesh representations of 3-D graphical objects. A robust digital watermarking method has to balance the requirements of watermark invisibility, robustness, embedding capacity and key security. The proposed approach employs a distance-propagation metric procedure called the fast marching method (FMM), which defines regions of equal geodesic width calculated with respect to a reference location on the mesh. Each of these regions is used to embed a single bit. The embedding is performed by changing the normalized...
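The region construction can be approximated on a mesh adjacency graph, with BFS hop counts standing in for the geodesic distances that fast marching would compute on the actual surface; `ring_width` and the path-graph usage below are illustrative:

```python
from collections import deque

def geodesic_rings(adjacency, source, ring_width):
    """Partition vertices into bands of equal width around a reference
    vertex, one band per payload bit. BFS hop count is a crude stand-in
    for fast-marching geodesic distance."""
    dist = {source: 0}
    queue = deque([source])
    while queue:                          # breadth-first sweep from source
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    rings = {}
    for v, d in dist.items():             # bucket vertices by band index
        rings.setdefault(d // ring_width, []).append(v)
    return rings
```

Each band would then carry one bit, for instance by shifting a normalized statistic of its vertices; the bands depend only on the reference location, so a detector holding the key can rebuild the same partition blindly.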
Memorability of an image is a characteristic determined by human observers' ability to remember images they have seen. Yet recent work on memorability defines it as an intrinsic property that can be obtained independent of the observer. The current study aims to enhance our understanding and prediction of image memorability, improving upon existing approaches by incorporating the properties of cumulative human annotations. We propose a new concept called the Visual Memory Schema (VMS), referring to an organization of image components that observers...
Copyright protection of graphical objects and models is important for protecting author rights in animation, multimedia, computer-aided design (CAD), virtual reality, medical imaging, etc. We suggest a blind watermarking algorithm for 3D graphical objects. A string of bits, generated according to a key, is embedded in the geometrical structure of the object by changing the locations of certain vertices. The criterion used to choose these vertices ensures minimal visibility of the distortions in the watermarked object. The encoding of bit 1 is associated with...
This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available reflectance information. To apply the resulting process to real-world data, we require probabilistic models of the appearance...
While 3-D steganography and digital watermarking represent methods for embedding information into 3-D objects, 3-D steganalysis aims to find the hidden information. Previous research studies have shown that, by estimating the parameters modeling the statistics of 3-D features and feeding them into a classifier, we can identify whether an object carries secret information. For training the steganalyzer, such features are extracted from cover and stego pairs, representing the original objects and those carrying hidden information. However, in practical applications, the steganalyzer...
Content-based image retrieval (CBIR) aims to provide the most similar images to a given query. Feature extraction plays an essential role in retrieval performance within the CBIR pipeline. Current studies either uniformly extract feature information from the input image and use it directly, or employ a trainable spatial weighting module whose output is then used for similarity comparison between pairs of query and candidate matching images. These modules are normally query non-sensitive and only based on the knowledge learned during...