- Advanced Data Compression Techniques
- Video Coding and Compression Technologies
- Advanced Image and Video Retrieval Techniques
- Image and Signal Denoising Methods
- Image Retrieval and Classification Techniques
- Digital Filter Design and Implementation
- Image and Video Quality Assessment
- Lung Cancer Diagnosis and Treatment
- Advanced Vision and Imaging
- Topic Modeling
- Embedded Systems Design Techniques
- Medical Image Segmentation Techniques
- Adversarial Robustness in Machine Learning
- Advanced Radiotherapy Techniques
- Misinformation and Its Impacts
- Cultural Insights and Digital Impacts
- Medical Imaging Techniques and Applications
- Education, Sociology, and Vocational Training
- Computational Physics and Python Applications
- Organic Food and Agriculture
- 3D Modeling in Geospatial Applications
- Spam and Phishing Detection
- Forest Insect Ecology and Management
- Digital Image Processing Techniques
- Video Analysis and Summarization
UCLouvain
2008-2024
Fraunhofer Institute for Integrated Circuits
2021
Universitat Politècnica de Catalunya
2006
The image compression standard JPEG 2000 proposes a large set of features that are useful for today's multimedia applications. Unfortunately, it is much more complex than older standards. Real-time applications, such as digital cinema, require specific, secure, and scalable hardware implementations. In this paper,...
Joint Photographic Experts Group (JPEG) XS is a new International Standard from the JPEG Committee (formally known as ISO/International Electrotechnical Commission (IEC) JTC1/SC29/WG1). It defines an interoperable, visually lossless, low-latency and lightweight image coding system that can be used for mezzanine compression within any AV market. Among the targeted use cases, one can cite video transport over professional links (serial digital interface (SDI), internet protocol (IP), and Ethernet), real-time...
While traditional image compression algorithms take a full three-component color representation of an image as input, capturing such images is done in many applications with Bayer CFA pattern sensors that provide only a single value per sensor element and position. In order to avoid additional complexity at the encoder side, such data can be compressed directly without prior conversion to a full-color image. In this paper, we describe recent activity of the JPEG committee (ISO SC 29 WG 1) to develop an algorithm within the framework of JPEG XS. It turns...
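As a hedged illustration of the idea of coding Bayer data without demosaicing, the following minimal sketch (assuming NumPy and an RGGB layout, neither of which is specified in the excerpt) splits a raw CFA frame into its four color sub-planes, which a codec could then treat as separate components:

```python
import numpy as np

def split_rggb_planes(raw: np.ndarray):
    """Split a Bayer RGGB mosaic into its four sub-planes (R, G1, G2, B).

    Illustrative only: assumes an RGGB layout and even dimensions; a real
    CFA-aware codec would typically also decorrelate the planes before coding.
    """
    r  = raw[0::2, 0::2]   # red samples
    g1 = raw[0::2, 1::2]   # green samples on red rows
    g2 = raw[1::2, 0::2]   # green samples on blue rows
    b  = raw[1::2, 1::2]   # blue samples
    return r, g1, g2, b

# Example: a synthetic 4x4 sensor readout
raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
planes = split_rggb_planes(raw)
print([p.shape for p in planes])  # four 2x2 sub-planes
```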
JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable, visually lossless, low-latency and lightweight codec for a wide range of applications, including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support for implementation technologies such as FPGAs, CPUs, and GPUs. Targeted use cases are professional video links, IP transport, Ethernet, real-time storage, memory buffers, and omnidirectional capture and rendering. In...
This paper considers the issues of scheduling and caching JPEG2000 data in client/server interactive browsing applications, under memory and channel bandwidth constraints. It analyzes how the conveyed data have to be selected at the server and managed within the client cache so as to maximize the reactivity of the application. Formally, to render the dynamic nature of the session, we assume the existence of a reaction model that defines when the user launches a novel command as a function of the image quality displayed at the client. As a main outcome, our work demonstrates...
JPEG XS is a new standard for low-latency and low-complexity coding designed by the JPEG committee. Unlike former developments, optimal rate-distortion performance is only a secondary goal; the focus of JPEG XS is to enable cost-efficient, easy-to-parallelize implementations suitable for FPGAs or GPUs. In this article, we shed some light on the entropy coding back-end and introduce modifications to this stage, currently under discussion, that improve the objective and subjective quality of compressed images without compromising the parallelism of the original algorithm.
More and higher-quality ultrahigh-definition (UHD) content is arriving in the production environment, requiring additional bandwidth for data transmission and exchange. In parallel, a more flexible infrastructure based on the well-known Internet Protocol (IP) stack is very desirable. Adding mezzanine compression to the workflow can reduce the necessary capacities or even enable the usage of existing infrastructure designed for previous HD-resolution content. A low-complexity codec with ultralow latency, preserving the highest...
Today, many existing types of video transmission and storage infrastructure are not able to handle uncompressed UHD content in real time. To reduce the required bit rates, a low-latency lightweight compression scheme is needed. To this end, several standardization efforts, such as Display Stream Compression (DSC), Advanced DSC, and JPEG XS, are currently being made. Focusing on screen content use cases, this paper provides a comparison of codecs suited for this field of application. In particular, the performance of VC-2, JPEG 2000 (in...
In this paper, we address the issues of analyzing and classifying JPEG 2000 code-streams. An original representation, called integral volume, is first proposed to compute local image features progressively from the compressed code-stream, on any spatial area, regardless of code-block borders. Then, a classifier is presented that uses integral volumes to learn an ensemble of randomized trees. Several classification tasks are performed on various databases, and results are in the same range as the ones obtained in the literature with...
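A rough sketch of the two ingredients mentioned above, not the paper's actual method: a 2-D cumulative map that lets any rectangular region be summed in constant time (mimicking the spirit of an integral representation), and those rectangle features fed to an ensemble of randomized trees. The use of scikit-learn's ExtraTreesClassifier and the synthetic "subband energy" planes are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def integral_map(feature_plane: np.ndarray) -> np.ndarray:
    """2-D cumulative sum with a zero border, so any rectangle sum is O(1)."""
    return np.pad(feature_plane, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii: np.ndarray, y0, x0, y1, x1) -> float:
    """Sum of the feature plane over rows [y0, y1) and columns [x0, x1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# Toy example: one synthetic feature plane per image, two rectangle descriptors each.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(50):
        plane = rng.random((32, 32)) + label * 0.2   # placeholder "subband energy"
        ii = integral_map(plane)
        X.append([rect_sum(ii, 0, 0, 16, 16), rect_sum(ii, 16, 16, 32, 32)])
        y.append(label)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```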
The JPEG committee (formally, ISO SC29 WG1) is currently standardizing a lightweight mezzanine codec for video over IP transport under the name JPEG XS. A particularly challenging design constraint of this codec is multi-generation robustness, that is, the necessity to minimize error build-up over multiple re-compression cycles. In this paper, we discuss the sources of such errors and how they are avoided in JPEG XS, and compare its robustness with that of other codecs.
Accurate descriptions of feeding habits are essential to understanding the evolution of dietary preferences and the high levels of diversification within Chrysomelidae. Both primary observations and summaries suggest that the cassidine beetle tribe Cephaloleiini is a species-rich group of specialists on monocot hosts. However, accurate host ranges are poorly defined for most hispine species. To better document the occurrence of feeding, we censused Cephaloleiini associated with the rolled leaves of five species of Marantaceae and six...
The JPEG committee (Joint Photographic Experts Group, formally known as ISO/IEC SC29 WG1) is currently in the process of standardizing JPEG XS, a new interoperable solution for low-latency, lightweight and visually lossless compression of image and video content. This codec is intended to be used in applications where content would usually be transmitted or stored in uncompressed form, such as live production, display links, virtual and augmented reality, self-driving vehicles, and frame buffers. It achieves bandwidth and power reduction...
The JPEG committee (formally, ISO/IEC SC 29 WG 01) is currently investigating a new work item on near-lossless, low-complexity coding for IP streaming of moving images. This article discusses the requirements and use cases of this work item, gives some insight into the anchors that are used for the purpose of standardization, and provides a short update on the current proposals that have reached the committee.
With the emergence of UHD video, reference frame buffers (FBs) inside HEVC-like encoder and decoder chips have to sustain a huge bandwidth. The power consumption due to accesses to these off-chip memories accounts for a significant share of the codec's total consumption. This paper describes a JPEG XS-based frame buffer compression solution intended to decrease the FB's bandwidth, making HEVC more suitable for use in power-aware applications. As opposed to previous works, our solution compresses large picture areas (ranging from CTU...
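As a back-of-the-envelope sketch of why frame-buffer compression matters (the figures below are assumptions for illustration, not taken from the paper), compare raw reference-frame traffic for UHD with the same traffic at a hypothetical 4:1 mezzanine ratio:

```python
# Back-of-the-envelope frame-buffer bandwidth estimate (assumed figures,
# not from the paper). UHD 4:2:0, 10 bit, 60 fps reference frames.
width, height, fps = 3840, 2160, 60
bits_per_pixel = 10 * 1.5            # luma plus subsampled chroma
accesses_per_pixel = 2               # hypothetical: one write, one read on average

raw_bw = width * height * fps * bits_per_pixel * accesses_per_pixel / 8 / 1e9  # GB/s
ratio = 4                            # hypothetical mezzanine compression ratio
compressed_bw = raw_bw / ratio

print(f"uncompressed FB traffic: {raw_bw:.1f} GB/s")
print(f"with 4:1 compression:    {compressed_bw:.1f} GB/s")
```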
The image compression standard JPEG2000 proposes a large set of features useful for today's multimedia applications. Unfortunately, its complexity is greater than that of older standards. A hardware implementation brings a solution to this issue for real-time applications, such as Digital Cinema. In this paper, a decoding scheme is proposed with two main characteristics. First, the complete decoding takes place in an FPGA without accessing any external memory, allowing integration in a secured system. Secondly, a customizable level...
Remote access to large-scale images arouses a growing interest in fields such as medical imagery or remote sensing. This raises the need for algorithms guaranteeing navigation smoothness while minimizing the network resources used. In this paper, we present a model taking advantage of JPEG 2000 scalability combined with a prefetching policy. The model uses the last user action to efficiently manage the cache and prefetch the data most likely to be used next. Three different configurations are considered. In each case,...
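A toy sketch of the general idea, not the paper's model: a client-side cache keyed by JPEG 2000 tile and resolution, plus a prefetch heuristic that, after a pan, requests tiles lying further along the direction of the last user move. The key layout and the simple LRU policy are assumptions made for illustration.

```python
from collections import OrderedDict

class TileCache:
    """Tiny LRU cache of (tile_x, tile_y, resolution) -> bytes. Illustrative only."""
    def __init__(self, capacity=64):
        self.capacity, self.store = capacity, OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        return None

    def put(self, key, data):
        self.store[key] = data
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used tile

def prefetch_candidates(tile, last_move, depth=2):
    """After a pan, prefetch tiles lying further along the last move direction."""
    tx, ty, res = tile
    dx, dy = last_move
    return [(tx + dx * i, ty + dy * i, res) for i in range(1, depth + 1)]

cache = TileCache()
cache.put((10, 4, 2), b"codestream bytes")
print(prefetch_candidates((10, 4, 2), last_move=(1, 0)))  # tiles to the right
```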
With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain a huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency, visually lossless...
In this paper, we study the exploitation of language generation models for disinformation purposes from two viewpoints. Quantitatively, we argue that such models hardly deal with domain adaptation (i.e., the ability to generate text on topics that are not part of a training database, as typically required for news). For this purpose, we show that both simple machine learning and manual detection can spot machine-generated news in a practically relevant context. Qualitatively, we put forward the differences between these automatic processes,...
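A minimal sketch of the kind of "simple machine learning" detector mentioned above. The choice of a TF-IDF bag-of-words with logistic regression and the placeholder corpus are assumptions for illustration; the paper's actual setup may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: label 1 = machine-generated, 0 = human-written (placeholder strings).
texts = ["generated article about local politics", "human-written investigative report",
         "generated article about sports results", "human-written opinion column"]
labels = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
detector.fit(texts, labels)
print(detector.predict(["generated article about weather"]))
```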
This article presents the results of an annotation experiment involving 36 participants tasked with rating the subjectivity of 150 excerpts from Belgian French press articles and identifying linguistic indicators in these excerpts. We first explore inter-annotator agreement and correlations between variables associated with the articles, then perform a qualitative analysis of a sample of annotated texts. We introduce "textual heat maps", a convenient method to visualize token-level annotations. The study reveals that the perception...
Large language models (LLMs) perform very well in several natural language processing tasks but raise explainability challenges. In this paper, we examine the effect of random elements in the training of LLMs on their predictions. We do so on a task of opinionated journalistic text classification in French. Using a fine-tuned CamemBERT model and an explanation method based on relevance propagation, we find that training with different seeds produces models with similar accuracy but variable explanations. We therefore claim that characterizing the explanations'...
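One possible way to quantify how explanations vary with the training seed, sketched under assumptions: per-token relevance vectors for the same input (as would be produced by layer-wise relevance propagation, faked here with placeholder numbers) are compared with cosine similarity. This is an illustration, not the paper's exact protocol.

```python
import numpy as np

def explanation_similarity(rel_a: np.ndarray, rel_b: np.ndarray) -> float:
    """Cosine similarity between two token-relevance vectors for the same input."""
    return float(np.dot(rel_a, rel_b) / (np.linalg.norm(rel_a) * np.linalg.norm(rel_b)))

# Placeholder relevance scores from two models trained with different seeds
# (in practice these would come from layer-wise relevance propagation).
rng = np.random.default_rng(1)
rel_seed_a = rng.random(12)
rel_seed_b = rel_seed_a + rng.normal(scale=0.3, size=12)   # same input, other seed

print(f"explanation agreement: {explanation_similarity(rel_seed_a, rel_seed_b):.2f}")
```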