Javier Parapar

ORCID: 0000-0002-5997-8252
Research Areas
  • Topic Modeling
  • Mental Health via Writing
  • Recommender Systems and Techniques
  • Information Retrieval and Search Behavior
  • Digital Mental Health Interventions
  • Natural Language Processing Techniques
  • Mental Health Research Topics
  • Text and Document Classification Technologies
  • Advanced Text Analysis Techniques
  • Sentiment Analysis and Opinion Mining
  • Advanced Bandit Algorithms Research
  • Web Data Mining and Analysis
  • Advanced Image and Video Retrieval Techniques
  • Hate Speech and Cyberbullying Detection
  • Image Retrieval and Classification Techniques
  • Spam and Phishing Detection
  • Semantic Web and Ontologies
  • Advanced Clustering Algorithms Research
  • Algorithms and Data Compression
  • Complex Network Analysis Techniques
  • Mobile Crowdsensing and Crowdsourcing
  • Machine Learning and Algorithms
  • Authorship Attribution and Profiling
  • Data Management and Algorithms
  • Handwritten Text Recognition Techniques

Universidade da Coruña
2015-2024

Fundación Centro Tecnológico de la Información y la Comunicación
2024

Colciencias
2021

Universidade de Santiago de Compostela
2018-2019

Università della Svizzera italiana
2018-2019

Laboratoire d'Informatique de Paris-Nord
2007

Spectral clustering techniques have become some of the most popular clustering algorithms, mainly because of their simplicity and effectiveness. In this work, we make use of one of these techniques, Normalised Cut, in order to derive a cluster-based collaborative filtering algorithm which outperforms other standard techniques in terms of ranking precision. We frame the technique as a method for neighbour selection, and show its effectiveness when compared with other methods. Furthermore, the performance of our approach could be improved if...

10.1145/2365952.2365997 article EN 2012-09-09
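The neighbour-selection idea described in the abstract above can be illustrated with a small sketch. All data and choices here are hypothetical, and the simple 2-way sign-split variant of Normalised Cut (Fiedler vector of the symmetric normalised Laplacian) stands in for the full method:

```python
import numpy as np

def ncut_bipartition(S):
    """2-way Normalised Cut: split a similarity graph by the sign of the
    Fiedler vector of the symmetric normalised Laplacian."""
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # L_sym = I - D^{-1/2} S D^{-1/2}
    L = np.eye(len(S)) - (S * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = d_inv_sqrt * vecs[:, 1]     # map y = D^{1/2} x back to x
    return (fiedler > 0).astype(int)      # cluster label per user

# Toy user-user similarity matrix with two obvious communities.
S = np.array([[1.0, 0.9, 0.8, 0.0, 0.0, 0.0],
              [0.9, 1.0, 0.7, 0.0, 0.0, 0.0],
              [0.8, 0.7, 1.0, 0.1, 0.0, 0.0],
              [0.0, 0.0, 0.1, 1.0, 0.9, 0.8],
              [0.0, 0.0, 0.0, 0.9, 1.0, 0.7],
              [0.0, 0.0, 0.0, 0.8, 0.7, 1.0]])
labels = ncut_bipartition(S)
# Neighbour selection: candidate neighbours of user 0 are its cluster mates,
# so only users in the same cluster contribute to user 0's predictions.
neighbours = [u for u in range(len(S)) if u != 0 and labels[u] == labels[0]]
```

Restricting the neighbourhood to cluster mates is what makes this a cluster-based collaborative filtering scheme: similarity is computed graph-wide once, and prediction time then only considers the partition each user falls into.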

The evaluation of Recommender Systems is still an open issue in the field. Despite its limitations, offline evaluation usually constitutes the first step in assessing recommendation methods, due to its reduced costs and high reproducibility. Selecting the appropriate metric is a critical issue, and ranking accuracy attracts most of the attention nowadays. In this paper, we aim to shed light on the advantages of different ranking metrics which were previously used in Information Retrieval and are now used for evaluating top-N recommenders. We propose methodologies for comparing...

10.1145/3240323.3240347 article EN 2018-09-27
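As a concrete illustration of carrying IR ranking metrics over to top-N recommendation, here is a minimal sketch of Precision@k and nDCG@k with binary relevance (the item IDs, relevant set, and cutoff are invented for the example):

```python
import math

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance nDCG: DCG of the list divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg else 0.0

recommended = ["a", "b", "c", "d"]   # top-N list for one user
liked = {"a", "c"}                   # held-out relevant items
p4 = precision_at_k(recommended, liked, 4)   # 2 hits in 4 -> 0.5
n4 = ndcg_at_k(recommended, liked, 4)
```

Unlike Precision@k, nDCG discounts hits logarithmically by position, which is one reason rank-sensitive IR metrics are attractive when the order within the top-N matters.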

10.1016/j.engappai.2019.06.020 article EN Engineering Applications of Artificial Intelligence 2019-07-05

Creating test collections for offline retrieval evaluation requires human effort to judge documents' relevance. This expensive activity has motivated much work in developing methods for constructing benchmarks with lower assessment costs. In this respect, adjudication methods actively decide both which documents to judge and the order in which experts review them, in order to better exploit the assessment budget or to lower it. Researchers evaluate the quality of those methods by measuring the correlation between the known gold ranking of systems under the full collection and the observed...

10.1145/3583780.3614916 preprint EN cc-by 2023-10-21
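The correlation check mentioned in the abstract above is typically done with a rank correlation coefficient such as Kendall's tau; a self-contained sketch follows (the system names and effectiveness scores are invented):

```python
def kendall_tau(gold, observed):
    """Kendall's tau-a between two effectiveness-score dicts over the same
    set of systems: (concordant - discordant) / total pairs."""
    systems = sorted(gold)
    n = len(systems)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = ((gold[systems[i]] - gold[systems[j]])
                    * (observed[systems[i]] - observed[systems[j]]))
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Gold ranking of systems under the full judgements vs. the ranking observed
# with a cheaper benchmark: S3 and S4 swap places (one discordant pair).
gold = {"S1": 0.90, "S2": 0.70, "S3": 0.50, "S4": 0.30}
observed = {"S1": 0.80, "S2": 0.60, "S3": 0.20, "S4": 0.40}
tau = kendall_tau(gold, observed)   # (5 - 1) / 6
```

A tau close to 1 means the cheaper benchmark would lead researchers to the same conclusions about system ordering as the full, expensive one.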

Null Hypothesis Significance Testing is the de facto tool for assessing effectiveness differences between Information Retrieval systems. Researchers use statistical tests to check whether those differences will generalise to online settings or are just due to the samples observed in the laboratory. Much work has been devoted to studying which test is the most reliable when comparing a pair of systems, but real-world IR experiments involve more than two. In the multiple comparisons scenario, testing several systems...

10.48550/arxiv.2501.03930 preprint EN arXiv (Cornell University) 2025-01-07
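One standard remedy in the multiple-comparisons setting is a family-wise error correction. Below is a sketch of the Holm-Bonferroni step-down procedure over a hypothetical set of p-values; it illustrates the general technique, not necessarily the specific procedures studied in the paper:

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Step-down Holm-Bonferroni: test the smallest p-value at alpha/m,
    the next at alpha/(m-1), and so on, stopping at the first failure.
    Returns a reject/accept flag per original position."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break   # all larger p-values are accepted as well
    return reject

# Hypothetical p-values from four pairwise system comparisons.
pvals = [0.010, 0.040, 0.030, 0.005]
decisions = holm_bonferroni(pvals, alpha=0.05)   # only 0.005 and 0.010 survive
```

Without the correction, all four comparisons would be declared significant at alpha = 0.05; the step-down thresholds control the family-wise error rate across the whole set of comparisons.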

Evaluation is crucial in Information Retrieval. The Cranfield paradigm allows reproducible system evaluation by fostering the construction of standard and reusable benchmarks. Each benchmark or test collection comprises a set of queries, documents and relevance judgements. Relevance judgements are often made by humans and are thus expensive to obtain. Consequently, they are customarily incomplete: only a subset of the collection, the pool, is judged for relevance. In TREC-like campaigns, the pool is formed by the top documents retrieved by the participating systems...

10.1145/2851613.2851692 article EN 2016-04-04
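The TREC-style pooling described above can be sketched in a few lines (the run names, document IDs and pool depth are hypothetical):

```python
def form_pool(runs, depth):
    """TREC-style pooling: the pool is the union of the top-`depth`
    documents from each participating system's ranking."""
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:depth])
    return pool

# Two hypothetical runs; with depth 2, only three distinct documents
# need to be judged even though four were submitted per run.
runs = {
    "sysA": ["d1", "d2", "d3", "d4"],
    "sysB": ["d2", "d5", "d1", "d6"],
}
pool = form_pool(runs, depth=2)   # {"d1", "d2", "d5"}
```

Documents outside the pool remain unjudged, which is exactly the incompleteness of relevance judgements that the abstract refers to.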