Giovanni Maria Merola

ORCID: 0000-0003-2539-9225
Research Areas
  • Spectroscopy and Chemometric Analyses
  • Face and Expression Recognition
  • Machine Learning and Data Classification
  • Advanced Statistical Methods and Models
  • Advanced Multi-Objective Optimization Algorithms
  • Fault Detection and Control Systems
  • Control Systems and Identification
  • Blind Source Separation Techniques
  • Sparse and Compressive Sensing Techniques
  • Digital Marketing and Social Media
  • Sensory Analysis and Statistical Methods
  • Impact of Technology on Adolescents
  • Advanced Statistical Process Monitoring
  • Gene expression and cancer classification
  • Income, Poverty, and Inequality
  • Consumer Behavior in Brand Consumption and Identification
  • Bayesian Modeling and Causal Inference
  • Education and Technology Integration
  • Privacy-Preserving Technologies in Data
  • Statistical Distribution Estimation and Applications
  • Collaborative Teaching and Inclusion
  • Technology Adoption and User Behaviour
  • Manufacturing Process and Optimization
  • Media Influence and Health
  • Rough Sets and Fuzzy Logic

MIT World Peace University
2023

Xi’an Jiaotong-Liverpool University
2017-2021

RMIT Vietnam
2015-2016

Office for National Statistics
2004

University of Waterloo
2001

10.1016/j.csda.2003.11.021 article EN Computational Statistics & Data Analysis 2004-01-02

Due to the penetration of smartphones and associated mobile devices, gaming has become a ubiquitous industry worldwide. Players now have access to games at all times. Extending previous research based on the Uses and Gratifications approach, this paper presents an alternative conceptual model that can offer explanations towards understanding why players play the game they play most frequently.

10.4018/ijebr.2017100103 article EN International Journal of E-Business Research 2017-08-18

Abstract: Asset indices have been used since the late 1990s to measure wealth in developing countries. We extend the standard methodology for estimating asset indices using principal component analysis in two ways: by introducing constraints that force the index to increase in value as the number of assets owned increases, and by making the indices sparse, based on a few key assets. This is achieved by combining the constrained estimation with categorical principal component analysis. We also apply this estimation to per capita level indices. Using household survey data from northwest Vietnam and northeast Laos, we...

10.1111/rode.12568 article EN Review of Development Economics 2018-11-09
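
For readers unfamiliar with PCA-based asset indices, the sketch below shows the unconstrained starting point the paper builds on: the first principal component of household asset-ownership indicators used as a wealth index. The monotonicity constraints, sparsity, and categorical PCA introduced in the paper are not implemented here, and the asset data are simulated.

```python
# Minimal sketch of a PCA-based asset index (the unconstrained baseline).
# The paper's constraints, sparsity and categorical PCA are NOT shown here.
# The households, assets and ownership rates below are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 500 hypothetical households, 6 binary asset-ownership indicators
assets = rng.binomial(1, [0.8, 0.6, 0.4, 0.3, 0.2, 0.1], size=(500, 6))

Z = StandardScaler().fit_transform(assets)
pca = PCA(n_components=1).fit(Z)
index = Z @ pca.components_[0]          # household wealth index = first PC scores
print("loadings:", np.round(pca.components_[0], 2))
print("share of variance explained:", round(pca.explained_variance_ratio_[0], 2))
```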

Summary: Sparse principal components analysis (SPCA) is a technique for finding principal components with a small number of non-zero loadings. Our contribution to this methodology is twofold. First, we derive the sparse solutions that minimise a least squares criterion subject to sparsity requirements. Second, recognising that sparsity is not the only requirement for achieving simplicity, we suggest a backward elimination algorithm that computes sparse solutions with large loadings. This algorithm can be run without specifying the number of non-zero loadings in advance. It is also possible to impose a minimum amount of variance...

10.1111/anzs.12128 article EN Australian & New Zealand Journal of Statistics 2015-09-01
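
A minimal sketch of the two ideas in the abstract, under the assumption that the criterion being optimised is the variance of the data explained by a single component restricted to a subset of variables: for a fixed subset this reduces to a small generalised eigenproblem, and a backward elimination loop drops one variable at a time. This is an illustration of the concept, not the published algorithm.

```python
# Illustrative sketch: least squares criterion for a fixed subset, plus
# a backward elimination loop.  Not the authors' published implementation.
import numpy as np
from scipy.linalg import eigh

def subset_criterion(X, subset):
    """Variance of X explained by the best component restricted to `subset`."""
    Xs = X[:, subset]
    A = Xs.T @ X @ X.T @ Xs            # numerator of the Rayleigh quotient
    B = Xs.T @ Xs                      # Gram matrix of the selected columns
    return eigh(A, B, eigvals_only=True)[-1]   # largest generalised eigenvalue

def backward_elimination(X, target_card):
    subset = list(range(X.shape[1]))
    while len(subset) > target_card:
        # drop the variable whose removal reduces the criterion the least
        scores = [subset_criterion(X, [v for v in subset if v != j]) for j in subset]
        subset.pop(int(np.argmax(scores)))
    return subset

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
X -= X.mean(axis=0)                    # centre the data
print("selected variables:", backward_elimination(X, target_card=3))
```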

10.1016/j.jmva.2019.04.001 article EN publisher-specific-oa Journal of Multivariate Analysis 2019-04-12

The purpose of this study was to create a conceptual framework and collect some pilot data in order to underpin future research on how the Vietnamese use Facebook in their day-to-day lives. A number of key points were observed in the study, which informed the framework. Firstly, there is a paucity of research on the topic, even though users in Vietnam (population 90 million) rank among the heaviest consumers in the world, and cultural traditions and values need to be acknowledged, given that these differences, when compared with other nations, might influence use. Given studies...

10.4236/jss.2016.411006 article EN Open Journal of Social Sciences 2016-01-01

Abstract: The authors consider dimensionality reduction methods used for prediction, such as reduced rank regression, principal component regression and partial least squares. They show how it is possible to obtain intermediate solutions by estimating simultaneously the latent variables for the predictors and the responses. They obtain a continuum of solutions that goes from one method to the other via maximum likelihood and least squares estimation. The different methods are compared using simulated and real data.

10.2307/3316072 article EN Canadian Journal of Statistics 2001-06-01
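
The two endpoints that the paper connects, principal component regression and partial least squares, can be compared with standard tools; the sketch below does this on simulated data with scikit-learn. The intermediate maximum likelihood estimators derived in the paper are not available there and are not reproduced.

```python
# Hedged illustration of the two endpoints only: PCR vs. PLS on simulated data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n, p = 200, 15
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.standard_normal(n)

pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pls = PLSRegression(n_components=3)

for name, model in [("PCR", pcr), ("PLS", pls)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```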

The Zero Re-Burn-In methodology presented in this paper is based on statistical analysis of historical manufacturing data from the burn-in (BI) and re-burn-in (REBI) tests employed in semiconductor device manufacturing. The goal is to reduce (or, if possible, eliminate) the REBI test so as to lower the associated cost and time while preserving the required low failure rate of the manufactured devices. The processing of the production data sets is performed using the JMP software. The research has led to the development of a logistic regression model capable...

10.1109/i2mtc.2017.7969957 article EN 2017 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) 2017-05-01
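
As a rough illustration of the modelling step described in the abstract, the sketch below fits a logistic regression that predicts re-burn-in failure from burn-in summary features. The study itself used JMP on proprietary manufacturing data; the feature names, data, and labelling rule here are entirely hypothetical.

```python
# Illustrative sketch only: a logistic regression screen for re-burn-in.
# All features, data and the label-generating rule below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.normal(25, 3, n),     # hypothetical burn-in temperature deviation
    rng.normal(0, 1, n),      # hypothetical parametric test drift
    rng.poisson(2, n),        # hypothetical first-pass BI failure count
])
# hypothetical rule used only to create example REBI-failure labels
y = (0.8 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```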

We propose an algorithmic framework for computing sparse components from rotated principal components. This methodology, called SIMPCA, is useful to replace the unreliable practice of ignoring small coefficients when interpreting the components. The algorithm computes genuinely sparse components by projecting the principal components onto subsets of variables. The components so simplified are highly correlated with the corresponding principal components. By choosing different simplification strategies, different solutions can be obtained, which can be used to compare alternative interpretations and give some...

10.1080/02664763.2019.1676404 article EN Journal of Applied Statistics 2019-10-12
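
The projection step that distinguishes this approach from simple thresholding can be illustrated in a few lines: regress the principal component scores on a chosen subset of variables and check how correlated the resulting sparse component is with the full one. The sketch below omits the rotation and the alternative simplification strategies discussed in the paper, and uses simulated data.

```python
# Minimal sketch of projection onto a subset of variables (rotation omitted).
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 10))  # correlated data
X -= X.mean(axis=0)

# first principal component via SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
t_full = X @ Vt[0]

# one simplification strategy: keep the 3 variables with the largest |loadings|
subset = np.argsort(np.abs(Vt[0]))[-3:]
Xs = X[:, subset]

# project the full component onto the selected variables
coef, *_ = np.linalg.lstsq(Xs, t_full, rcond=None)
t_sparse = Xs @ coef

corr = np.corrcoef(t_full, t_sparse)[0, 1]
print(f"correlation between sparse and full component: {corr:.3f}")
```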

Sparse Principal Components Analysis aims to find principal components with few non-zero loadings. We derive such sparse solutions by adding a genuine sparsity requirement to the original (PCA) objective function. This approach differs from others because it preserves PCA's optimality: uncorrelatedness of the components and least squares approximation of the data. To identify the best subset of non-zero loadings we propose a Branch-and-Bound search and an iterative elimination algorithm. The latter algorithm finds solutions with large loadings and can be run without specifying...

10.48550/arxiv.1406.1381 preprint EN other-oa arXiv (Cornell University) 2014-01-01

This study compares final grade results across two different cohorts of accounting students, one using a traditional lecture model and the other using inter-teaching, an innovative pedagogy designed by Boyce and Hineline (2002) to engage students in their learning and enhance academic performance. Accounting courses have historically had record high failure rates at the offshore Vietnam campus of an Australian university. Final grade comparisons were made between students exposed to inter-teaching and those taught under the lecture-tutorial model. The treatments...

10.37074/jalt.2019.2.2.4 article EN Journal of Applied Learning & Teaching 2019-12-30

The semiconductor device manufacturing process involves a large number of sophisticated production steps that have to be controlled precisely so as to achieve and maintain the required yield and product quality specifications. Therefore, it is critical to employ extensive data collection from numerous process-associated sensors. The data are then utilized by a process control system (PCS) to monitor the process. This paper presents a real-world study performed at one of the major semiconductor manufacturers, where massive amounts of line sensor data were...

10.1109/sas48726.2020.9220086 article EN 2020-03-01

We propose a new sparse principal component analysis (SPCA) method in which the solutions are obtained by projecting full cardinality components onto subsets of variables. The resulting components are guaranteed to explain a given proportion of variance. The computation of these components is very efficient. The proposed method compares well with the optimal least squares components. We show that other SPCA methods fail to identify the best approximations and explain less variance than our solutions. We illustrate and compare the methods on a real dataset containing socioeconomic data...

10.48550/arxiv.1612.00939 preprint EN other-oa arXiv (Cornell University) 2016-01-01
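
To see why projection tends to outperform the common practice of zeroing small loadings, the sketch below computes, on simulated correlated data, the proportion of total variance explained by (a) the full first principal component, (b) a component obtained by projecting it onto a three-variable subset, and (c) the same subset with merely thresholded loadings. It illustrates the kind of comparison described in the abstract; it is not a reproduction of the paper's experiments.

```python
# Illustrative comparison: projected sparse component vs. thresholded loadings.
import numpy as np

def explained_variance(X, t):
    """Sum of squares of X explained by the rank-one LS fit on scores t."""
    return float(t @ X @ X.T @ t / (t @ t))

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 10)) @ rng.standard_normal((10, 10))  # correlated data
X -= X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
v = Vt[0]                                   # first PC loadings
subset = np.argsort(np.abs(v))[-3:]         # keep the 3 largest |loadings|

# (a)/(b) projection: regress the full PC scores on the selected columns
coef, *_ = np.linalg.lstsq(X[:, subset], X @ v, rcond=None)
t_proj = X[:, subset] @ coef

# (c) thresholding: zero the other loadings and renormalise
v_thr = np.zeros_like(v)
v_thr[subset] = v[subset]
t_thr = X @ (v_thr / np.linalg.norm(v_thr))

total = (X ** 2).sum()
print("full PC:     ", round(explained_variance(X, X @ v) / total, 3))
print("projected:   ", round(explained_variance(X, t_proj) / total, 3))
print("thresholded: ", round(explained_variance(X, t_thr) / total, 3))
```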

Hyperparameter tuning plays a crucial role in optimizing the performance of predictive learners. Cross-validation (CV) is a widely adopted technique for estimating the error of different hyperparameter settings. Repeated cross-validation (RCV) has been commonly employed to reduce the variability of CV errors. In this paper, we introduce a novel approach called blocked cross-validation (BCV), where the repetitions are blocked with respect to both the CV partition and the random behavior of the learner. Theoretical analysis and empirical experiments demonstrate...

10.48550/arxiv.2306.06591 preprint EN cc-by arXiv (Cornell University) 2023-01-01
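
Under my reading of the abstract, blocking means that within each repetition the fold partition and the learner's random seed are held fixed across all hyperparameter settings, so the settings are compared under identical random conditions. The sketch below implements that pairing with scikit-learn; the dataset and the candidate settings are arbitrary.

```python
# Sketch of the blocking idea (my reading of the abstract, not the paper's code):
# within each repetition, reuse the same CV partition and learner seed for
# every hyperparameter setting, so comparisons between settings are paired.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)
settings = [50, 200]              # candidate n_estimators values
n_repeats = 5

errors = {s: [] for s in settings}
for rep in range(n_repeats):
    cv = KFold(n_splits=5, shuffle=True, random_state=rep)       # blocked partition
    for s in settings:
        model = RandomForestRegressor(n_estimators=s, random_state=rep)  # blocked seed
        score = cross_val_score(model, X, y, cv=cv,
                                scoring="neg_mean_squared_error").mean()
        errors[s].append(-score)

for s in settings:
    print(f"n_estimators={s}: mean CV MSE over repetitions = {np.mean(errors[s]):.1f}")
```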

Abstract: Hyperparameter tuning plays a crucial role in optimizing the performance of predictive learners. Cross-validation (CV) is a widely adopted technique for estimating the error of different hyperparameter settings. Repeated cross-validation (RCV) has been commonly employed to reduce the variability of CV errors. In this paper, we introduce a novel approach called blocked cross-validation (BCV), where the repetitions are blocked with respect to both the CV partition and the random behavior of the learner. Theoretical analysis and empirical experiments...

10.21203/rs.3.rs-3221138/v1 preprint EN cc-by Research Square (Research Square) 2023-08-02

The topic of this tutorial is Least Squares Sparse Principal Components Analysis (LS SPCA), which is a simple method for computing approximated principal components that are combinations of only a few of the observed variables. Analogously to Principal Components, these components are uncorrelated and sequentially best approximate the dataset. The derivation of LS SPCA is intuitive to anyone familiar with linear regression. Since it is based on a different optimality criterion from other methods, it does not suffer from their serious drawbacks. I will demonstrate on two datasets how...

10.48550/arxiv.2105.13581 preprint EN cc-by-nc-nd arXiv (Cornell University) 2021-01-01