Leonardo Barcellona

ORCID: 0000-0003-4281-0610
Research Areas
  • Robot Manipulation and Learning
  • Human Pose and Action Recognition
  • Advanced Neural Network Applications
  • Robotics and Automated Systems
  • Hand Gesture Recognition Systems
  • Advanced Manufacturing and Logistics Optimization
  • Multimodal Machine Learning Applications
  • 3D Shape Modeling and Analysis
  • Reinforcement Learning in Robotics
  • Digital Imaging for Blood Diseases
  • Advanced Image and Video Retrieval Techniques
  • AI in cancer detection
  • Remote Sensing and Land Use
  • Advanced Vision and Imaging
  • Radiomics and Machine Learning in Medical Imaging
  • Remote Sensing in Agriculture
  • Neural Networks and Applications
  • Robotics and Sensor-Based Localization
  • Remote-Sensing Image Classification
  • Video Surveillance and Tracking Methods
  • Machine Learning and Data Classification
  • Image Retrieval and Classification Techniques
  • Face and Expression Recognition
  • Brain Tumor Detection and Classification

University of Padua
2021-2024

Polytechnic University of Turin
2023

Recognizing human actions is crucial for an effective and safe collaboration between humans and robots. For example, in a collaborative assembly task, workers can use gestures to communicate with the robot, and the robot can recognize them to anticipate the next steps of the process, leading to improved safety and productivity. In this work, we propose a general framework for action recognition based on 3D pose estimation and ensemble techniques, which allows recognizing both body and hand gestures. The framework relies on OpenPose and 2D-to-3D lifting methods to estimate...
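The abstract outlines a pipeline that lifts 2D keypoints (e.g., from OpenPose) to 3D and classifies the resulting pose sequences with an ensemble. Below is a minimal illustrative sketch of the ensemble classification stage only; the synthetic pose data, the choice of base classifiers, and the soft-voting fusion are assumptions for illustration, not the paper's actual architecture.

```python
# Illustrative sketch (not the paper's implementation): classify short
# sequences of 3D joint positions with a soft-voting ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed input: N sequences of T frames, J joints, 3D coordinates,
# e.g. produced by a 2D pose estimator followed by a 2D-to-3D lifting model.
N, T, J = 200, 16, 25
X = rng.normal(size=(N, T, J, 3)).reshape(N, -1)   # flatten each sequence
y = rng.integers(0, 4, size=N)                      # 4 hypothetical gesture classes

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",                                  # average class probabilities
)
ensemble.fit(X[:150], y[:150])
print("toy accuracy:", ensemble.score(X[150:], y[150:]))
```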

10.1016/j.robot.2023.104523 article EN cc-by Robotics and Autonomous Systems 2023-09-16

For robust classification, selecting a proper classifier is of primary importance. However, the best classifier depends on the problem, as some work better at certain tasks than others. Despite the many results collected in the literature, the support vector machine (SVM) remains the leading solution adopted in many domains, thanks to its ease of use. In this paper, we propose a new method based on convolutional neural networks (CNNs) as an alternative to SVM. CNNs are specialized in processing data with a grid-like topology that usually represents...
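The excerpt contrasts SVMs with CNNs that operate on grid-structured data. As a hedged sketch of the general idea only (not the method proposed in the paper), feature vectors can be reshaped into a small 2D grid and fed to a tiny CNN, with an SVM as the baseline; all data and model sizes below are illustrative.

```python
# Hedged sketch: compare an SVM baseline with a tiny CNN that treats each
# feature vector as an 8x8 grid. Purely illustrative; not the paper's model.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64)).astype(np.float32)       # 64 features -> 8x8 grid
y = (X[:, :32].sum(axis=1) > 0).astype(np.int64)         # synthetic binary labels

# SVM baseline on the raw feature vectors
svm = SVC().fit(X[:200], y[:200])
print("SVM accuracy:", svm.score(X[200:], y[200:]))

# Tiny CNN over the reshaped grid
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-2)
xb = torch.from_numpy(X[:200]).view(-1, 1, 8, 8)
yb = torch.from_numpy(y[:200])
for _ in range(200):                                     # short training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(cnn(xb), yb)
    loss.backward()
    opt.step()
xt = torch.from_numpy(X[200:]).view(-1, 1, 8, 8)
acc = (cnn(xt).argmax(dim=1) == torch.from_numpy(y[200:])).float().mean()
print("CNN accuracy:", acc.item())
```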

10.3390/analytics2030037 article EN cc-by Analytics 2023-09-04

This paper presents a study on an automated system for image classification, which is based on the fusion of various deep learning methods. The work explores how to create an ensemble of different Convolutional Neural Network (CNN) models and transformer topologies that are fine-tuned on several datasets to leverage their diversity. The research question addressed in this work is whether optimization algorithms can help in developing robust and efficient machine learning systems to be used in different domains for classification purposes. To do that, we...
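The abstract asks whether optimization algorithms can help build ensembles of fine-tuned CNNs and transformers. One common formulation is to weight each model's class probabilities and tune the weights against a validation set; the sketch below illustrates that pattern with synthetic scores and a naive random search, both of which are assumptions, not the optimizers studied in the paper.

```python
# Illustrative sketch: fuse class probabilities from several pretrained
# models with a weighted average and tune the weights by random search.
# The models' outputs here are synthetic; in practice they would come from
# fine-tuned CNN / transformer classifiers.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_val, n_classes = 3, 500, 10
y_val = rng.integers(0, n_classes, size=n_val)

# Synthetic per-model probability predictions on a validation set.
probs = rng.dirichlet(np.ones(n_classes), size=(n_models, n_val))

def ensemble_accuracy(weights):
    fused = np.tensordot(weights, probs, axes=1)      # weighted average of models
    return (fused.argmax(axis=1) == y_val).mean()

best_w, best_acc = np.ones(n_models) / n_models, 0.0
for _ in range(1000):                                  # naive random search
    w = rng.random(n_models)
    w /= w.sum()
    acc = ensemble_accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
print("best weights:", best_w.round(3), "val accuracy:", best_acc)
```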

10.1109/access.2023.3330442 article EN cc-by IEEE Access 2023-01-01

Robot grasping has been widely studied in the last decade. Recently, Deep Learning has made it possible to achieve remarkable results in grasp pose estimation using depth and RGB images. However, only a few works consider the choice of the object to grasp. Moreover, they require a huge amount of data for generalizing to unseen object categories. For this reason, we introduce the Few-shot Semantic Grasping task, where the objective is inferring the correct grasp given only five labelled images of the target object. We propose a new deep learning architecture...
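The Few-shot Semantic Grasping task, as described, selects which object to grasp from only five labelled support images of the target. A common few-shot building block is prototype matching in an embedding space; the sketch below is an assumed illustration of that idea, not the architecture proposed in the paper, and `embed` is a stand-in for a learned image encoder.

```python
# Hedged sketch of prototype matching for a few-shot target-selection step.
# `embed` is a placeholder for a learned encoder; here it is a random
# projection of flattened crops, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=(32 * 32 * 3, 128))

def embed(img):
    """Map an image crop (32x32x3) to a unit-norm feature vector."""
    v = img.reshape(-1) @ proj
    return v / (np.linalg.norm(v) + 1e-8)

# Five labelled support images of the target object (assumed preprocessed crops).
support = [rng.random((32, 32, 3)) for _ in range(5)]
prototype = np.mean([embed(s) for s in support], axis=0)

# Candidate object crops found in the scene; pick the one closest to the prototype.
candidates = [rng.random((32, 32, 3)) for _ in range(8)]
scores = [float(embed(c) @ prototype) for c in candidates]
target_idx = int(np.argmax(scores))
print(f"grasp candidate {target_idx} selected (cosine score {scores[target_idx]:.3f})")
```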

10.1109/icra48891.2023.10160618 article EN 2023-05-29

The introduction of deep learning caused a significant breakthrough in digital pathology, thanks to its capability of mining hidden data patterns in digitised histological slides to resolve diagnostic tasks and extract prognostic and predictive information. However, the high performance achieved in classification depends on the availability of large datasets, whose collection and preprocessing are still time-consuming processes. Therefore, strategies to make these steps more efficient are worth investigating. This work...

10.1016/j.jpi.2023.100356 article EN cc-by Journal of Pathology Informatics 2023-12-08

The ability of a robot to pick an object, known as grasping, is crucial for several applications, such as assembly or sorting. In such tasks, selecting the right target is essential for inferring the correct configuration of the gripper. A common solution to this problem relies on semantic segmentation models, which often show poor generalization to unseen objects and require considerable time and massive data to be trained. To reduce the need for large datasets, some grasping pipelines exploit few-shot segmentation models, which are capable of recognizing new...
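The excerpt describes grasping pipelines that replace heavy semantic segmentation with few-shot segmentation to recognize new objects. One small step such a pipeline might contain is filtering grasp candidates by the predicted mask of the target object; the sketch below illustrates that step with a synthetic mask and synthetic candidates, and is not taken from the paper.

```python
# Illustrative sketch: keep only grasp candidates whose center lies inside
# the target object's predicted segmentation mask (synthetic data below).
import numpy as np

rng = np.random.default_rng(0)

# Assumed output of a few-shot segmentation model: a binary mask over the image.
mask = np.zeros((480, 640), dtype=bool)
mask[180:300, 250:400] = True

# Grasp candidates as (row, col, quality) triples from a grasp pose estimator.
candidates = np.column_stack([
    rng.integers(0, 480, size=50),
    rng.integers(0, 640, size=50),
    rng.random(50),
])

inside = mask[candidates[:, 0].astype(int), candidates[:, 1].astype(int)]
valid = candidates[inside]
if len(valid):
    best = valid[valid[:, 2].argmax()]
    print(f"selected grasp at ({int(best[0])}, {int(best[1])}), quality {best[2]:.2f}")
else:
    print("no grasp candidate falls on the target object")
```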

10.48550/arxiv.2404.12717 preprint EN arXiv (Cornell University) 2024-04-19

Robotic waste sorting poses significant challenges in both perception and manipulation, given the extreme variability of objects that should be recognized on a cluttered conveyor belt. While deep learning has proven effective in solving complex tasks, the necessity for extensive data collection and labeling limits its applicability in real-world scenarios like waste sorting. To tackle this issue, we introduce a data augmentation method based on a novel GAN architecture called wasteGAN. The proposed method allows increasing...
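wasteGAN, as named in the abstract, is a GAN-based augmentation method for learning from few labelled samples; its actual architecture is not detailed here. As a hedged sketch of the generic pattern only, a trained generator can be sampled to enlarge a small labelled set; the placeholder generator and image sizes below are assumptions.

```python
# Hedged sketch of GAN-based augmentation (generic pattern, not wasteGAN itself):
# sample a trained generator to add synthetic images to a small labelled batch.
import torch
import torch.nn as nn

# Placeholder generator; in practice this would be the trained GAN generator.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)

def augment(real_images: torch.Tensor, n_synthetic: int) -> torch.Tensor:
    """Concatenate real images with generator samples (shape: N x 3 x 64 x 64)."""
    z = torch.randn(n_synthetic, 100)
    with torch.no_grad():
        fake = generator(z).view(n_synthetic, 3, 64, 64)
    return torch.cat([real_images, fake], dim=0)

real = torch.rand(8, 3, 64, 64)            # small labelled batch (stand-in)
augmented = augment(real, n_synthetic=32)
print("augmented batch size:", augmented.shape[0])
```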

10.48550/arxiv.2409.16999 preprint EN arXiv (Cornell University) 2024-09-25

A world model provides an agent with a representation of its environment, enabling it to predict the causal consequences of actions. Current world models typically cannot directly and explicitly imitate the actual environment in front of a robot, often resulting in unrealistic behaviors and hallucinations that make them unsuitable for real-world applications. In this paper, we introduce a new paradigm for constructing world models that are explicit representations of the real world and its dynamics. By integrating cutting-edge advances in real-time photorealism...
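The abstract argues for world models that explicitly represent real dynamics rather than hallucinating them. As a generic, hedged sketch of the interface a learned world model exposes (not the paradigm introduced in the paper), an agent can query a dynamics model to roll out the imagined consequences of an action sequence; the MLP, dimensions, and rollout helper below are illustrative assumptions.

```python
# Generic world-model sketch (illustrative only): a learned dynamics model
# that predicts the next state from the current state and an action, used
# to roll out imagined trajectories.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4

dynamics = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
    nn.Linear(128, STATE_DIM),
)

def rollout(state: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    """Predict a trajectory of states for a given action sequence."""
    states = [state]
    for a in actions:
        nxt = dynamics(torch.cat([states[-1], a]))
        states.append(nxt)
    return torch.stack(states)

s0 = torch.zeros(STATE_DIM)
plan = torch.randn(10, ACTION_DIM)          # candidate action sequence
trajectory = rollout(s0, plan)
print("imagined trajectory shape:", tuple(trajectory.shape))
```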

10.48550/arxiv.2412.14957 preprint EN arXiv (Cornell University) 2024-12-19

In human-robot collaboration, perception plays a major role in enabling the robot to understand the surrounding environment and the position of humans inside the working area, which represents a key element for an effective and safe collaboration. Human pose estimators based on skeletal models are among the most popular approaches to monitor the humans around the robot, but they do not take into account information such as body volume, which is needed by collision avoidance algorithms. In this paper, we propose a novel 3D human representation derived from...
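The abstract contrasts purely skeletal pose estimates with representations that also encode body volume for collision avoidance. One common way to add volume to a skeleton is to wrap each bone in a capsule and measure clearance to it; the sketch below is an assumed illustration of that idea, not the representation proposed in the paper, and the joint positions and radii are made up.

```python
# Hedged sketch: approximate body volume by wrapping skeleton bones in capsules
# and checking the robot's distance to them (illustrative, not the paper's model).
import numpy as np

def point_to_capsule_distance(p, a, b, radius):
    """Distance from point p to a capsule with axis segment a-b and given radius."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab                       # nearest point on the bone segment
    return np.linalg.norm(p - closest) - radius

# Assumed 3D joints from a pose estimator (metres) and per-bone radii.
joints = {"shoulder": np.array([0.0, 0.0, 1.4]),
          "elbow":    np.array([0.3, 0.0, 1.2]),
          "wrist":    np.array([0.5, 0.0, 1.0])}
bones = [("shoulder", "elbow", 0.06), ("elbow", "wrist", 0.05)]

robot_point = np.array([0.4, 0.1, 1.1])       # e.g. a point on the end effector
clearance = min(point_to_capsule_distance(robot_point, joints[a], joints[b], r)
                for a, b, r in bones)
print(f"minimum clearance to the human: {clearance:.3f} m")
```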

10.1109/icar53236.2021.9659456 article EN 2021 20th International Conference on Advanced Robotics (ICAR) 2021-12-06