- Hand Gesture Recognition Systems
- Robot Manipulation and Learning
- Human Pose and Action Recognition
- Robotics and Automated Systems
- Advanced Vision and Imaging
- Insect and Arachnid Ecology and Behavior
- Horticultural and Viticultural Research
- Insect and Pesticide Research
- Advanced Optical Sensing Technologies
- 3D Surveying and Cultural Heritage
- Plant and Animal Studies
- Teleoperation and Haptic Systems
- Irrigation Practices and Water Management
- Ocular and Laser Science Research
- COVID-19 Diagnosis Using AI
- Balance, Gait, and Falls Prevention
- Fermentation and Sensory Analysis
- Food Supply Chain Traceability
- Social Robot Interaction and HRI
- Prosthetics and Rehabilitation Robotics
- Spinal Cord Injury Research
- Muscle Activation and Electromyography Studies
- Context-Aware Activity Recognition Systems
- Stroke Rehabilitation and Recovery
- Leaf Properties and Growth Measurement
University of Brescia
2018-2025
Institute of Electrical and Electronics Engineers
2019
University of Gävle
2019
This paper is a first step towards a smart hand gesture recognition setup for collaborative robots, using a Faster R-CNN object detector to find the accurate position of hands in RGB images. In this work, a gesture is defined as the combination of two hands, where one acts as an anchor and the other codes the command for the robot. Additional spatial requirements are used to improve the performance of the model and to filter out incorrect predictions made by the detector. As a first step, we consider only four gestures.
An open question for the biomechanical research community is the accurate estimation of the volume and mass of each segment of the human body, especially when indirect measurements are based on modeling. Traditional methods involve the adoption of anthropometric tables, which describe only an average body shape, or manual measurements, which are time-consuming and depend on the operator. We propose a novel method based on the acquisition of a 3D scan of the subject's body obtained using a consumer-end RGB-D camera. The segments' separation is obtained by combining the scan with the skeleton provided by BlazePose...
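The abstract above describes separating body segments by combining a 3D scan with a BlazePose skeleton; the paper's exact procedure is not reproduced here, but a minimal nearest-joint point-labelling step could look like the following sketch (the function name, joint layout, and toy data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def segment_points_by_joint(points, joints):
    """Assign each 3D scan point to its nearest skeleton joint.

    points: (N, 3) array of scan points (metres).
    joints: (J, 3) array of skeleton joint positions (e.g. from BlazePose).
    Returns an (N,) array of joint indices, one label per point.
    """
    # Pairwise squared distances between every point and every joint.
    d2 = ((points[:, None, :] - joints[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Toy example: two joints on the z-axis, four points near one or the other.
joints = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
points = np.array([[0.01, 0.00, 0.05],
                   [0.00, 0.02, 0.10],
                   [0.00, 0.01, 0.95],
                   [0.02, 0.00, 1.05]])
labels = segment_points_by_joint(points, joints)
print(labels.tolist())  # → [0, 0, 1, 1]
```

Once points are labelled per segment, segment volumes (and, via density tables, masses) can be estimated from each labelled sub-cloud.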
Time-of-flight cameras are widely adopted in a variety of indoor applications, ranging from industrial object measurement to human activity recognition. However, the available products may differ in terms of the quality of the acquired point cloud, and the datasheet provided by the manufacturer may not be enough to guide researchers in choosing the right device for their application. Hence, this work details an experimental procedure to assess the time-of-flight cameras' error sources that should be considered when designing an application...
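One standard way to characterise a depth camera's error is to image a flat target and measure the scatter of the point cloud around a best-fit plane. The paper's full protocol covers more error sources; the following is only a minimal sketch of that single precision metric, with synthetic data standing in for a real acquisition:

```python
import numpy as np

def plane_fit_rms(points):
    """Fit a plane z = a*x + b*y + c to a point cloud of a flat target
    and return the RMS of the residuals, a common precision indicator
    when characterising depth-camera error sources."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic flat wall at z = 1 m with 1 mm Gaussian depth noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(1000, 2))
z = 1.0 + rng.normal(0.0, 0.001, size=1000)
cloud = np.column_stack([xy, z])
rms = plane_fit_rms(cloud)
print(round(rms, 4))  # close to the injected 1 mm noise level
```

Repeating this at several distances and target reflectivities gives the kind of error map a datasheet typically omits.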
In this paper, we present a smart hand gesture recognition experimental setup for collaborative robots, using a Faster R-CNN object detector to find the accurate position of hands in RGB images taken from a Kinect v2 camera. We used MATLAB code and a purposely designed function for the prediction phase, necessary for detecting static gestures the way we have defined them. We performed a number of experiments with different datasets to evaluate the performance of the model in several situations: a basic dataset of four gestures formed by the combination of both hands, where...
The HANDS dataset has been created for human-robot interaction research, and it is composed of spatially and temporally aligned RGB and depth frames. It contains 12 static single-hand gestures performed with both the right hand and the left hand, plus 3 two-hand gestures, for a total of 29 unique classes. Five actors (two females and three males) have been acquired performing the gestures, each of them adopting different background and light conditions. For each actor, 150 frames and their corresponding depth frames were collected per gesture, for 2400 per actor. Data were collected...
Since cobots are designed to be flexible, they are frequently repositioned to change the production line according to needs; hence, their working area (user frame) often needs to be calibrated. Therefore, it is important to adopt a fast and intuitive user frame calibration method that allows even non-expert users to perform the procedure effectively, reducing the possible mistakes that may arise in such contexts. The aim of this work was to quantitatively assess the performance of different calibration procedures in terms of accuracy, complexity, time,...
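For context on what a user frame calibration computes: a common teach procedure (not necessarily the specific procedures compared in the paper) defines the frame from three taught points — an origin, a point along +X, and a point in the XY plane. A sketch of that geometry:

```python
import numpy as np

def user_frame_from_three_points(origin, x_point, xy_point):
    """Build a user-frame rotation matrix and origin from the classic
    three-point teach procedure: an origin, a point on the +X axis,
    and a point in the XY plane (on the +Y side)."""
    x_axis = x_point - origin
    x_axis = x_axis / np.linalg.norm(x_axis)
    v = xy_point - origin
    z_axis = np.cross(x_axis, v)          # normal to the taught plane
    z_axis = z_axis / np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)     # completes the right-handed triad
    R = np.column_stack([x_axis, y_axis, z_axis])  # frame axes as columns
    return R, origin

R, o = user_frame_from_three_points(
    np.array([1.0, 2.0, 0.0]),   # taught origin
    np.array([2.0, 2.0, 0.0]),   # point along +X
    np.array([1.0, 3.0, 0.0]))   # point in the XY plane
print(np.allclose(R, np.eye(3)))  # → True for this axis-aligned example
```

The accuracy of the resulting frame depends directly on how precisely the three points are taught, which is why procedure complexity and operator expertise matter.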
This paper presents the validation of a marker-less motion capture system used to evaluate the upper limb stress of subjects using exoskeletons for locomotion. The system fuses the human skeletonization provided by commercial 3D cameras with the forces exchanged between the user and the ground through the upper limbs, measured by instrumented crutches. The aim is to provide a low-cost, accurate, and reliable technology useful to the trainer for a quantitative evaluation of the impact of assisted gait on the subject, without the need for an instrumented lab. The reaction forces at the upper limbs' joints are measured...
In this paper, the novel teleoperation method "Hands-Free" is presented. Hands-Free is a vision-based augmented reality system that allows users to teleoperate a robot end-effector with their hands in real time. The system leverages the OpenPose neural network to detect the human operator's hand in a given workspace, achieving an average inference time of 0.15 s. The user's index finger position is extracted from the image and converted into world coordinates to move the robot in a different workspace. The skeleton is visualized in real time while moving the actual robot, allowing...
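The step of converting the detected index-finger pixel into world coordinates typically relies on a pinhole camera model plus a depth value. The following is a minimal back-projection sketch under that assumption (the intrinsic values are illustrative, not the paper's calibration):

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth (metres) into 3D
    camera coordinates using the pinhole model; a further extrinsic
    transform would map these into the robot's world frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative intrinsics (focal lengths and principal point in pixels).
fx = fy = 600.0
cx, cy = 320.0, 240.0
p = pixel_to_camera(920.0, 240.0, 1.0, fx, fy, cx, cy)
print(p.tolist())  # → [1.0, 0.0, 1.0]
```

In a teleoperation loop like the one described, this conversion runs once per frame on the OpenPose index-finger keypoint.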
Winter is the season of main concern for beekeepers, since temperature, humidity, and potential infection from mites and other diseases may lead the colony to death. As a consequence, beekeepers perform invasive checks on the colonies, exposing them to further harm. This paper proposes the novel design of an instrumented beehive involving color cameras placed inside the hive and at the bottom of it, paving the way to new frontiers in beehive monitoring. The overall acquisition system is described, focusing on the design choices towards an effective solution for internal, contactless,...
Yield estimation is a key theme for precision agriculture, especially for small fruits and in-field scenarios. This paper focuses on the metrological validation of a novel deep-learning model that robustly estimates both the number and the radii of grape berries in vineyards using color images, allowing the computation of the visible (and total) volume of the clusters, which is necessary to reach the ultimate goal of estimating the yield production. The proposed algorithm is validated by analyzing its performance on a custom dataset. The berries,...
In this paper, a vision system for safety applications in human-robot collaboration is presented. The system is based on two Time-Of-Flight (TOF) cameras for 3D acquisition. The point clouds are registered in a common reference system, and human and robot recognition are then implemented. Human detection is performed using a customized version of the Histogram of Oriented Gradients (HOG) algorithm. Robot detection is achieved with a procedure based on the Kanade-Lucas-Tomasi (KLT) algorithm. Two safety strategies have been developed. The first one is based on the definition of suitable comfort zones for both the operator...
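The core of HOG, which the abstract says is used in customized form, is a per-cell histogram of gradient orientations weighted by gradient magnitude. A minimal sketch of that single building block (without the block normalisation and sliding-window stages, and not the paper's customization):

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Unsigned-gradient orientation histogram for one HOG cell,
    weighted by gradient magnitude (no block normalisation here)."""
    gx = np.zeros_like(cell)
    gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]   # central differences
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())     # magnitude-weighted vote
    return hist

# A vertical edge produces a horizontal gradient: energy lands in the 0° bin.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
h = cell_hog(cell)
print(int(h.argmax()))  # → 0
```

Concatenating and normalising such histograms over blocks yields the descriptor that a classifier then scans over the image for person detection.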
This paper describes the embryonal development stage of a ROS-based application whose aim is to simplify human-robot interaction in collaborative workstations. The idea is to command the robot through voice commands detected by speech recognition algorithms, and to empower the robot's understanding of its surroundings to perform tasks using a visual system capable of recognizing simple objects. This first version has been developed virtually in the Gazebo simulation software. Some qualitative tests regarding vocal control and detection...
This research paper aimed to validate two methods for measuring loads during walking with instrumented crutches: one method to estimate partial weight-bearing on the lower limbs and another to estimate shoulder joint reactions. Currently, in gait laboratories, high-end measurement systems are used to extract kinematic and kinetic data, but such facilities are expensive and not accessible to all patients. The proposed approach uses crutches to measure ground reaction forces and does not require any motion capture devices or force platforms. The load...
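Under a simplified static-balance assumption (which is a sketch, not the paper's full estimation method), the partial weight-bearing on the lower limbs is the share of body weight not supported by the two instrumented crutches:

```python
def partial_weight_bearing(body_weight_n, left_crutch_n, right_crutch_n):
    """Percentage of body weight carried by the lower limbs, estimated
    as the share not supported by the two instrumented crutches.
    Simplified static-balance assumption for illustration only."""
    lower_limb_load = body_weight_n - (left_crutch_n + right_crutch_n)
    return 100.0 * lower_limb_load / body_weight_n

# An 800 N subject loading 150 N on each crutch bears 62.5 % of body weight.
pwb = partial_weight_bearing(800.0, 150.0, 150.0)
print(pwb)  # → 62.5
```

During gait the crutch forces vary over the cycle, so the actual estimate is computed sample by sample from the measured force signals.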
The management and monitoring of diagnostic routines for the active surveillance of colonization by antibiotic-resistant bacteria require the use of advanced data drivers based on field sensors that characterize the various phases of hospital processes. To this aim, this study describes the proof of concept of an integrated system exploiting smart sensors and a digital platform that utilizes flow diagrams and business process model and notation (BPMN) to optimize the processes. The focus is the development and validation of a sensor vision system, which extensively leverages machine...
Determining water stress in plants and how this affects their growth, yield, and quality is not an easy task. This is usually done by monitoring the soil through manual measurements or by visual inspection to detect any premature sign of wilting. However, these methods are unreliable and still require human intervention in the field. The proposed instrumented field is a stepping stone towards automated crop monitoring by means of low-cost equipment. It was tested on string bean and tomato crops. The experiment was aimed at determining, according to specific...
This data article describes the collection process of two sub-datasets comprehending images of Apis mellifera captured inside a commercial beehive (the "Frame" sub-dataset, 2057 images) and at the bottom of it (the "Bottom" sub-dataset, 1494 images). The data was collected in spring 2023 (April-May) for the "Frame" sub-dataset and in September for the "Bottom" sub-dataset. Acquisitions were carried out using an instrumented beehive developed for the purpose of monitoring the colony's health status during long periods of time. The color cameras used are equipped with different lenses...
In biomechanics, a still unresolved question is how to estimate with enough accuracy the volume and mass of each body segment of a subject. This is important for several applications, ranging from the rehabilitation of injured subjects to the study of athletic performance via the analysis of the dynamic inertia of each segment. However, traditionally this evaluation is done by referring to anthropometric tables or by approximating the volumes using manual measurements. We propose a novel method based on the 3D reconstruction of the subject's body using a commercial...
This paper focuses on the development and evaluation of a portable vision-based acquisition device for vineyards, equipped with a GPU-accelerated processing unit. The device is designed to perform in-field image acquisitions of high-resolution dense information. It includes three vision systems: an Intel® RealSense™ depth camera D435i, a tracking camera T265, and a Basler RGB DART camera. It is powered by an Nvidia Jetson Nano board for both simultaneous data acquisition and real-time processing. The paper presents two specific tasks for which the device can be useful:...
Counting tasks with overlapping and occluded targets are often tackled by means of neural networks outputting density maps. While this approach has been proven to be highly effective for crowd-counting tasks, it has not been exploited extensively in other fields (like fruit counting). Furthermore, it has never been used to infer the shape or size of the recognized objects. In this paper, we present a novel deep learning-based methodology to automatically estimate the number of grape berries in an image and evaluate their average radius as...
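The key property of the density-map formulation mentioned above is that each annotated object contributes a unit-sum Gaussian blob to the target map, so the integral of the map equals the object count even when targets overlap. A minimal sketch of how such a ground-truth map is built (the blob placement and sigma are illustrative, not the paper's settings):

```python
import numpy as np

def gaussian_blob(shape, center, sigma):
    """A 2D Gaussian normalised to unit sum, so each annotated object
    contributes exactly 1 to the integral of the density map."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((xs - center[1]) ** 2 + (ys - center[0]) ** 2)
               / (2.0 * sigma ** 2))
    return g / g.sum()

# Ground-truth density map for three annotated berries: its sum is the count.
density = np.zeros((64, 64))
for c in [(10, 12), (30, 40), (50, 20)]:
    density += gaussian_blob(density.shape, c, sigma=2.0)
print(round(float(density.sum())))  # → 3
```

A network trained to regress such maps is then counted by summing its output; extending the blobs to also encode radius is the novelty the abstract claims.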
This paper presents the implementation of a face recognition node for the Sawyer collaborative robot using the Robot Operating System (ROS), and its preliminary experimental validation. The node acquires images through the head camera of the cobot and elaborates them with OpenCV libraries, applying the Viola-Jones algorithm to detect faces. The aim is to use these functionalities to make the robot start operations only when it correctly recognizes an operator in the workspace, thus increasing the safety of human-robot interaction. The developed ROS node can handle multiple faces at the same...