Justinas Mišeikis

ORCID: 0000-0002-9342-1722
Research Areas
  • Robot Manipulation and Learning
  • Advanced Neural Network Applications
  • Robotics and Sensor-Based Localization
  • Robotic Path Planning Algorithms
  • Advanced Vision and Imaging
  • Robotics and Automated Systems
  • Stroke Rehabilitation and Recovery
  • Domain Adaptation and Few-Shot Learning
  • Muscle activation and electromyography studies
  • Autonomous Vehicle Technology and Safety
  • Electric and Hybrid Vehicle Technologies
  • Advanced Battery Technologies Research
  • Soft Robotics and Applications
  • AI in Service Interactions
  • Anomaly Detection Techniques and Applications
  • Human Pose and Action Recognition
  • Image Processing Techniques and Applications
  • Social Robot Interaction and HRI
  • Smart Parking Systems Research
  • Electric Vehicles and Infrastructure
  • Optical measurement and interference techniques
  • Prosthetics and Rehabilitation Robotics
  • Video Surveillance and Tracking Methods
  • Image and Object Detection Techniques

Union Bank of Switzerland
2020

University of Oslo
2014-2019

Hocoma (Switzerland)
2013

MUNDUS is an assistive framework for recovering direct interaction capability of severely motor impaired people based on arm reaching and hand functions. It aims at achieving personalization, modularity and maximization of the user's involvement in assistive systems. To this end, it exploits any residual control of the end-user and can be adapted to the level of severity or the progression of the disease, allowing the user to voluntarily interact with the environment. The target pathologies are high-level spinal cord injury (SCI) and neurodegenerative and genetic...

10.1186/1743-0003-10-66 article EN cc-by Journal of NeuroEngineering and Rehabilitation 2013-01-01
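
One way to picture the modularity and adaptation to residual control described above is a small configuration step that enables assistive modules only where the user's residual function is insufficient. The sketch below is purely illustrative: the module names, thresholds and the ResidualProfile structure are assumptions, not part of MUNDUS.

from dataclasses import dataclass

@dataclass
class ResidualProfile:
    """Hypothetical summary of a user's residual motor control."""
    arm_strength: float   # 0.0 (none) .. 1.0 (full)
    hand_grasp: float     # 0.0 (none) .. 1.0 (full)
    reliable_emg: bool    # usable voluntary muscle (EMG) activity
    reliable_gaze: bool   # usable eye-tracking input

def select_modules(profile):
    """Enable assistive modules only where residual control is insufficient,
    keeping the user as involved as possible (illustrative thresholds)."""
    modules = []
    if profile.arm_strength < 0.7:
        modules.append("arm_weight_support")
    if profile.arm_strength < 0.3:
        modules.append("arm_muscle_stimulation")
    if profile.hand_grasp < 0.3:
        modules.append("hand_grasp_assistance")
    # Command interface: prefer residual muscle activity, fall back to gaze.
    if profile.reliable_emg:
        modules.append("emg_command_interface")
    elif profile.reliable_gaze:
        modules.append("gaze_command_interface")
    return modules

print(select_modules(ResidualProfile(0.2, 0.1, True, True)))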

Lio is a mobile robot platform with a multi-functional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis. It is intrinsically safe by having full coverage in soft artificial-leather material as well as collision detection, limited speed and forces. Furthermore, the robot has a compliant motion controller. A combination of visual, audio, laser,...

10.1109/lra.2020.3007462 article EN IEEE Robotics and Automation Letters 2020-07-07
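
The "collision detection, limited speed and forces" behaviour mentioned above can be illustrated with a minimal supervisory check that clamps commanded joint velocities and stops motion when the measured external force looks like a contact. This is a generic sketch with assumed limits, not Lio's actual controller.

import numpy as np

# Assumed limits for illustration only; real values are robot-specific.
MAX_JOINT_SPEED = 0.5      # rad/s per joint
MAX_EXTERNAL_FORCE = 15.0  # N at the end effector

def safe_velocity_command(desired_vel, measured_force):
    """Clamp joint velocities and stop on a collision-like force spike."""
    desired_vel = np.asarray(desired_vel, dtype=float)
    if np.linalg.norm(measured_force) > MAX_EXTERNAL_FORCE:
        # Treat a large external force as a contact event: stop immediately.
        return np.zeros_like(desired_vel)
    return np.clip(desired_vel, -MAX_JOINT_SPEED, MAX_JOINT_SPEED)

# Example: a too-fast command gets clamped, a contact stops the arm.
print(safe_velocity_command([0.8, -0.2, 0.1], measured_force=[1.0, 0.0, 2.0]))
print(safe_velocity_command([0.8, -0.2, 0.1], measured_force=[0.0, 0.0, 20.0]))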

With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, a reliable calibration is needed, both for camera internal calibration as well as Eye-to-Hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of proven methods, allowing quick and automatic calibration for the integration of systems consisting of a varying number of cameras, using a standard checkerboard calibration grid. Our approach...

10.1109/sii.2016.7844087 article EN 2016 IEEE/SICE International Symposium on System Integration (SII) 2016-12-01
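
A pipeline of this kind can be approximated with standard OpenCV calls: checkerboard detections give the camera internal (intrinsic) parameters, and the per-view board poses paired with recorded robot end-effector poses feed a hand-eye solver. The sketch below is a generic reconstruction under assumed inputs (an image list with matching gripper poses), not the exact framework of the paper.

import cv2
import numpy as np

PATTERN = (9, 6)       # inner checkerboard corners (assumed board)
SQUARE_SIZE = 0.025    # square size in metres (assumed)

# 3D corner coordinates in the board frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

def calibrate(samples):
    """samples: list of (image, R_gripper2base, t_gripper2base) recorded while
    the robot presents the checkerboard to the camera in different poses."""
    obj_pts, img_pts, R_grip, t_grip = [], [], [], []
    for image, R_g, t_g in samples:
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:  # keep robot poses aligned with accepted views
            obj_pts.append(objp)
            img_pts.append(corners)
            R_grip.append(R_g)
            t_grip.append(t_g)

    # Camera internal (intrinsic) calibration from the checkerboard views.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)

    # Hand-eye calibration from board poses (camera frame) and robot poses.
    # For an Eye-to-Hand setup the base-to-gripper transforms are typically
    # inverted before this call.
    R_board = [cv2.Rodrigues(r)[0] for r in rvecs]
    R_he, t_he = cv2.calibrateHandEye(R_grip, t_grip, R_board, tvecs)
    return K, dist, R_he, t_he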

Electric vehicles (EVs) and plug-in hybrid vehicles (PHEVs) are rapidly gaining popularity on our roads. Besides a comparatively high purchasing price, the two main problems limiting their use are the short driving range and the inconvenient charging process. In this paper we address the latter by presenting an automatic robot-based charging station with 3D vision guidance for plugging and unplugging the charger. First of all, the whole system concept, consisting of a 3D vision system, a UR10 robot and a charging station, is presented. Then we show the shape-based matching methods used...

10.48550/arxiv.1703.05381 preprint EN other-oa arXiv (Cornell University) 2017-01-01
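
Shape-based matching of the kind mentioned can be approximated with template matching on edge images: a template of the charging-port outline is slid over the camera image and the best-scoring location is taken as the port position, which the 3D data would then turn into a full pose for the UR10. A rough sketch under assumed inputs, not the matcher used in the paper.

import cv2

def locate_charging_port(scene_bgr, template_gray, min_score=0.7):
    """Find the charging-port template in a camera image.
    Returns the top-left pixel of the best match, or None if too weak."""
    scene_gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    # Edge images make the match depend on shape rather than colour/lighting.
    scene_edges = cv2.Canny(scene_gray, 50, 150)
    template_edges = cv2.Canny(template_gray, 50, 150)
    result = cv2.matchTemplate(scene_edges, template_edges, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= min_score else None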

The field of collaborative robotics and human-robot interaction often focuses on the prediction of human behaviour, while assuming the information about the robot setup and configuration to be known. This is the case with fixed setups, which have all the sensors calibrated in relation to the rest of the system. However, it becomes a limiting factor when the system needs to be reconfigured or moved. We present a deep learning approach, which aims to solve this issue. Our method learns to identify and precisely localise the robot in 2D camera images, so having no...

10.1109/urai.2018.8441813 preprint EN 2018-06-01
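
One way to realise "identify and precisely localise the robot in 2D camera images" is a small convolutional network with two heads: one classifying whether a robot is visible and one regressing 2D keypoints for its joints. The sketch below is a generic stand-in (architecture and joint count are assumptions), not the network from the paper.

import torch
import torch.nn as nn

NUM_JOINTS = 6  # assumed number of robot joints to localise

class RobotLocaliser(nn.Module):
    """Shared convolutional backbone with a presence head and a 2D keypoint head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.presence_head = nn.Linear(64, 1)               # robot visible or not
        self.keypoint_head = nn.Linear(64, NUM_JOINTS * 2)  # (x, y) per joint

    def forward(self, image):
        features = self.backbone(image)
        presence = torch.sigmoid(self.presence_head(features))
        keypoints = self.keypoint_head(features).view(-1, NUM_JOINTS, 2)
        return presence, keypoints

# Example: one 3x240x320 image in, presence probability and 2D joints out.
presence, joints = RobotLocaliser()(torch.randn(1, 3, 240, 320))
print(presence.shape, joints.shape)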

With advancing technologies, robotic manipulators and visual environment sensors are becoming cheaper and more widespread. However, robot control can still be a limiting factor for better adaptation of these technologies. Robotic manipulators perform very well in structured workspaces, but do not adapt to unexpected changes, like people entering the workspace. We present a method combining 3D camera based workspace mapping with predictive and reflexive manipulator trajectory estimation to allow efficient and safer...

10.1109/ssci.2016.7850237 article EN 2016 IEEE Symposium Series on Computational Intelligence (SSCI) 2016-12-01
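
The combination of 3D camera workspace mapping with reflexive trajectory handling can be pictured as a voxel occupancy map that the controller checks before executing each waypoint, reacting when an obstacle appears along the path. A sketch under assumed data structures and resolution, not the mapping or prediction step of the paper.

import numpy as np

VOXEL_SIZE = 0.05  # metres (assumed map resolution)

def build_occupancy(points):
    """Quantise a 3D point cloud from the cameras into a set of occupied voxels."""
    return {tuple(v) for v in np.floor(np.asarray(points) / VOXEL_SIZE).astype(int)}

def check_trajectory(waypoints, occupied, clearance_voxels=1):
    """Return the index of the first waypoint that comes too close to an
    occupied voxel, or None if the whole trajectory is clear."""
    for i, wp in enumerate(waypoints):
        v = np.floor(np.asarray(wp) / VOXEL_SIZE).astype(int)
        for dx in range(-clearance_voxels, clearance_voxels + 1):
            for dy in range(-clearance_voxels, clearance_voxels + 1):
                for dz in range(-clearance_voxels, clearance_voxels + 1):
                    if (v[0] + dx, v[1] + dy, v[2] + dz) in occupied:
                        return i  # reflexive reaction: replan or stop here
    return None

occupied = build_occupancy([[0.5, 0.0, 0.4]])
print(check_trajectory([[0.1, 0.0, 0.4], [0.5, 0.02, 0.42]], occupied))  # -> 1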

Collaborative robots are becoming more common on factory floors as well as in regular environments; however, their safety is still not a fully solved issue. Collision detection does not always perform as expected and collision avoidance is an active research area. It works well for fixed robot-camera setups, but if they are shifted around, the Eye-to-Hand calibration becomes invalid, making it difficult to accurately run many of the existing algorithms. We approach the problem by presenting a stand-alone system capable of detecting...

10.1109/icra.2019.8794077 article EN 2019 International Conference on Robotics and Automation (ICRA) 2019-05-01
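
Once the robot and nearby obstacles are both detected in the camera data, a simple avoidance behaviour is to scale the robot speed with the minimum robot-to-obstacle distance. The sketch below illustrates that idea with assumed 3D points and thresholds; it is not the system described in the paper.

import numpy as np

def speed_scale(robot_points, obstacle_points, stop_dist=0.2, slow_dist=0.6):
    """Return a velocity scaling factor in [0, 1] based on the minimum
    distance between detected robot points and detected obstacle points."""
    robot = np.asarray(robot_points)[:, None, :]          # (R, 1, 3)
    obstacles = np.asarray(obstacle_points)[None, :, :]   # (1, O, 3)
    d_min = np.min(np.linalg.norm(robot - obstacles, axis=-1))
    if d_min <= stop_dist:
        return 0.0                                        # protective stop
    if d_min >= slow_dist:
        return 1.0                                        # full speed
    return (d_min - stop_dist) / (slow_dist - stop_dist)  # linear slow-down

print(speed_scale([[0.0, 0.0, 0.5]], [[0.3, 0.0, 0.5]]))  # -> 0.25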

Efficient pedestrian detection is a key aspect of many intelligent vehicles. In this context, vision-based detection has increased in popularity. Algorithms proposed in the literature often consider that the camera is either mobile (on board the vehicle) or static (mounted on the infrastructure). In contrast, our approach uses information from static and mobile cameras jointly. Assuming that the vehicle (on which the mobile camera is mounted) contains some sort of localization capability, combining the detections from both cameras yields significantly improved detection rates. These sources are fairly independent, which substantially...

10.1109/tits.2014.2350979 article EN IEEE Transactions on Intelligent Transportation Systems 2014-01-01
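
Because the static and mobile detections are treated as fairly independent sources, their confidences can be fused once detections are associated in a common ground frame (using the vehicle's localization). A minimal sketch with assumed inputs and a noisy-OR fusion rule; the actual association and fusion in the paper may differ.

import math

def fuse_confidences(p_static, p_mobile):
    """Noisy-OR fusion of two roughly independent detection confidences."""
    return 1.0 - (1.0 - p_static) * (1.0 - p_mobile)

def associate(det_static, det_mobile, max_dist=1.0):
    """Greedily pair detections (x, y, confidence) expressed in a common
    ground frame and return fused confidences for the static detections."""
    fused = []
    for xs, ys, ps in det_static:
        best = None
        for xm, ym, pm in det_mobile:
            d = math.hypot(xs - xm, ys - ym)
            if d <= max_dist and (best is None or d < best[0]):
                best = (d, pm)
        fused.append(fuse_confidences(ps, best[1]) if best else ps)
    return fused

print(associate([(2.0, 3.0, 0.6)], [(2.3, 3.1, 0.7)]))  # roughly [0.88]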

A significant problem of using deep learning techniques is the limited amount of data available for training. There are some datasets for popular problems like item recognition and classification or self-driving cars; however, data is very scarce in the industrial robotics field. In previous work, we trained a multi-objective Convolutional Neural Network (CNN) to identify the robot body in the image and estimate the 3D positions of the joints from just a 2D image, but it was limited to the range of robots produced by Universal Robots (UR). In this work, we extend our method...

10.1109/iisr.2018.8535937 preprint EN 2018-08-01
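
In the simplest reading, extending the UR-trained multi-objective CNN to a new robot type is a transfer learning step: keep the shared backbone and fine-tune only the task heads on the smaller new-robot dataset. The sketch below shows that general pattern in PyTorch with an assumed stand-in model; it is not the network or training procedure from the paper.

import torch
import torch.nn as nn

# A small stand-in model: shared backbone plus robot-specific output heads.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
heads = nn.ModuleDict({
    "robot_type": nn.Linear(64, 4),     # assumed number of robot types
    "joints_2d": nn.Linear(64, 6 * 2),  # assumed 6 joints, (x, y) each
})

# Transfer step: freeze the backbone learned on the original (UR) data
# and fine-tune only the heads on the small dataset of the new robot.
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(heads.parameters(), lr=1e-4)

def training_step(images, type_labels, joint_targets):
    features = backbone(images)
    loss = (nn.functional.cross_entropy(heads["robot_type"](features), type_labels)
            + nn.functional.mse_loss(heads["joints_2d"](features), joint_targets))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the shapes involved.
print(training_step(torch.randn(2, 3, 240, 320),
                    torch.tensor([0, 1]),
                    torch.randn(2, 12)))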

Many works in collaborative robotics and human-robot interaction focus on identifying and predicting human behaviour while considering the information about the robot itself as given. This can be the case when the sensors are calibrated in relation to each other, but often a reconfiguration of the system is not possible, or extra manual work is required. We present a deep learning based approach to remove the constraint of needing the vision sensor and the robot to be fixed relative to each other. The network learns visual cues of the robot body and is able to localise it, as well as estimate...

10.1109/aim.2018.8452236 article EN 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) 2018-07-01
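
Once the robot's joints are localised in the 2D image, the camera pose relative to the robot base can be recovered on the fly, which is what makes a fixed sensor placement unnecessary. The sketch below uses OpenCV's PnP solver with the robot's known 3D joint positions (e.g. from forward kinematics); the function name and inputs are assumptions, not the paper's method.

import cv2
import numpy as np

def camera_from_robot(joints_3d, joints_2d, K, dist=None):
    """Recover the transform from the robot-base frame to the camera frame
    from detected 2D joint positions and matching 3D joint positions."""
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(joints_3d, dtype=np.float64),
        np.asarray(joints_2d, dtype=np.float64),
        K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# Synthetic check: project known joints with a known pose, then recover it.
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
joints_3d = np.array([[0, 0, 0], [0.1, 0, 0.2], [0.3, 0.1, 0.4],
                      [0.2, 0.3, 0.1], [0.4, 0.2, 0.5], [0.1, 0.4, 0.3]])
rvec_true, tvec_true = np.array([0.1, -0.2, 0.05]), np.array([0.0, 0.0, 1.5])
joints_2d, _ = cv2.projectPoints(joints_3d, rvec_true, tvec_true, K, np.zeros(5))
print(camera_from_robot(joints_3d, joints_2d.reshape(-1, 2), K))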

Collaborative robots are becoming more common on factory floors as well as in regular environments; however, their safety is still not a fully solved issue. Collision detection does not always perform as expected and collision avoidance is an active research area. It works well for fixed robot-camera setups, but if they are shifted around, the Eye-to-Hand calibration becomes invalid, making it difficult to accurately run many of the existing algorithms. We approach the problem by presenting a stand-alone system capable of detecting...

10.48550/arxiv.1902.05718 preprint EN other-oa arXiv (Cornell University) 2019-01-01