Ashley Feniello

ORCID: 0000-0003-4975-0462
Research Areas
  • Virtual Reality Applications and Impacts
  • Robotics and Sensor-Based Localization
  • Augmented Reality Applications
  • Reinforcement Learning in Robotics
  • Multimodal Machine Learning Applications
  • Context-Aware Activity Recognition Systems
  • Indoor and Outdoor Localization Technologies
  • Time Series Analysis and Forecasting
  • Multimedia Communication and Technology
  • Human Pose and Action Recognition
  • Visual Attention and Saliency Detection
  • 3D Surveying and Cultural Heritage
  • Semantic Web and Ontologies
  • AI-based Problem Solving and Planning
  • Scientific Computing and Data Management
  • Data Visualization and Analytics
  • AI in Service Interactions
  • Social Robot Interaction and HRI
  • Robot Manipulation and Learning

Microsoft Research (United Kingdom)
2022-2023

Microsoft (United States)
2014-2022

Building an interactive AI assistant that can perceive, reason, and collaborate with humans in the real world has been a long-standing pursuit of the AI community. This work is part of a broader research effort to develop intelligent agents that can interactively guide humans through performing tasks in the physical world. As a first step in this direction, we introduce HoloAssist, a large-scale egocentric human interaction dataset, where two people collaboratively complete physical manipulation tasks. The task performer executes the task while...

10.1109/iccv51070.2023.01854 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01

We introduce Platform for Situated Intelligence, an open-source framework created to support the rapid development and study of multimodal, integrative-AI systems. The framework provides infrastructure for sensing, fusing, and making inferences from temporal streams of data across different modalities, a set of tools that enable visualization and debugging, and an ecosystem of components that encapsulate a variety of perception and processing technologies. These assets jointly provide the means for rapidly constructing and refining such systems, while...

10.48550/arxiv.2103.15975 preprint EN other-oa arXiv (Cornell University) 2021-01-01
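
Platform for Situated Intelligence itself is a C#/.NET framework, so the sketch below is a hypothetical Python analogue rather than the actual \psi API. It illustrates the core idea the abstract describes: fusing temporal streams from different modalities by pairing each message with the nearest-in-time message from another stream. The Message type and join_nearest function are illustrative names, not part of the framework.

```python
import bisect
from dataclasses import dataclass

@dataclass
class Message:
    """A single stream message: a payload stamped with an originating time."""
    time: float
    data: object

def join_nearest(stream_a, stream_b, tolerance=0.05):
    """Pair each message in stream_a with the closest-in-time message from
    stream_b, dropping pairs farther apart than `tolerance` seconds.
    Both streams are lists of Messages sorted by time."""
    times_b = [m.time for m in stream_b]
    for msg in stream_a:
        i = bisect.bisect_left(times_b, msg.time)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(times_b[k] - msg.time))
        if abs(times_b[j] - msg.time) <= tolerance:
            yield msg.time, msg.data, stream_b[j].data

# Fuse a 30 Hz video stream with a 100 Hz audio-feature stream.
video = [Message(t / 30.0, f"frame{t}") for t in range(90)]
audio = [Message(t / 100.0, f"energy{t}") for t in range(300)]
fused = list(join_nearest(video, audio))  # (time, frame, feature) triples
```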

Practical mapping and navigation solutions for large indoor environments continue to rely on relatively expensive range scanners, because of their accuracy, range, and field of view. The Microsoft Kinect, on the other hand, is inexpensive, easy to use, and has high resolution, but suffers from noise, a shorter range, and a limiting field of view. We present a system that uses the Kinect sensor as the sole source of range data and achieves performance comparable to state-of-the-art LIDAR-based systems. We show how we circumvent the main limitations of the sensor to generate usable 2D maps of large spaces and enable...

10.1109/icra.2015.7139225 article EN 2015 IEEE International Conference on Robotics and Automation (ICRA) 2015-05-01
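
The abstract above does not spell out the mapping pipeline, so the following is only a rough, hypothetical sketch of one ingredient of 2D mapping from range data: marking occupied cells in an occupancy grid from a single scan taken at a known sensor pose. The name update_occupancy and the parameter values are invented for illustration.

```python
import math
import numpy as np

def update_occupancy(grid, pose, ranges, angles, resolution=0.05, max_range=4.0):
    """Mark cells hit by a range scan as occupied (1) in a 2D grid.

    grid:   2D numpy array of cell states (0 = free/unknown, 1 = occupied)
    pose:   (x, y, theta) of the sensor in world coordinates (meters, radians)
    ranges: measured distance per beam; angles: beam angles relative to the sensor
    A full mapper (like the LIDAR-based SLAM systems the paper compares against)
    would also trace free space along each beam and accumulate log-odds evidence.
    """
    x, y, theta = pose
    for r, a in zip(ranges, angles):
        if not (0.0 < r < max_range):   # depth sensors are only reliable in a band
            continue
        wx = x + r * math.cos(theta + a)
        wy = y + r * math.sin(theta + a)
        i, j = int(wy / resolution), int(wx / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

grid = np.zeros((200, 200), dtype=np.uint8)      # 10 m x 10 m at 5 cm cells
angles = np.linspace(-0.5, 0.5, 57)              # ~57 degree horizontal field of view
update_occupancy(grid, pose=(5.0, 5.0, 0.0), ranges=np.full(57, 2.0), angles=angles)
```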

We address the problem of synthesizing human-readable computer programs for robotic object repositioning tasks based on human demonstrations. A stack-based domain-specific language (DSL) is introduced for these tasks, and a learning algorithm is proposed to synthesize a program in this DSL from demonstrations. Once a synthesized program has been learned, it can be rapidly verified and refined in a simulator via further demonstrations if necessary, and then finally executed on an actual robot to accomplish the corresponding learned task in the physical world. By...

10.1109/iros.2014.6943189 article EN 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2014-09-01
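
The paper's DSL grammar is not given in this excerpt; the toy interpreter below is a hypothetical illustration of what a stack-based DSL for object repositioning can look like: a program is a flat list of operations that push object references and target poses, then consume them to emit pick-and-place actions.

```python
def run_program(program, scene):
    """Interpret a tiny, hypothetical stack-based repositioning DSL.

    program: list of (op, arg) tuples; scene: dict of object name -> (x, y) pose.
    Returns the pick-and-place actions a robot would execute.
    """
    stack, actions = [], []
    for op, arg in program:
        if op == "PUSH_OBJECT":          # push an object reference onto the stack
            stack.append(arg)
        elif op == "PUSH_POSE":          # push a target pose onto the stack
            stack.append(arg)
        elif op == "MOVE":               # pop pose and object, emit a pick-and-place
            pose, obj = stack.pop(), stack.pop()
            actions.append(("pick", obj, scene[obj]))
            actions.append(("place", obj, pose))
            scene[obj] = pose            # track the object's new location
    return actions

# A program that might be synthesized from a demonstration of moving one block:
program = [("PUSH_OBJECT", "red_block"), ("PUSH_POSE", (0.2, 0.4)), ("MOVE", None)]
print(run_program(program, {"red_block": (0.0, 0.0)}))
```

A flat, stack-based representation like this is easy to both synthesize from examples and read back as a human-checkable program, which is the property the abstract emphasizes.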

We demonstrate an open, extensible framework for enabling faster development and study of physically situated interactive systems. The framework provides a programming model for parallel coordinated computation centered on temporal streams of data, a set of tools for data visualization and processing, and an open ecosystem of components. The demonstration showcases an interaction toolkit of components for building systems that interact with people via natural language in the physical world.

10.1109/hri.2019.8673067 article EN 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2019-03-01

Both industry and academic interest in mixed reality has skyrocketed in recent years. New headset devices for both virtual and augmented reality are increasingly available and affordable, and new APIs, tools, and frameworks enable developers and researchers to more easily create applications. While many of these tools aim to make it easier to interact with content rendered by the headset, these devices are interesting not just from an output perspective, but also from an input perspective: they contain powerful multimodal sensors that provide unique opportunities to drive...

10.1109/vrw55335.2022.00018 article EN 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) 2022-03-01

We introduce a mixed-reality, interactive approach for continually learning to recognize an open-ended set of objects in a user's surrounding environment. The proposed approach leverages the multimodal sensing, interaction, and rendering affordances of a mixed-reality headset, and enables users to label nearby objects via speech, gaze, and gestures. Image views of each labeled object are automatically captured from varying viewpoints over time, as the user goes about their everyday tasks. The labels provided by the user can be propagated forward...

10.1145/3536221.3556567 article EN 2022-11-04
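
As a rough sketch of the learning loop the abstract describes (a user labels an object once, views accumulate over time, and labels propagate to new views), here is a hypothetical nearest-neighbor learner over precomputed image features; the actual system's recognition models are not described in this excerpt, and ContinualObjectLearner is an invented name.

```python
import numpy as np

class ContinualObjectLearner:
    """Toy open-ended object recognizer: stores feature embeddings of labeled
    views and classifies new views by nearest stored neighbor."""

    def __init__(self):
        self.features, self.labels = [], []

    def add_labeled_view(self, feature, label):
        # Called when the user labels an object (via speech/gaze/gesture in the
        # real system) or when a new view of a tracked object is auto-captured.
        self.features.append(np.asarray(feature, dtype=float))
        self.labels.append(label)

    def recognize(self, feature, threshold=0.5):
        # Return the label of the closest stored view, or None if nothing is
        # close enough (the object set is open-ended, so "unknown" is valid).
        if not self.features:
            return None
        feature = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(f - feature) for f in self.features]
        best = int(np.argmin(dists))
        return self.labels[best] if dists[best] < threshold else None

learner = ContinualObjectLearner()
learner.add_labeled_view([0.90, 0.10], "mug")    # user says "this is my mug"
learner.add_labeled_view([0.88, 0.12], "mug")    # auto-captured view, label propagated
print(learner.recognize([0.91, 0.09]))           # -> "mug"
```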

Building an interactive AI assistant that can perceive, reason, and collaborate with humans in the real world has been a long-standing pursuit of the AI community. This work is part of a broader research effort to develop intelligent agents that can interactively guide humans through performing tasks in the physical world. As a first step in this direction, we introduce HoloAssist, a large-scale egocentric human interaction dataset, where two people collaboratively complete physical manipulation tasks. The task performer executes the task while...

10.48550/arxiv.2309.17024 preprint EN cc-by arXiv (Cornell University) 2023-01-01