Alberto Uriarte

ORCID: 0000-0002-5302-7624
Research Areas
  • Artificial Intelligence in Games
  • Digital Games and Media
  • Reinforcement Learning in Robotics
  • AI-based Problem Solving and Planning
  • Sports Analytics and Performance
  • Robotic Path Planning Algorithms
  • Constraint Satisfaction and Optimization
  • Educational Games and Gamification
  • Human Motion and Animation
  • Evacuation and Crowd Dynamics
  • Video Analysis and Summarization
  • Data Management and Algorithms
  • Optimization and Search Problems
  • Social Sciences and Policies
  • Gambling Behavior and Treatments
  • Logic, Reasoning, and Knowledge
  • Multi-Agent Systems and Negotiation
  • Aging, Health, and Disability
  • Computational Geometry and Mesh Generation
  • Labor Law and Work Dynamics

Drexel University
2013-2024

Laboratoire d'Informatique de Paris-Nord
2021

This paper presents an overview of the existing work on AI for real-time strategy (RTS) games. Specifically, we focus on the game StarCraft, which has emerged in the past few years as a unified test bed for this research. We describe the specific challenges posed by RTS games and the solutions that have been explored to address them. Additionally, we also present a summary of the results of recent StarCraft AI competitions, describing the architectures used by the participants. Finally, we conclude with a discussion emphasizing the problems...

10.1109/tciaig.2013.2286295 article EN IEEE Transactions on Computational Intelligence and AI in Games 2013-10-18

Influence Maps have been successfully used in controlling the navigation of multiple units. In this paper, we apply this idea to the problem of simulating a kiting behavior (also known as "attack and flee") in the context of real-time strategy (RTS) games. We present our approach and evaluate it in the popular RTS game StarCraft, where we analyze the benefits it brings to a StarCraft playing bot.

10.1609/aiide.v8i3.12544 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-30
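
The influence-map idea behind kiting can be sketched in a few lines: every enemy projects a threat value that decays with distance, and the fleeing unit steps to the adjacent cell with the lowest accumulated threat. This is a minimal illustration under assumed details (the decay function and grid encoding are not the paper's), not the bot's actual implementation.

```python
# Minimal influence-map sketch for kiting. The linear decay over
# Chebyshev distance is an illustrative assumption.

def threat_map(width, height, enemies):
    """enemies: list of (x, y, power, radius). Sums each enemy's influence."""
    grid = [[0.0] * width for _ in range(height)]
    for (ex, ey, power, radius) in enemies:
        for y in range(height):
            for x in range(width):
                d = max(abs(x - ex), abs(y - ey))  # Chebyshev distance
                if d <= radius:
                    grid[y][x] += power * (1.0 - d / (radius + 1))
    return grid

def flee_step(grid, x, y):
    """Kiting: move to the neighboring cell (or stay) with the lowest threat."""
    h, w = len(grid), len(grid[0])
    candidates = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if 0 <= x + dx < w and 0 <= y + dy < h]
    return min(candidates, key=lambda c: grid[c[1]][c[0]])
```

A unit standing inside an enemy's influence radius will pick a neighbor with strictly lower threat, i.e. step away from the attacker while its own weapon cools down.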

From an AI point of view, Real-Time Strategy (RTS) games are hard because they have enormous state spaces, and are real-time and partially observable. In this paper, we present an approach to deploying game-tree search in RTS games by using game state abstraction. We propose a high-level abstract representation of the game state that significantly reduces the branching factor when used for game-tree search algorithms. Using this representation, we evaluate versions of alpha-beta search and Monte Carlo Tree Search (MCTS). We present experiments in the context of StarCraft showing...

10.1609/aiide.v10i1.12706 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-29
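
The branching-factor argument can be made concrete with a toy calculation: if every unit chooses independently, joint actions grow exponentially in the number of units, whereas grouping units that share some abstraction key makes the exponent the number of groups. The grouping criterion below, (region, unit type), is an assumption in the spirit of the paper, not its exact abstraction.

```python
# Illustrative sketch of why state abstraction shrinks the branching factor.

from collections import Counter

def concrete_branching(units, actions_per_unit):
    """Each unit chooses independently: |A|^n joint actions."""
    return actions_per_unit ** len(units)

def abstract_branching(units, actions_per_group):
    """Units sharing (region, type) are merged into one group that acts
    as a whole, so the exponent drops to the number of groups."""
    groups = Counter((u["region"], u["type"]) for u in units)
    return actions_per_group ** len(groups)
```

Twelve identical units split over two regions give 3^12 = 531441 joint actions concretely, but only 3^2 = 9 under this grouping.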

Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a model is not readily available. In this paper we address the problem of automatically learning forward models (more specifically, combat models) for two-player attrition games. We report experiments comparing several approaches to learning combat models from replay data, using StarCraft, a Real-Time Strategy (RTS) game, as our application domain....

10.1609/aiide.v11i1.12793 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-24
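
To give a flavor of learning a combat model from outcome data, here is a deliberately tiny sketch: assign each unit type a scalar "combat value" learned from labeled battle outcomes with a perceptron-style update, then predict the winner by comparing army scores. This toy model and its update rule are assumptions for illustration; the paper's combat models are considerably richer.

```python
# Toy learned combat model: per-unit-type weights fit from battle outcomes.

def featurize(army_a, army_b, types):
    # Feature = count difference per unit type between the two armies.
    return [army_a.get(t, 0) - army_b.get(t, 0) for t in types]

def train(battles, types, epochs=50, lr=0.1):
    """battles: list of (army_a, army_b, a_wins) with armies as {type: count}."""
    w = [0.0] * len(types)
    for _ in range(epochs):
        for army_a, army_b, a_wins in battles:
            x = featurize(army_a, army_b, types)
            score = sum(wi * xi for wi, xi in zip(w, x))
            if (score > 0) != a_wins:  # misclassified: nudge the weights
                sign = 1.0 if a_wins else -1.0
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
    return w

def predict(w, army_a, army_b, types):
    x = featurize(army_a, army_b, types)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0  # True: army A wins
```

Even this crude model can recover non-transitive strengths from data, e.g. that one zealot beats one marine but loses to three.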

Applying game-tree search techniques to RTS games poses a significant challenge, given the large branching factors involved. This paper studies an approach to incorporating knowledge learned offline from game replays to guide the search process. Specifically, we propose to learn Naive Bayesian models that predict the probability of action execution in different game states, and use them to inform the search process of Monte Carlo Tree Search. We evaluate the effect of incorporating these models into several Multi-armed Bandit policies for MCTS in the context...

10.1609/aiide.v12i1.12852 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-25
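
One simple way a learned action-probability model can inform a bandit policy is to bias exploration toward actions the model considers likely, instead of exploring uniformly. The sketch below is an assumed epsilon-greedy variant for illustration; the paper evaluates several concrete informed policies, and this is not claimed to be one of them.

```python
# Sketch of a prior-informed multi-armed bandit policy: explore by
# sampling arms proportionally to a learned prior P(action | state),
# exploit by taking the best empirical mean reward.

import random

def informed_epsilon_greedy(means, priors, epsilon=0.25, rng=random):
    if rng.random() < epsilon:
        # Explore: roulette-wheel selection weighted by the prior model.
        r = rng.random() * sum(priors)
        acc = 0.0
        for i, p in enumerate(priors):
            acc += p
            if r <= acc:
                return i
        return len(priors) - 1
    # Exploit: arm with the best empirical mean reward.
    return max(range(len(means)), key=lambda i: means[i])
```

Inside MCTS, such a policy would replace the uniform choice in the tree policy, so rollouts are spent on moves that resemble what human replays suggest.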

From an AI point of view, Real-Time Strategy (RTS) games are hard because they have enormous state spaces, and are real-time and partially observable. In this paper, we explore an approach to deploying game-tree search in RTS games by using game state abstraction, and study the effect of different abstractions over the game state. Different abstractions capture different parts of the state, and result in different branching factors when used for game-tree search algorithms. We evaluate these representations with Monte Carlo Tree Search in the context of StarCraft.

10.1609/aiide.v10i2.12734 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-29

Game-tree search algorithms, such as Monte Carlo tree search, require access to a forward model (or "simulator") of the game at hand. However, in some games such a model is not readily available. This paper presents three models for two-player attrition games, which we call "combat models," and shows how they can be used to simulate combat in real-time strategy games. We also show how these models can be learned from replay data. We use STARCRAFT as our application domain, and report experiments comparing the models' predictive accuracy and their impact when...

10.1109/tciaig.2017.2669895 article EN IEEE Transactions on Games 2017-02-15

This paper presents GHOST, a combinatorial optimization framework that real-time strategy (RTS) AI developers can use to model and solve any problem encoded as a constraint satisfaction/optimization problem (CSP/COP). We show a way to model three different problems as CSP/COP, using instances from the RTS game StarCraft as test beds. Each problem belongs to a specific level of abstraction (the target selection problem for reactive control, the wall-in problem for tactics, and the build order planning problem). In our experiments, GHOST shows good results...

10.1109/tciaig.2016.2573199 article EN IEEE Transactions on Computational Intelligence and AI in Games 2016-05-26
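
The local-search core of a CSP framework can be sketched generically: keep a complete assignment, and repeatedly repair one variable by moving it to the value that minimizes the number of violated constraints (the classic min-conflicts scheme). This is purely illustrative; GHOST's actual search and its RTS-specific objectives are more sophisticated.

```python
# GHOST-style local search in miniature: min-conflicts over a CSP given
# as per-variable domains and a conflict-counting function.

import random

def min_conflicts(domains, conflicts, max_steps=1000, rng=random):
    # Random complete initial assignment: one value per variable.
    assign = [rng.choice(d) for d in domains]
    for _ in range(max_steps):
        if conflicts(assign) == 0:
            return assign  # all constraints satisfied
        var = rng.randrange(len(domains))  # pick a variable to repair
        # Move it to the value that minimizes the total conflict count.
        assign[var] = min(domains[var],
                          key=lambda v: conflicts(assign[:var] + [v] + assign[var + 1:]))
    return None  # no solution found within the step budget
```

For an anytime RTS setting, the appeal of this scheme is that the current assignment is always complete, so the search can be cut off at any frame and still return a usable (if imperfect) configuration.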

Designing a well-balanced map for a real-time strategy (RTS) game can be time consuming. This paper presents an algorithm, called PSMAGE, for generating maps for the popular RTS game StarCraft. Our approach uses Voronoi diagrams to generate an initial layout, and then assigns different properties to each of the regions in the diagram. Additionally, PSMAGE includes a collection of evaluation metrics, aimed at measuring how balanced a map is.

10.1109/cig.2013.6633644 article EN 2013-08-01
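
The Voronoi-layout idea can be illustrated in miniature: scatter seed points, assign every tile to its nearest seed to form regions, and mirror the seeds around the map center so both players get symmetric terrain. These two helpers are a sketch under assumed details; PSMAGE's actual pipeline, region properties, and balance metrics are richer.

```python
# Tiny sketch of a Voronoi-based map layout.

import random

def voronoi_regions(width, height, seeds):
    """Assign every tile to its nearest seed (squared Euclidean distance)."""
    def nearest(x, y):
        return min(range(len(seeds)),
                   key=lambda i: (x - seeds[i][0]) ** 2 + (y - seeds[i][1]) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

def mirrored_seeds(width, height, n, rng=random):
    """Point symmetry around the map center keeps start positions fair."""
    half = [(rng.randrange(width), rng.randrange(height)) for _ in range(n)]
    return half + [(width - 1 - x, height - 1 - y) for (x, y) in half]
```

Each resulting region can then be tagged with gameplay properties (walkable, resource patch, start location, etc.) before rendering actual tiles.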

Real-Time Strategy (RTS) games pose a big challenge for AI due to their large branching factor and real-time nature. This challenge is even bigger if we consider partially observable RTS games, due to the fog-of-war. This paper focuses on extending Monte Carlo Tree Search (MCTS) algorithms to such settings. Specifically, we investigate sampling a single believe state consistent with a perfect memory of all past observations in the current game, and using it to perform MCTS. We evaluate the performance of this approach in the μRTS game simulator, showing that...

10.1109/cig.2017.8080449 article EN 2017-08-01
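
A minimal version of sampling one state consistent with past observations: trust everything currently visible, keep a remembered enemy unit at its last observed position if that tile is now fogged, and drop it if the tile is visible but the unit is gone (it must have moved). All names here are illustrative assumptions, not the paper's implementation, which must also handle units never observed at all.

```python
# Sketch of "single believe state" sampling under fog-of-war.

def sample_believe_state(visible_units, memory, visible_tiles):
    """visible_units: unit_id -> (x, y) currently seen.
    memory: unit_id -> last observed (x, y).
    visible_tiles: set of (x, y) tiles not under fog right now."""
    state = dict(visible_units)  # trust current observations verbatim
    for uid, pos in memory.items():
        if uid in state:
            continue  # currently visible: observation wins over memory
        if pos not in visible_tiles:
            state[uid] = pos  # not contradicted: assume it stayed put
        # Last position is visible and the unit is gone: it moved, so we
        # drop it from this sample rather than guess a new location.
    return state
```

The resulting fully determined state can then be handed to a standard (perfect-information) MCTS as if it were the true game state.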

This paper presents a constraint optimization approach to walling in real-time strategy (RTS) games. Walling is a specific type of spatial reasoning, typically employed by expert human players but not currently fully exploited by RTS game AI, consisting of finding configurations of buildings that completely or partially block paths. Our approach is based on local search, specifically designed for the real-time nature of RTS games. We present experiments in the context of StarCraft showing promising results.

10.1609/aiide.v10i1.12704 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-29

This paper presents a new terrain analysis algorithm for RTS games. The proposed algorithm significantly improves the running time of the state of the art via contour tracing, and also offers better chokepoint detection. We demonstrate that our approach (BWTA2) is at least 10 times faster than the commonly used BWTA on a collection of StarCraft maps. Additionally, we show its usefulness in tasks such as pathfinding, and discuss potential applications to strategic decision making tasks.

10.1609/aiide.v12i2.12889 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-25
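
Contour tracing itself is a classical image-processing technique: walk clockwise around an obstacle's boundary, at each step scanning the Moore (8-cell) neighborhood from the backtrack direction until the next solid cell is found. The textbook Moore-neighbor trace below (with a simple stop-at-start criterion) is only a sketch of the underlying idea; BWTA2's actual algorithm and data structures differ.

```python
# Moore-neighbor contour tracing over a boolean grid (True = obstacle).

def moore_trace(grid, start):
    """start must be an obstacle cell whose west neighbor is free,
    e.g. the first obstacle hit by a left-to-right row scan."""
    h, w = len(grid), len(grid[0])
    # Clockwise Moore neighborhood offsets, starting at West.
    offs = [(-1, 0), (-1, -1), (0, -1), (1, -1),
            (1, 0), (1, 1), (0, 1), (-1, 1)]
    def solid(x, y):
        return 0 <= x < w and 0 <= y < h and grid[y][x]
    contour = [start]
    prev = (start[0] - 1, start[1])  # entered from the west
    cur = start
    while True:
        # Resume the clockwise scan just after the backtrack cell.
        i = offs.index((prev[0] - cur[0], prev[1] - cur[1]))
        nxt = None
        for k in range(1, 9):
            ox, oy = offs[(i + k) % 8]
            cand = (cur[0] + ox, cur[1] + oy)
            if solid(*cand):
                nxt = cand
                break
            prev = cand  # last free cell before the hit becomes backtrack
        if nxt is None or nxt == start:
            return contour  # isolated cell, or loop closed
        contour.append(nxt)
        cur = nxt
```

The payoff is that the boundary of a region is visited in O(perimeter) cells rather than flood-filling its whole area, which is where the speedup over area-based analysis comes from. (A more robust implementation would use Jacob's stopping criterion instead of stop-at-start.)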

Smartphone games lack the hardware interfaces afforded by other gaming media, such as controllers for consoles, keyboard and mouse for PCs, or joysticks and buttons on arcade cabinets. As such, many popular smartphone games focus on puzzle mechanics using the touch screen interface, such as Angry Birds [1] or Cut the Rope [2]. In Herbert, we focused instead on skill-based, reactionary gameplay with an intuitive and unique control scheme, where the player moves the character around the world by tilting the device and breaks free from traps by shaking the device. We did this in order...

10.1145/2658537.2662982 article EN 2014-10-19

Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a model is not readily available. This paper presents three models for two-player attrition games, which we call "combat models", and shows how they can be used to simulate combat in RTS games. We also show how these models can be learned from replay data. We use StarCraft as our application domain, and report experiments comparing the models' predictive accuracy and their impact when tactical...

10.48550/arxiv.1605.05305 preprint EN other-oa arXiv (Cornell University) 2016-01-01

The problem of comparing the performance of different Real-Time Strategy (RTS) Intelligent Agents (IA) is non-trivial, and research groups often employ testing methodologies designed to test specific aspects of their agents. However, the lack of a standard process to evaluate and compare methods in the same context makes progress assessment difficult. In order to address this problem, this paper presents a set of benchmark scenarios and metrics aimed at evaluating techniques or agents for the RTS game StarCraft. We used these on a collection...

10.1609/aiide.v11i2.12810 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-24

A significant amount of work exists on handling partial observability for different game genres in the context of tree search. However, most of those techniques do not scale up to RTS games. In this paper we present an experimental evaluation of a recently proposed technique, "single believe state generation," in StarCraft. We evaluate the approach in a StarCraft playing bot and show that the technique is enough to bring performance close to the theoretical optimal case where the bot can observe the whole game state.

10.1609/aiide.v13i2.12960 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-25

Standard game tree search algorithms, such as minimax or Monte Carlo Tree Search, assume the existence of an accurate forward model that simulates the effects of actions in the game. Creating such a model, however, is a challenge in itself. One cause of the complexity of this task is the gap in level of abstraction between the informal specification of a game and its implementation language. To overcome this issue, we propose a technique for creating forward models that relies on the Answer Set Programming paradigm and well-established knowledge representation techniques from...

10.1609/aiide.v11i2.12812 article EN Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 2021-06-24