Daniel Holmes

ORCID: 0000-0002-2776-2609

Research Areas
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Advanced Data Storage Technologies
  • Embedded Systems Design Techniques
  • Distributed systems and fault tolerance
  • Real-Time Systems Scheduling
  • Modular Robots and Swarm Intelligence
  • Advanced Memory and Neural Computing
  • Ferroelectric and Negative Capacitance Devices
  • Cloud Computing and Resource Management
  • Algorithms and Data Compression

Affiliations

Mission College
2024

Intel (United States)
2024

Holmes Community College
2022

University of Edinburgh
2013-2020

El Paso Community College
2016-2020

Barcelona Supercomputing Center
2018

Universitat Politècnica de Catalunya
2018

The data streaming model is an effective way to tackle the challenge of data-intensive applications. As traditional HPC applications generate large volumes of data and more data-intensive applications move to HPC infrastructures, it is necessary to investigate the feasibility of combining the data streaming and message-passing programming models. MPI, the de facto standard for programming on HPC, cannot intuitively express the communication patterns and functional operations required in data streaming models. In this work, we designed and implemented a library called MPIStream atop MPI to allocate data producers and consumers, to stream...

10.1145/2831129.2831131 article EN 2015-11-09
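
A minimal sketch of the producer/consumer streaming pattern such a library manages, written here with plain MPI point-to-point; the tag and end-of-stream sentinel are hypothetical illustration, not the MPIStream API itself.

    /* Hypothetical producer/consumer stream atop plain MPI point-to-point. */
    #include <mpi.h>
    #include <stdio.h>

    #define STREAM_TAG 42
    #define END_OF_STREAM -1

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                 /* producer: emits stream elements */
            for (int i = 0; i < 10; i++)
                MPI_Send(&i, 1, MPI_INT, 1, STREAM_TAG, MPI_COMM_WORLD);
            int eos = END_OF_STREAM;     /* sentinel marks end of stream */
            MPI_Send(&eos, 1, MPI_INT, 1, STREAM_TAG, MPI_COMM_WORLD);
        } else if (rank == 1) {          /* consumer: processes until sentinel */
            int item;
            for (;;) {
                MPI_Recv(&item, 1, MPI_INT, 0, STREAM_TAG, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                if (item == END_OF_STREAM) break;
                printf("consumed %d\n", item);
            }
        }
        MPI_Finalize();
        return 0;
    }

Run with at least two ranks, e.g. mpirun -n 2.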

In this paper we propose an API to pause and resume task execution depending on external events. We leverage this generic API to improve the interoperability between MPI synchronous communication primitives and tasks. When an MPI operation blocks, the running task is paused so that the runtime system can schedule a new task on the core that became idle. Once the MPI operation completes, the paused task is put again in the runtime system's ready queue. We expose our proposal through a new MPI threading level, which we implement using two approaches.

10.1145/3236367.3236382 article EN 2018-09-19
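
A compilable sketch of the polling flavour of this idea, with sched_yield() standing in for the task runtime's pause/resume mechanism (the paper's actual runtime integration is not shown): instead of blocking in MPI_Wait, the "task" repeatedly tests the request and yields the core.

    #include <mpi.h>
    #include <sched.h>

    static void wait_yielding(MPI_Request *req)
    {
        int done = 0;
        while (!done) {
            MPI_Test(req, &done, MPI_STATUS_IGNORE);  /* non-blocking completion check */
            if (!done)
                sched_yield();  /* a real task runtime would pause this task here */
        }
    }

    int main(int argc, char **argv)
    {
        int value = 42, got = 0;
        MPI_Request rreq, sreq;

        MPI_Init(&argc, &argv);
        /* self-send so the example runs on a single rank */
        MPI_Irecv(&got, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &rreq);
        MPI_Isend(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &sreq);
        wait_yielding(&sreq);
        wait_yielding(&rreq);
        MPI_Finalize();
        return 0;
    }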

MPI includes all processes in MPI_COMM_WORLD; this is untenable for reasons of scale, resiliency, and overhead. This paper offers a new approach, extending MPI with a concept called Sessions, which makes two key contributions: tighter integration with the underlying runtime system, and a scalable route to communication groups. This is a fundamental change in how we organise and address MPI processes that removes well-known scalability barriers by no longer requiring the global communicator MPI_COMM_WORLD.

10.1145/2966884.2966915 article EN 2016-09-25
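
For reference, the Sessions concept as it was later standardized in MPI-4.0 (details there may differ from the proposal in this paper): a communicator is bootstrapped from a named process set, with MPI_COMM_WORLD never involved.

    #include <mpi.h>

    int main(void)
    {
        MPI_Session session;
        MPI_Group group;
        MPI_Comm comm;

        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);
        /* derive a group from a named process set provided by the runtime */
        MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
        /* all members call with a matching tag to create the communicator */
        MPI_Comm_create_from_group(group, "org.example.sessions-demo",
                                   MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

        /* ... use comm as usual ... */

        MPI_Comm_free(&comm);
        MPI_Group_free(&group);
        MPI_Session_finalize(&session);
        return 0;
    }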

This paper offers a timely study and proposed clarifications, revisions, and enhancements to the Message Passing Interface's (MPI's) Semantic Terms and Conventions. To enhance MPI, a clearer understanding of the meaning of its key terminology has proven essential and, surprisingly, important concepts remain underspecified, ambiguous in some cases, and inconsistent and/or conflicting despite 26 years of standardization. This work addresses these concerns comprehensively and usefully informs MPI developers, implementors, and those...

10.1145/3343211.3343213 preprint EN 2019-08-23

The recently proposed MPI Sessions extensions to the MPI standard present a new paradigm for how applications use MPI. MPI Sessions has the potential to address several limitations of MPI's current specification: MPI cannot be initialized within an MPI process from different application components without a priori knowledge or coordination; MPI cannot be initialized more than once; and MPI cannot be reinitialized after finalization. MPI Sessions also offers the possibility of flexible ways for individual components to express the capabilities they require, at a finer granularity than is presently possible. At...

10.1109/cluster.2019.8891002 article EN 2019-09-01
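
A minimal sketch of one limitation being lifted: with the MPI-4.0 Sessions interface, two application components can each initialize MPI independently, with no a-priori coordination between them.

    #include <mpi.h>

    static MPI_Session lib_a, lib_b;

    void component_a_init(void) {
        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &lib_a);
    }

    void component_b_init(void) {
        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &lib_b);
    }

    int main(void) {
        component_a_init();   /* neither component knows about the other */
        component_b_init();
        MPI_Session_finalize(&lib_b);
        MPI_Session_finalize(&lib_a);
        return 0;
    }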

Advantages of nonblocking collective communication in MPI have been established over the past quarter century, even predating MPI-1. For regular computations with fixed communication patterns, more optimizations can be revealed through the use of persistence (planned transfers), not currently available in the MPI-3 API except for a limited form of point-to-point persistence (aka half-channels) standardized since MPI-1. This paper covers the design, prototype implementation of LibPNBC (based on LibNBC), and the MPI-4 standardization status of persistent...

10.1145/3127024.3127028 article EN 2017-08-24
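
The planned-transfer usage model, shown with the persistent collective interface eventually standardized in MPI-4.0 (LibPNBC prototyped an earlier form of it): plan the operation once, then start and complete the same plan every iteration.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double in = 1.0, out = 0.0;
        MPI_Request req;

        /* plan the transfer once (MPI-4.0 interface) */
        MPI_Allreduce_init(&in, &out, 1, MPI_DOUBLE, MPI_SUM,
                           MPI_COMM_WORLD, MPI_INFO_NULL, &req);

        for (int iter = 0; iter < 100; iter++) {
            MPI_Start(&req);          /* reuse the plan every iteration */
            /* ... overlap independent computation here ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        }
        MPI_Request_free(&req);       /* release the persistent request */

        MPI_Finalize();
        return 0;
    }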

Partitioned point-to-point communication and persistent collective communication were both recently standardized in MPI-4.0. Each offers performance and scalability advantages over MPI-3.1-based communication when planned transfers are feasible in an MPI application. Their merger into generalized, persistent collective operations with partitions is a logical next step, significant for performance portability. Non-trivial decisions about the syntax and semantics of such operations need to be addressed, including the scope of knowledge of the partitioning choices made by members of a communicator's...

10.1109/exampi54564.2021.00007 article EN 2021-11-01
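
For background, the partitioned point-to-point interface standardized in MPI-4.0; the partitioned collective operations this paper explores are a proposed generalization and are not shown. Run with two ranks.

    #include <mpi.h>

    #define PARTS 4
    #define PER_PART 256

    int main(int argc, char **argv)
    {
        int rank;
        double buf[PARTS * PER_PART];
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Psend_init(buf, PARTS, PER_PART, MPI_DOUBLE, 1, 0,
                           MPI_COMM_WORLD, MPI_INFO_NULL, &req);
            MPI_Start(&req);
            for (int p = 0; p < PARTS; p++) {
                for (int i = 0; i < PER_PART; i++)
                    buf[p * PER_PART + i] = (double)p;  /* fill partition p */
                MPI_Pready(p, req);     /* partition p may now be sent */
            }
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            MPI_Request_free(&req);
        } else if (rank == 1) {
            MPI_Precv_init(buf, PARTS, PER_PART, MPI_DOUBLE, 0, 0,
                           MPI_COMM_WORLD, MPI_INFO_NULL, &req);
            MPI_Start(&req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            MPI_Request_free(&req);
        }

        MPI_Finalize();
        return 0;
    }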

Strong progress is optional in MPI. MPI allows implementations where progress (for example, updating the message-transport state machines or interacting with network devices) is only made during certain MPI procedure calls. Generally speaking, strong progress implies the ability to achieve progress (to transport data through the network from senders to receivers and to exchange protocol messages) without explicit calls by user processes to MPI procedures. For instance, given a send that matches a pre-posted receive on the receiving process, data is moved from the source...

10.1145/3416315.3416318 article EN 2020-09-21
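
A sketch of the practical consequence of weak progress: a nonblocking transfer may only advance inside MPI calls, so overlap loops interleave computation with MPI_Test to drive the progress engine. Under strong progress, the explicit test calls would not be needed for the data to move.

    #include <mpi.h>

    static double sink = 0.0;
    static void compute_chunk(int step) { sink += step * 0.5; }  /* stand-in work */

    static void overlap(MPI_Request *req)
    {
        int done = 0;
        for (int step = 0; !done; step++) {
            compute_chunk(step);                      /* useful computation */
            MPI_Test(req, &done, MPI_STATUS_IGNORE);  /* MPI call drives weak progress */
        }
    }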

This paper presents McMPI, an entirely new MPI library written in C# using only safe managed code, and performance results from low-level benchmarks demonstrating ping-pong latency and bandwidth comparable with MS-MPI and MPICH2.

10.1145/2488551.2488572 article EN 2013-09-11
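
The ping-pong measurement pattern behind such latency figures, written in C here for consistency with the other examples (McMPI itself is C#):

    #include <mpi.h>
    #include <stdio.h>

    #define REPS 1000

    int main(int argc, char **argv)
    {
        int rank;
        char byte = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {          /* ping */
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {   /* pong */
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)  /* one-way latency: half the round-trip, averaged */
            printf("latency: %g us\n", (MPI_Wtime() - t0) / (2.0 * REPS) * 1e6);

        MPI_Finalize();
        return 0;
    }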

A major reason for the success of MPI as the standard for large-scale, distributed-memory programming is the economy and orthogonality of its key concepts. These very design principles suggest leaner and better support for stencil-like, sparse collective communication, while at the same time significantly reducing the number of concrete operation interfaces, extending the functionality that can be supported by high-quality implementations, and provisioning for possible future, much more wide-ranging functionality.

10.1145/3416315.3416319 preprint EN 2020-09-21
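
The existing MPI-3 interface for stencil-like, sparse collective communication that this line of work builds on: a distributed graph topology plus a neighborhood collective, shown for a 1-D periodic stencil.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Comm ring;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int left = (rank - 1 + size) % size, right = (rank + 1) % size;
        int nbrs[2] = { left, right };

        /* each process names its stencil neighbours explicitly */
        MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                       2, nbrs, MPI_UNWEIGHTED,
                                       2, nbrs, MPI_UNWEIGHTED,
                                       MPI_INFO_NULL, 0, &ring);

        double send[2] = { rank, rank }, recv[2];
        /* one sparse collective replaces paired point-to-point halo exchanges */
        MPI_Neighbor_alltoall(send, 1, MPI_DOUBLE, recv, 1, MPI_DOUBLE, ring);

        MPI_Comm_free(&ring);
        MPI_Finalize();
        return 0;
    }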

Composability is one of seven reasons for the long-standing and continuing success of MPI. Extending MPI by composing its operations with user-level operations provides useful integration with the progress engine and completion notification methods of MPI. However, the existing extensibility mechanism in MPI (generalized requests) is not widely utilized and has significant drawbacks. Generalized requests can be generalized via scheduled communication primitives, for example, by utilizing implementation techniques from MPI-3 nonblocking collectives and the forthcoming MPI-4...

10.1109/hipc.2019.00043 article EN 2019-12-01

10.48550/arxiv.1909.11762 preprint EN other-oa arXiv (Cornell University) 2019-01-01
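
The generalized-request mechanism in question, in a deliberately trivial form: a user-level operation surfaces as an ordinary MPI_Request, and completion is signalled with MPI_Grequest_complete (normally from another thread; here from the same one).

    #include <mpi.h>
    #include <stddef.h>

    static int query_fn(void *extra_state, MPI_Status *status)
    {
        /* report a status for the completed user-level operation */
        MPI_Status_set_elements(status, MPI_BYTE, 0);
        MPI_Status_set_cancelled(status, 0);
        status->MPI_SOURCE = MPI_UNDEFINED;
        status->MPI_TAG = MPI_UNDEFINED;
        return MPI_SUCCESS;
    }
    static int free_fn(void *extra_state)              { return MPI_SUCCESS; }
    static int cancel_fn(void *extra_state, int done)  { return MPI_SUCCESS; }

    int main(int argc, char **argv)
    {
        MPI_Request req;
        MPI_Init(&argc, &argv);

        MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
        /* ... user-level work would run here, normally on another thread ... */
        MPI_Grequest_complete(req);          /* mark the operation done */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* completes immediately */

        MPI_Finalize();
        return 0;
    }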