- Simulation Techniques and Applications
- Ancient Mediterranean Archaeology and History
- Archaeology and Historical Studies
- Distributed and Parallel Computing Systems
- Distributed Systems and Fault Tolerance
- Parallel Computing and Optimization Techniques
- Advanced Data Storage Technologies
- Scientific Computing and Data Management
- Software Reliability and Analysis Research
- Software Engineering Research
- Software Testing and Debugging Techniques
- Advanced Database Systems and Queries
- Business Process Modeling and Analysis
- Medieval Architecture and Archaeology
- Archaeological and Historical Studies
- Model-Driven Software Engineering Techniques
- Ancient and Medieval Archaeology Studies
- Advanced Software Engineering Methodologies
- Advanced Queuing Theory Analysis
- Software System Performance and Reliability
- Classical Antiquity Studies
- Archaeological Research and Protection
- Ancient Egypt and Archaeology
- Interconnection Networks and Systems
- Global Maritime and Colonial Histories
Institució Catalana de Recerca i Estudis Avançats
2020-2022
Universitat de Barcelona
2019-2021
Royal Adelaide Hospital
2016
The University of Adelaide
2016
University of Virginia
2004-2015
University of Edinburgh
2013
Northwestern University
2011
Williams & Associates
2011
American University of Beirut
2003-2006
Mitre (United States)
2002
Numerous treatises exist that define the qualities that should be exhibited by a well-written software requirements specification (SRS). In most cases these qualities are vaguely defined. This paper thoroughly explores the concept of quality in an SRS and defines attributes that contribute to quality. Techniques for measuring these attributes are suggested.
Conventional wisdom has it that there are two basic approaches to parallel simulation: conservative (Chandy-Misra) and optimistic (Time Warp). All known protocols are thought to fall into one of these two classes. This dichotomy is false. There exists a spectrum of options that includes both approaches among many alternatives. We describe the design space this spectrum admits, we show how most of the well-known parallel simulation protocols can be derived within it, and we explore the implications of its existence. In particular, we note many as yet unexplored approaches to parallel simulation.
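The spectrum idea can be illustrated with a toy model (this is my own simplification, not any protocol from the paper): treat the degree of optimism as a single tunable window, so that events may only be processed within `gvt + window`. A tiny window behaves like conservative lockstep, and a very large one like unthrottled optimism; intermediate values populate the spectrum.

```python
def rounds_to_complete(event_times, window):
    """Count synchronization (barrier) rounds needed when events may only
    be processed inside the optimism window [gvt, gvt + window).
    window -> 0 approximates conservative lockstep (one event per round);
    a huge window approximates unthrottled optimism (one round total)."""
    pending = sorted(event_times)
    rounds, i = 0, 0
    while i < len(pending):
        gvt = pending[i]  # earliest unprocessed timestamp = global virtual time
        # process every event that falls inside the optimism window
        while i < len(pending) and pending[i] < gvt + window:
            i += 1
        rounds += 1
    return rounds

times = [0, 1, 2, 5, 6, 9]
print(rounds_to_complete(times, 1))    # 6 rounds: lockstep-like
print(rounds_to_complete(times, 100))  # 1 round: fully optimistic
```

Varying `window` between these extremes yields the intermediate points the dichotomy conceals.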
Simulations that run at multiple levels of resolution often encounter consistency problems because of insufficient correlation between the attributes of the same entity at different levels. Inconsistency may occur despite the existence of valid models at each level. Cross-Resolution Modeling (CRM) attempts to build effective multiresolution simulations. The traditional approach to CRM, aggregation-disaggregation, causes chain disaggregation and puts an unacceptable burden on computational resources. We present four fundamental observations that would...
We propose a distributed simulation method which is particularly well suited for the simulation of large synchronous networks. In general, the method has significant potential for alleviating the time and memory constraints often encountered when using conventional simulation techniques. Currently proposed methods suffer performance degradation due to the employment of strategies for preventing deadlock through artificial blocking of processes. We describe a new method that is deadlock-free and blocks only when the physical system being simulated blocks. Furthermore, our method does not suffer from…
We introduce a new class of synchronization protocols for parallel discrete event simulation, those based on near-perfect state information (NPSI). NPSI protocols are adaptive, dynamically controlling the rate at which the processes constituting a simulation proceed, with the goal of completing the simulation efficiently. We show by analysis that this class (which includes elastic time and several others) can both arbitrarily outperform, and be arbitrarily outperformed by, the Time Warp protocol. This mixed result substantiates the promising results we and other protocol designers have...
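The core NPSI idea, throttling a process in proportion to state information about the rest of the simulation, can be sketched as follows. This is an illustrative caricature only; the function name, the choice of GVT lead as the "error potential," and the linear gain are my assumptions, not the paper's protocol.

```python
def npsi_throttle(local_vt, gvt_estimate, gain=0.5):
    """Adaptive NPSI-style throttle (illustrative sketch): delay a logical
    process in proportion to how far it has run ahead of the estimated
    global virtual time. gain = 0 gives unthrottled optimism; a large
    gain approaches conservative lockstep."""
    lead = local_vt - gvt_estimate
    return gain * lead if lead > 0 else 0.0

# A process far ahead of GVT is slowed; one at or behind GVT is not.
print(npsi_throttle(100.0, 40.0))  # 30.0
print(npsi_throttle(35.0, 40.0))   # 0.0
```

The adaptivity lies in `gvt_estimate` being refreshed continuously from (near-perfect) state information, so the delay tracks the simulation's actual progress.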
A large class of data parallel computations is characterized by a sequence of phases, with phase changes occurring unpredictably. Dynamic remapping of the workload to processors may be required to maintain good performance. The remapping problem considered here, in which both the utility of remapping and the future behavior of the workload are uncertain, arises when computations exhibit stable execution requirements during a given phase but change radically between phases. In these situations, an assignment generated for one phase may hinder performance during the next phase. This problem is treated...
Modeling of dealloying has often used a local bond-breaking approach to define the energy barriers that simulate dissolution and surface diffusion. These barriers are tacitly assumed to be independent of the solution chemistry at the metal/solution interface. In this work, an interaction parameter is added to the model that accounts for the species-specific physics of actual atom-water molecule and atom-ion interactions, allowing complex atomistic behavior to be abstracted in the modeling of dissolution and diffusion processes. Variations in electrolyte components...
Synchronization errors in concurrent programs are notoriously difficult to find and correct. Deadlock, partial deadlock, and unsafeness are conditions that constitute such errors. A model of semaphore operations based on multidimensional, solid geometry is presented. While previously reported geometric models were restricted to two-process mutual exclusion problems, the model described here applies to a broader class of synchronization problems. The model is shown to be exact for systems composed of an arbitrary, yet fixed, number of processes,...
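For the two-process case, the geometric view can be made concrete as a progress graph: each axis counts one process's completed operations, each shared binary semaphore induces a forbidden rectangle, and a deadlock is a reachable point that is not the final corner but from which neither process can advance. The sketch below is my own minimal rendering of that standard construction, not the paper's n-process model.

```python
from collections import deque

def holds(prog, sem):
    """Positions (operations completed) at which prog holds binary semaphore sem."""
    p, v = prog.index(('P', sem)), prog.index(('V', sem))
    return range(p + 1, v + 1)

def deadlocks(prog1, prog2):
    """Search the two-process progress graph for a reachable stuck point."""
    sems = {s for _, s in prog1} & {s for _, s in prog2}
    # forbidden region: both processes hold the same semaphore
    bad = {(x, y) for s in sems for x in holds(prog1, s) for y in holds(prog2, s)}
    n1, n2 = len(prog1), len(prog2)
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        moves = ([(x + 1, y)] if x < n1 else []) + ([(x, y + 1)] if y < n2 else [])
        moves = [m for m in moves if m not in bad]
        if not moves and (x, y) != (n1, n2):
            return True  # reachable, not final, no way forward: deadlock
        for m in moves:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return False

# Classic lock-ordering example: opposite acquisition orders can deadlock.
p1 = [('P', 'a'), ('P', 'b'), ('V', 'b'), ('V', 'a')]
p2 = [('P', 'b'), ('P', 'a'), ('V', 'a'), ('V', 'b')]
print(deadlocks(p1, p2))  # True
print(deadlocks(p1, p1))  # False: same acquisition order is safe
```

The "unsafe" region of the paper corresponds to points from which every path leads into a stuck point; the breadth-first search above detects only the stuck points themselves.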
How much effort will be required to compose or reuse simulations? What factors need to be considered? It is generally known that composability and reusability are daunting challenges, both for simulations and, more broadly, for software design as a whole. We have conducted a small case study in order to clarify the role that model context plays in simulation reusability. For a simple problem, computing the position and velocity of a falling body, we found that a reasonable formulation of the solution included a surprising number of implicit constraints....
The objective of this paper is to relate models of multi-tasking in which task times are known or assumed to be equal to models in which they are unknown. We study bounds on completion time and the applicability of optimal deterministic schedules to probabilistic models. Level algorithms are shown to be optimal for forest precedence graphs with independent, identically distributed exponential and Erlang task-time random variables. A time-sharing system simulation shows that such schedules could reduce response time under insensitive scheduling disciplines.
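A level algorithm of the kind the abstract refers to can be sketched for the deterministic, unit-time case (this is the classic highest-level-first scheme for in-forest precedence, in the style of Hu's algorithm; the encoding of the forest and the function name are mine):

```python
def hu_schedule(succ, m):
    """Highest-level-first scheduling of unit-time tasks on m machines.
    succ maps each task to its unique successor in the in-forest (None
    for a root). Returns the makespan, which is optimal for this case."""
    tasks = set(succ)
    level = {}

    def lv(t):  # level = number of tasks on the path to the root, inclusive
        if t not in level:
            level[t] = 1 if succ[t] is None else 1 + lv(succ[t])
        return level[t]

    for t in tasks:
        lv(t)
    preds = {t: 0 for t in tasks}
    for t in tasks:
        if succ[t] is not None:
            preds[succ[t]] += 1
    ready = [t for t in tasks if preds[t] == 0]  # leaves are ready first
    done, time = 0, 0
    while done < len(tasks):
        ready.sort(key=lambda t: -level[t])      # prefer the deepest tasks
        batch, ready = ready[:m], ready[m:]
        for t in batch:
            done += 1
            s = succ[t]
            if s is not None:
                preds[s] -= 1
                if preds[s] == 0:
                    ready.append(s)
        time += 1
    return time

# Complete binary in-tree of 7 tasks: leaves a-d, middle e,f, root g.
succ = {'a': 'e', 'b': 'e', 'c': 'f', 'd': 'f', 'e': 'g', 'f': 'g', 'g': None}
print(hu_schedule(succ, 2))  # 4: matches the ceil(7/2) lower bound
```

The paper's probabilistic result is that the same level-first priority remains optimal in expectation when the unit times are replaced by i.i.d. exponential or Erlang task times.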
Traditional debugging and fault localization methods have addressed a broad range of sources of software failures. While these methods are effective in general, they are not tailored to an important class of software, including simulations and computational models, that employs floating-point computations and continuous stochastic distributions to represent, or support evaluation of, an underlying model. To address this shortcoming, we introduce elastic predicates, a novel approach to predicate-based statistical debugging. Elastic...
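The intuition behind an elastic predicate, as opposed to a rigid one like `x > 0`, can be sketched as a threshold fitted to the values a variable actually takes in passing runs. The mean-plus-k-sigma construction below is my illustration of that intuition, not the paper's exact predicate family.

```python
from statistics import mean, stdev

def elastic_predicate(passing_values, k=2.0):
    """Build an 'elastic' predicate from the values a floating-point
    variable took in passing runs: it fires when a new value falls
    outside mean +/- k * stdev, adapting the threshold to the data
    rather than fixing it a priori."""
    mu, sigma = mean(passing_values), stdev(passing_values)
    return lambda v: abs(v - mu) > k * sigma

passing = [0.98, 1.01, 1.00, 0.99, 1.02]
p = elastic_predicate(passing)
print(p(1.00), p(3.5))  # False True
```

A rigid predicate such as `v > 1.0` would fire on roughly half of these healthy values; the elastic version fires only on genuinely anomalous ones, which is what makes it suited to stochastic, floating-point simulations.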
Adaptive approaches to synchronization in parallel discrete event simulations hold significant potential for performance improvement. We contend that an adaptive approach based on low-cost, near-perfect system state information is the most likely to yield a consistently efficient synchronization algorithm. We suggest a framework by which NPSI (near-perfect state information) protocols can be designed, and we describe the first such protocol, elastic time. We present results that show it to be very promising. In particular, they show it has the capacity...
Statistical debuggers use data collected during test case execution to automatically identify the location of faults within software. Recent work has applied causal inference to eliminate or reduce control and flow dependence confounding bias in statement-level statistical debuggers. The result is improved effectiveness. This is encouraging, but it motivates two novel questions: (1) how can confounding bias be reduced in predicate-level statistical debuggers, and (2) what other biases can be eliminated or reduced? Here we address both questions by providing a model...
Predictions from simulations have entered the mainstream of public policy and decision-making practices. Unfortunately, methods for gaining insight into faulty simulation outputs have not kept pace. Ideally, an insight-gathering method would automatically identify the cause of a faulty output and explain to the simulation developer how to correct it. In the field of software engineering, this challenge has been addressed for general-purpose software through statistical debuggers. We present two research contributions, elastic predicates and many-valued labeling…
This article presents a synthesis of the broad range of material culture of sixth- and seventh-century Hispania (the Iberian Peninsula and the Balearic Islands), a period spanning the Visigothic occupation and expansion as well as the Byzantine reconquest. It presents vestimenta, armamenta, agricultural tools, liturgical objects, textiles, ecclesiastical furnishings and decoration, and local and imported pottery (tableware, amphorae, cooking ware). Also addressed are the question of long-distance trade,...
This article studies an analytic model of parallel discrete-event simulation, comparing the YAWNS conservative synchronization protocol with Bounded Time Warp. The assumed simulation problem is a heavily loaded queuing network where the probability of an idle server is close to zero. We model workload and job routing in standard ways, then develop and validate methods for computing approximate performance measures as a function of the degree of optimism allowed and the overhead costs of state-saving, rollback, and barrier...
Multi-resolution representation of simulated entities is considered essential for a growing portion of distributed simulations. Heretofore, modelers have represented entities at just one level of resolution, or have maintained concurrent representations in an inconsistent manner. We address the question of the cost of maintaining multiple, consistent representations. We present a brief overview of our concepts of the Multiple Resolution Entity (MRE) and the Attribute Dependency Graph (ADG), both originally described elsewhere, then compare simulation...
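The dependency-propagation idea behind an attribute dependency graph can be sketched in a few lines. This is a toy of my own construction, not the paper's MRE/ADG machinery: edges carry mapping functions, and setting one representation's attribute pushes consistent values to the attributes that depend on it.

```python
class ADG:
    """Minimal attribute-dependency sketch: edges carry mapping functions,
    and setting an attribute propagates values to dependent attributes.
    Cycle handling, which a real ADG needs, is omitted here."""
    def __init__(self):
        self.vals, self.deps = {}, {}

    def depend(self, src, dst, f):
        self.deps.setdefault(src, []).append((dst, f))

    def set(self, name, value):
        self.vals[name] = value
        for dst, f in self.deps.get(name, []):
            self.set(dst, f(value))  # recursive propagation along edges

# Keep a platoon-level position consistent with its tank-level counterparts.
g = ADG()
g.depend('platoon.x', 'tank1.x', lambda x: x - 5)  # offsets are illustrative
g.depend('platoon.x', 'tank2.x', lambda x: x + 5)
g.set('platoon.x', 100)
print(g.vals)  # {'platoon.x': 100, 'tank1.x': 95, 'tank2.x': 105}
```

The cost question the abstract raises is visible even here: every update at one resolution fans out along the dependency edges, so consistency is bought with extra work per attribute write.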
The prevention of deadlock in certain types of distributed simulation systems requires special synchronization protocols. These protocols often create an excessive amount of performance-degrading communication; yet a protocol with the minimum communication may not lead to the fastest network finishing time. We propose a protocol that attempts to balance the network's need for auxiliary information against the cost of providing that information. Using an empirical study, we demonstrate the efficiency of this protocol. We also show that the requirements at…
Emergent behaviors in simulations require explanation, so that valid behaviors can be separated from design or coding errors. We present a taxonomy to be applied to emergent behaviors of unknown validity. Our goal is to facilitate the explanation process. Once a user identifies an emergent behavior as a certain type within our taxonomy, exploration can commence in a manner befitting that type. Exploration based on type supports the narrowing of possibilities and suggests exploration methods, thus facilitating explanation. Ideally, the taxonomy would be robust, allowing reasonable variation...
We introduce a class of networks called Isotach networks designed to reduce the cost of synchronization in parallel computations. Isotach networks maintain an invariant that allows each process to control the logical times at which its messages are received and, consequently, executed. This ability lets processes pipeline operations without sacrificing sequential consistency and send isochrons, groups of operations that appear to be executed as an indivisible unit. Isochrons allow processes to execute atomic actions without locks. Other uses include ensuring causal message delivery among...
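The observable effect of isochrons can be sketched with a toy sequencer (my own illustration of the delivery semantics, not the Isotach network mechanism): each isochron is a group of writes sharing one logical receive time, and groups are applied indivisibly, in logical-time order, regardless of send order.

```python
import heapq

class IsochronSequencer:
    """Toy sketch of isochron delivery semantics: groups of operations
    tagged with one logical time are applied back-to-back (indivisibly),
    in logical-time order, independent of physical send order."""
    def __init__(self):
        self.heap = []

    def send(self, logical_time, sender, writes):
        heapq.heappush(self.heap, (logical_time, sender, writes))

    def run(self):
        state = {}
        while self.heap:
            _, _, writes = heapq.heappop(self.heap)  # lowest logical time first
            for key, val in writes:  # applied as an indivisible unit
                state[key] = val
        return state

seq = IsochronSequencer()
seq.send(2, 'p2', [('x', 20), ('y', 21)])  # sent first, delivered second
seq.send(1, 'p1', [('x', 10), ('y', 11)])
print(seq.run())  # {'x': 20, 'y': 21}: the later isochron wins atomically
```

Because both of p2's writes land together, no observer can ever see the mixed state `x = 20, y = 11`, which is the lock-free atomicity the abstract describes.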
Emergent behaviors in simulations require explanation, so that valid behaviors can be separated from design or coding errors. Validation of emergent behavior requires the accumulation of insight into the behavior and the conditions under which it arises. Previously, we introduced an approach, Explanation Exploration (EE), to gather such insight using semi-automatic model adaptation. We improve on our previous work by iteratively applying causal inference procedures to the samples gathered. Iterative application reveals interactions...