Ceriel J. H. Jacobs

ORCID: 0000-0002-4692-7245
Research Areas
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Semantic Web and Ontologies
  • Natural Language Processing Techniques
  • Scientific Computing and Data Management
  • Distributed Systems and Fault Tolerance
  • Advanced Database Systems and Queries
  • Semigroups and Automata Theory
  • Algorithms and Data Compression
  • Advanced Data Storage Technologies
  • Logic, Reasoning, and Knowledge
  • Logic, Programming, and Type Systems
  • Advanced Graph Neural Networks
  • Cloud Computing and Resource Management
  • Graph Theory and Algorithms
  • Software Engineering Research
  • DNA and Biological Computing
  • Linguistics and Discourse Analysis
  • Data Quality and Management
  • Interconnection Networks and Systems
  • Optimization and Search Problems
  • Embedded Systems Design Techniques
  • Advanced Algebra and Logic
  • Constraint Satisfaction and Optimization
  • Context-Aware Activity Recognition Systems

Vrije Universiteit Amsterdam
2010-2020

University of Amsterdam
1988-2007

Abstract: In computational Grids, performance-hungry applications need to simultaneously tap the power of multiple, dynamically available sites. The crux of designing Grid programming environments stems exactly from the dynamic availability of compute cycles: Grid programming environments need to (a) be portable to run on as many sites as possible, (b) be flexible to cope with different network protocols and changing groups of compute nodes, while (c) providing efficient (local) communication that enables high-performance computing in the first place. Existing...

10.1002/cpe.860 article EN Concurrency and Computation Practice and Experience 2005-02-23

Orca is a portable, object-based distributed shared memory (DSM) system. This article studies and evaluates the design choices made in the Orca system and compares it with other DSMs. The article gives a quantitative analysis of Orca's coherence protocol (based on write-updates with function shipping), the totally ordered group communication protocol, the strategy for object placement, and the all-software, user-space architecture. Performance measurements for 10 parallel applications illustrate the trade-offs and show that essentially the right...

10.1145/273011.273014 article EN ACM Transactions on Computer Systems 1998-02-01
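The write-update coherence with function shipping mentioned in the Orca abstract above can be illustrated with a minimal single-process Python sketch. This is a hypothetical model for intuition only; the class and method names are illustrative and are not Orca's actual API or protocol implementation:

```python
class ReplicatedObject:
    """Sketch of a write-update replicated shared object.

    Each processor holds a full copy. Reads are local (no communication),
    and write operations are shipped as functions to every replica in the
    same total order, which keeps all copies consistent.
    """

    def __init__(self, initial, num_procs):
        self.replicas = [initial] * num_procs  # one copy per processor

    def read(self, proc):
        return self.replicas[proc]             # purely local read

    def write(self, op):
        # Totally ordered broadcast of the operation: every replica
        # applies the same function, in the same order.
        self.replicas = [op(v) for v in self.replicas]


counter = ReplicatedObject(0, num_procs=4)
counter.write(lambda v: v + 10)
counter.write(lambda v: v * 2)
```

Because every replica applies the same operations in the same order, a read on any processor now returns the same value (20), which is the essence of a write-update, totally ordered protocol.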

Java offers interesting opportunities for parallel computing. In particular, Remote Method Invocation (RMI) provides a flexible kind of remote procedure call (RPC) that supports polymorphism. Sun's RMI implementation achieves this flexibility at the cost of a major runtime overhead. The goal of this article is to show that RMI can be implemented efficiently, while still supporting polymorphism and allowing interoperability with Java Virtual Machines (JVMs). We study a new approach to implementing RMI, using a compiler-based...

10.1145/506315.506317 article EN ACM Transactions on Programming Languages and Systems 2001-11-01

The evaluation of Datalog rules over large Knowledge Graphs (KGs) is essential for many applications. In this paper, we present a new method for materializing inferences, which combines a column-based memory layout with novel optimization methods that avoid redundant inferences at runtime. Pro-active caching of certain subqueries further increases efficiency. Our empirical evaluation shows that our approach can often match or even surpass the performance of state-of-the-art systems, especially under restricted resources.

10.1609/aaai.v30i1.9993 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2016-02-21
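The idea of materializing inferences while avoiding redundant derivations, as described in the abstract above, can be illustrated with semi-naive evaluation of the classic transitive-closure rules `path(X,Y) :- edge(X,Y)` and `path(X,Z) :- path(X,Y), edge(Y,Z)`. This is a generic textbook sketch in Python, not the paper's column-based implementation:

```python
def materialize(edges):
    """Semi-naive Datalog materialization of path/2 from edge/2.

    Only facts derived in the previous round (the 'delta') are joined
    against edge/2, so the same inference is never re-attempted from
    old facts in later rounds.
    """
    facts = set(edges)   # all path facts derived so far
    delta = set(edges)   # facts that were new in the last round
    while delta:
        new = set()
        for (x, y) in delta:
            for (y2, z) in edges:
                if y == y2 and (x, z) not in facts:
                    new.add((x, z))
        facts |= new
        delta = new      # next round joins only the newly derived facts
    return facts


paths = materialize({(1, 2), (2, 3), (3, 4)})
```

On the chain 1→2→3→4 this derives the three missing transitive edges (1,3), (2,4), and (1,4) and then reaches a fixpoint, because the final round produces an empty delta.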

Computational grids have an enormous potential to provide compute power. However, this power remains largely unexploited today for most applications, except trivially parallel programs. Developing grid applications simply is too difficult. Grids introduce several problems not encountered before, mainly due to the highly heterogeneous and dynamic computing and networking environment. Furthermore, failures occur frequently, and resources may be claimed by higher-priority jobs at any time. In this article, we...

10.1145/1709093.1709096 article EN ACM Transactions on Programming Languages and Systems 2010-03-01

The use of parallel and distributed computing systems is essential to meet the ever-increasing computational demands of many scientific and industrial applications. Ibis allows easy programming and deployment of compute-intensive applications, even for dynamic, faulty, and heterogeneous environments.

10.1109/mc.2010.184 article EN Computer 2010-07-01

Currently, MapReduce is the most popular programming model for large-scale data processing, and this has motivated the research community to improve its efficiency, either with new extensions, algorithmic optimizations, or hardware. In this paper we address two main limitations of MapReduce: one relates to the model's limited expressiveness, which prevents the implementation of complex programs that require multiple steps or iterations. The other is that implementations (e.g., Hadoop) provide good resource utilization only...

10.1109/icdcs.2014.62 article EN 2014-06-01
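For reference, the MapReduce model that the paper above addresses boils down to a map phase emitting key-value pairs, a grouping step, and a reduce phase per key. A single-machine Python sketch (the function names are illustrative, not Hadoop's API):

```python
from collections import defaultdict


def map_reduce(records, mapper, reducer):
    """Single-machine sketch of the MapReduce programming model."""
    groups = defaultdict(list)
    for record in records:                 # map phase: emit (key, value) pairs
        for key, value in mapper(record):
            groups[key].append(value)
    # reduce phase: one reducer call per key, over all grouped values
    return {key: reducer(key, values) for key, values in groups.items()}


# Word count, the canonical example. Note the expressiveness limitation
# mentioned in the abstract: a computation needing several such rounds
# (e.g. iterative graph algorithms) must chain multiple jobs by hand.
counts = map_reduce(
    ["to be or not to be"],
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda key, values: sum(values),
)
```

Here `counts` maps each word to its frequency ("to" and "be" occur twice, "or" and "not" once); a real implementation additionally shuffles the grouped pairs across machines between the two phases.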

This paper describes and evaluates the use of aggressive static analysis in Jackal, a fine-grain Distributed Shared Memory (DSM) system for Java. Jackal uses an optimizing, source-level compiler rather than the binary rewriting techniques employed by most other DSM systems. Source-level compilation makes existing access-check optimizations (e.g., batching) more effective and enables two novel optimizations: object-graph aggregation and automatic computation migration.

10.1145/379539.379578 article EN 2001-06-18

Many programming languages support either task parallelism or data parallelism, but few provide a uniform framework for writing applications that need both types of parallelism. We present a language and system that integrates both, using shared objects. Shared objects may be stored on one processor or replicated. Objects may also be partitioned and distributed over several processors. Task parallelism is achieved by forking processes remotely and having them communicate and synchronize through shared objects. Data parallelism is achieved by executing operations on partitioned objects in parallel. Writing...

10.1145/295656.295658 article EN ACM Transactions on Programming Languages and Systems 1998-11-01
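The abstract's combination of data parallelism (an operation applied to the partitions of an object in parallel) with forked worker tasks can be mimicked on a single machine with a thread pool. A hedged Python sketch under that assumption; this is not the paper's actual language or runtime:

```python
from concurrent.futures import ThreadPoolExecutor


class PartitionedObject:
    """Sketch of a partitioned shared object: the data is split into
    partitions, and an operation is applied to all partitions by
    concurrently running tasks (data parallelism)."""

    def __init__(self, data, parts):
        step = (len(data) + parts - 1) // parts  # ceil division
        self.partitions = [data[i:i + step] for i in range(0, len(data), step)]

    def parallel_map(self, op):
        # One task per partition; results come back in partition order.
        with ThreadPoolExecutor(max_workers=len(self.partitions)) as pool:
            results = pool.map(lambda part: [op(x) for x in part],
                               self.partitions)
        return [x for part in results for x in part]


obj = PartitionedObject(list(range(8)), parts=4)
squares = obj.parallel_map(lambda x: x * x)
```

Each of the four partitions holds two elements and is processed by its own task, so `squares` is the element-wise square of the input in the original order.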

New generations of many-core hardware become available frequently and are typically attractive extensions for data-centers because of power-consumption and performance benefits. As a result, supercomputers and clusters are becoming heterogeneous and start to contain a variety of compute devices. Obtaining performance from a homogeneous cluster-computer is already challenging, but achieving it on a heterogeneous cluster is even more demanding. Related work primarily focuses on homogeneous clusters. In this paper we present Cashmere, a programming system for such heterogeneous clusters. Cashmere is a tight...

10.1109/ipdps.2015.38 article EN 2015-05-01

The increasing availability and usage of Knowledge Graphs (KGs) on the Web calls for scalable general-purpose solutions to store this type of data structure. We propose Trident, a novel storage architecture for very large KGs on centralized systems. Trident uses several interlinked data structures to provide fast access to nodes and edges, with the physical storage changing depending on the topology of the graph to reduce the memory footprint. In contrast to single architectures designed for specific tasks, our approach offers an interface with few low-level...

10.1145/3366423.3380246 article EN 2020-04-20
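The idea of physical storage that adapts to graph topology, mentioned in the abstract above, can be sketched by switching a node's neighbor representation once its degree grows. This is an assumption-laden illustration inspired by the abstract, not Trident's actual layout; the class, threshold, and representations are hypothetical:

```python
class AdaptiveGraph:
    """Sketch of topology-dependent physical storage: low-degree nodes
    keep a compact sorted tuple of neighbors (small memory footprint),
    while high-degree nodes switch to a hash set for O(1) edge lookups.
    """

    THRESHOLD = 4  # hypothetical degree at which representation switches

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        nbrs = self.adj.get(u, ())
        if isinstance(nbrs, tuple):              # compact representation
            if v not in nbrs:
                nbrs = tuple(sorted(nbrs + (v,)))
                # switch once the node's degree exceeds the threshold
                self.adj[u] = set(nbrs) if len(nbrs) > self.THRESHOLD else nbrs
        else:                                    # hash-set representation
            nbrs.add(v)

    def has_edge(self, u, v):
        return v in self.adj.get(u, ())


g = AdaptiveGraph()
for v in range(6):
    g.add_edge(0, v)   # node 0 becomes high-degree and switches to a set
g.add_edge(1, 0)       # node 1 stays in the compact tuple representation
```

The design point this illustrates is that one logical graph interface can sit on top of several interlinked physical structures chosen per node.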

Analyzing digital images is an important investigation technique in forensics, with the ever-increasing number of images from computers and smartphones. In this article we aim to advance the state-of-the-art in common image source identification (identifying images which originate from the same camera). To this end, we present two types of applications for different goals that make use of a) a modern desktop computer with a GPU and b) a highly heterogeneous cluster with many kinds of GPUs, something we call computing jungles. The first application targets medium-scale...

10.1016/j.diin.2018.09.002 article EN cc-by Digital Investigation 2018-09-18