Robert W. Wisniewski

ORCID: 0000-0001-7393-0813
Research Areas
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Advanced Data Storage Technologies
  • Distributed systems and fault tolerance
  • Cloud Computing and Resource Management
  • Historical and Religious Studies of Rome
  • Embedded Systems Design Techniques
  • Classical Antiquity Studies
  • Interconnection Networks and Systems
  • Software System Performance and Reliability
  • Real-Time Systems Scheduling
  • Byzantine Studies and History
  • Historical, Religious, and Philosophical Studies
  • Advanced Software Engineering Methodologies
  • Historical and Archaeological Studies
  • Ancient Mediterranean Archaeology and History
  • Historical and Architectural Studies
  • Security and Verification in Computing
  • Historical and Linguistic Studies
  • Polish Historical and Cultural Studies
  • Logic, programming, and type systems
  • Central European Literary Studies
  • Language and Culture
  • Medieval Literature and History
  • Archaeology and Historical Studies

Hewlett Packard Enterprise (United States)
2024

University of Warsaw
2016-2024

Cracow University of Technology
2024

Intel (United States)
2012-2021

IBM (United States)
2004-2019

University of Tennessee at Knoxville
2011-2018

Intel (United Kingdom)
2014-2016

IBM Research - Thomas J. Watson Research Center
2005-2013

Argonne National Laboratory
2013

Alliance for Safe Kids
2011

Over the last 20 years, the open-source community has provided more and more software on which the world’s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and integration of the technologies necessary to make them work together smoothly and efficiently, both within...

10.1177/1094342010391989 article EN The International Journal of High Performance Computing Applications 2011-01-06

We present here a report produced by a workshop on ‘Addressing failures in exascale computing’ held in Park City, Utah, 4–11 August 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels of a computing system, discuss existing knowledge of resilience across the various hardware and software layers of an exascale system, and build on those results, examining potential solutions from both a hardware and a software perspective and focusing on a combined approach. The workshop brought together participants with expertise in applications, system software, and hardware;...

10.1177/1094342014522573 article EN The International Journal of High Performance Computing Applications 2014-03-21

Blue Gene/Q aims to build a massively parallel high-performance computing system out of power-efficient processor chips, resulting in power-efficient, cost-efficient, and floor-space-efficient systems. Focusing on reliability during design helps with scaling to large systems and lowers the total cost of ownership. This article examines the architecture of the Blue Gene/Q Compute chip, which combines processors, memory, and communication functions on a single chip.

10.1109/mm.2011.108 article EN IEEE Micro 2011-12-20

Scale-up solutions in the form of large SMPs have represented the mainstream of commercial computing for the past several years. The major server vendors continue to provide increasingly larger and more powerful machines. More recently, scale-out solutions, clusters of smaller machines, have gained increased acceptance in commercial computing. Scale-out solutions are particularly effective for high-throughput Web-centric applications. In this paper, we investigate the behavior of two competing approaches to parallelism, scale-up and scale-out, in an...

10.1109/ipdps.2007.370631 article EN 2007-01-01

Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate,...

10.1177/1094342018778123 article EN The International Journal of High Performance Computing Applications 2018-07-01

K42 is one of the few recent research projects examining operating system design and structure issues in the context of a new whole-system design. K42 is open source and was designed from the ground up to perform well and to be scalable, customizable, and maintainable. The project was begun in 1996 by a team at IBM Research. Over the last nine years there has been a development effort on K42 of between six and twenty researchers and developers across IBM, collaborating universities, and national laboratories. K42 supports the Linux API and ABI, and is able to run unmodified...

10.1145/1217935.1217949 article EN 2006-04-18

Autonomic computing systems are designed to be self-diagnosing and self-healing, such that they detect performance and correctness problems, identify their causes, and apply the appropriate remedy. These abilities can improve performance, uptime, and security, while simultaneously reducing the effort and skills required of system administrators. One way to support these abilities is by allowing monitoring code, diagnostic code, and function implementations to be dynamically inserted into and removed from live systems. This "hot swapping" avoids...

10.1147/sj.421.0060 article EN IBM Systems Journal 2003-01-01
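
The hot-swapping idea above is easiest to see with a small example. The sketch below is a minimal, hypothetical C illustration: every call goes through an atomically updated function pointer, so an instrumented implementation can be swapped in while the program runs. K42's actual mechanism is richer (it interposes on object invocations and quiesces in-flight calls before switching); the names here are invented for illustration.

```c
/* Minimal sketch of hot-swapping a function implementation at run time
 * via an atomically updated function pointer. Illustrative only; K42's
 * real mechanism quiesces in-flight calls before switching. */
#include <stdatomic.h>
#include <stdio.h>

typedef long (*compute_fn)(long);

static long plain_compute(long x) { return x * x; }

/* Instrumented variant that can be swapped in while the system runs. */
static long monitored_compute(long x) {
    fprintf(stderr, "compute called with %ld\n", x);  /* monitoring hook */
    return x * x;
}

static _Atomic(compute_fn) active_compute = plain_compute;

long compute(long x) {
    /* Every call pays one level of indirection through the pointer. */
    compute_fn fn = atomic_load_explicit(&active_compute, memory_order_acquire);
    return fn(x);
}

int main(void) {
    printf("%ld\n", compute(3));            /* plain version           */
    atomic_store_explicit(&active_compute,  /* "hot swap" the function */
                          monitored_compute, memory_order_release);
    printf("%ld\n", compute(4));            /* now instrumented        */
    return 0;
}
```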

Hardware performance counters (HPCs) are increasingly being used to analyze performance and identify the causes of bottlenecks. However, HPCs are difficult to use for several reasons. Microprocessors do not provide enough counters to simultaneously monitor the many different types of events needed to form an overall understanding of performance. Moreover, HPCs primarily count low-level micro-architectural events, from which it is difficult to extract the high-level insight required for identifying the causes of performance problems. We describe two techniques that help overcome these...

10.1145/1088149.1088163 article EN 2005-06-20
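
As context for the counter work above, the sketch below shows the kind of raw, low-level interface such tools build on: Linux's perf_event_open(2) system call, used here to count CPU cycles over a measured region. This is a generic Linux example, not the paper's toolchain; the paper's contribution sits above this layer, multiplexing scarce counters and mapping raw counts to high-level causes.

```c
/* Reading a hardware performance counter on Linux via perf_event_open(). */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES;  /* count CPU cycles */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    /* No glibc wrapper exists; invoke the syscall directly. */
    int fd = syscall(SYS_perf_event_open, &attr, 0 /* this process */,
                     -1 /* any CPU */, -1 /* no group */, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile uint64_t sum = 0;               /* region being measured */
    for (int i = 0; i < 1000000; i++) sum += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t cycles;
    read(fd, &cycles, sizeof(cycles));
    printf("cycles: %llu\n", (unsigned long long)cycles);
    close(fd);
    return 0;
}
```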

The Petascale era has recently been ushered in, and many researchers have already turned their attention to the challenges of exascale computing. To achieve petascale computing, two broad approaches for operating system kernels were taken: a lightweight approach, embodied by IBM Blue Gene's CNK, and a more fullweight approach, embodied by Cray's CNL. There are strengths and weaknesses to each approach. Examining the current generation can provide insight as to what mechanisms may be needed for the next generation. The contributions of this paper are the experiences we had with...

10.1109/sc.2010.22 article EN 2010-11-01

If the operating system could be specialized for every application, many applications would run faster. For example, Java virtual machines (JVMs) provide their own threading model and memory protection, so general-purpose operating system implementations of these abstractions are redundant. However, traditional means of transforming existing systems into specialized ones are difficult to adopt because they require replacing the entire operating system. This paper describes Libra, an execution environment specialized for IBM's J9 JVM. Libra does not replace...

10.1145/1254810.1254817 article EN 2007-06-13

Linux®, or more specifically, the Linux API, plays a key role in HPC computing. Even for extreme-scale computing, a known and familiar API is required on production machines. However, an off-the-shelf Linux distribution faces challenges at extreme scale. To date, two approaches have been used to address the challenge of providing an operating system (OS) at extreme scale. In the Full-Weight Kernel (FWK) approach, a full OS distribution, typically Linux, forms the starting point, and work is undertaken to remove features from the environment so that it will scale up across more cores...

10.1145/2612262.2612263 article EN 2014-06-10

Efficient synchronization is important for achieving good performance in parallel programs, especially on large-scale multiprocessors. Most synchronization algorithms have been designed to run on a dedicated machine, with one application process per processor, and can suffer serious performance degradation in the presence of multiprogramming. Problems arise when running processes block or, worse, busy-wait for action on the part of a process that the scheduler has chosen not to run. We show that these problems are particularly severe for scalable synchronization algorithms based on distributed...

10.1145/244764.244765 article EN ACM Transactions on Computer Systems 1997-02-01
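
To make the multiprogramming problem above concrete, here is a minimal C11 sketch of a spin-then-yield lock: a waiter spins briefly in case the holder is running, then yields so a preempted holder can be rescheduled. This illustrates the failure mode being addressed, not the paper's scheduler-conscious algorithms, which consult kernel-provided scheduling state to decide whether to spin or block; SPIN_LIMIT is an assumed tuning knob.

```c
/* Spin-then-yield lock sketch. Initialize with:
 *   spin_yield_lock_t l = { ATOMIC_FLAG_INIT };                        */
#include <stdatomic.h>
#include <sched.h>

#define SPIN_LIMIT 1000   /* assumed: spins before yielding */

typedef struct { atomic_flag held; } spin_yield_lock_t;

void lock(spin_yield_lock_t *l) {
    int spins = 0;
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire)) {
        if (++spins >= SPIN_LIMIT) {   /* holder may have been preempted:  */
            sched_yield();             /* let the scheduler run it instead */
            spins = 0;                 /* of burning cycles busy-waiting.  */
        }
    }
}

void unlock(spin_yield_lock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}
```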

Operating system noise is a well-known problem that may limit application scalability on large-scale machines, significantly reducing their performance. Though the problem is well studied, much of the previous work has been qualitative. We have developed a technique to provide a quantitative, descriptive analysis of each OS event that contributes to noise. The mechanism allows us to detail all sources of noise through precise kernel instrumentation and provides the frequency and duration of each event. Such a description gives developers a better...

10.1109/ipdps.2011.84 article EN 2011-05-01
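
A user-level way to observe such noise, in the spirit of fixed-work-quantum probes, is sketched below: time the same fixed amount of work repeatedly, and treat iterations that run long as having been interrupted by OS activity. This only shows that noise occurred; the kernel instrumentation described above is what attributes each delay to a specific OS event.

```c
/* Fixed-work noise probe: repeated timing of identical work reveals
 * OS interference as outliers above the minimum duration.          */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void) {
    enum { SAMPLES = 10000, WORK = 100000 };
    static uint64_t dur[SAMPLES];
    volatile uint64_t sink = 0;

    for (int s = 0; s < SAMPLES; s++) {
        uint64_t t0 = now_ns();
        for (int i = 0; i < WORK; i++) sink += i;   /* fixed work */
        dur[s] = now_ns() - t0;
    }

    uint64_t min = dur[0];
    for (int s = 1; s < SAMPLES; s++) if (dur[s] < min) min = dur[s];
    for (int s = 0; s < SAMPLES; s++)
        if (dur[s] > min + min / 10)                /* >10% over baseline */
            printf("sample %d delayed by %llu ns\n",
                   s, (unsigned long long)(dur[s] - min));
    return 0;
}
```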

Traditionally, there have been two approaches to providing an operating environment for high performance computing (HPC). A Full-Weight Kernel (FWK) approach starts with a general-purpose operating system and strips it down to better scale up across more cores and out across larger clusters. A Light-Weight Kernel (LWK) approach starts with a new thin kernel code base and extends its functionality by adding services needed by applications. In both cases, the goal is to provide end-users with a scalable HPC environment that reliably runs their applications. To achieve this goal, we propose...

10.1109/sbac-pad.2012.14 article EN 2012-10-01

With the growing awareness that individual hardware cores will not continue to produce the same level of performance improvement, there is a need to develop an integrated approach to performance optimization. In this paper we present a paradigm for continuous program optimization (CPO), whereby automatic agents monitor and optimize application and system performance. The monitoring data are used to analyze and create models of application and system behavior. Using that analysis, we describe how CPO can improve the performance of both applications and the underlying system. The CPO paradigm, implemented...

10.1109/pact.2005.32 article EN 2005-01-01

Designing and implementing system software so that it scales well on shared-memory multiprocessors (SMMPs) has proven to be surprisingly challenging. To improve scalability, most designers to date have focused on concurrency, iteratively eliminating the need for locks and reducing lock contention. However, our experience indicates that locality is just as, if not more, important, and that focusing on locality ultimately leads to a more scalable system. In this paper, we describe a methodology and a framework for constructing structured...

10.1145/1275517.1275518 article EN ACM Transactions on Computer Systems 2007-08-01
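
A tiny C example of the locality-first principle above: give each processor its own cache-line-padded counter slot and sum the slots on (rare) reads, instead of sharing one counter behind a lock. The paper's clustered-object framework generalizes this pattern to whole system-software components; the layout below is illustrative only.

```c
/* Locality-first counter: per-CPU padded slots, no shared hot line. */
#include <stdatomic.h>
#include <stdint.h>

#define NCPUS 64          /* assumed maximum processor count */
#define CACHELINE 64

struct percpu_slot {
    _Alignas(CACHELINE) atomic_uint_fast64_t count;  /* one line per CPU */
};

static struct percpu_slot counters[NCPUS];

void counter_inc(int cpu) {
    /* Touches only this CPU's cache line: no sharing, no lock. */
    atomic_fetch_add_explicit(&counters[cpu].count, 1, memory_order_relaxed);
}

uint64_t counter_read(void) {
    uint64_t total = 0;   /* reads are rare; pay the traversal cost here */
    for (int i = 0; i < NCPUS; i++)
        total += atomic_load_explicit(&counters[i].count, memory_order_relaxed);
    return total;
}
```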

Integrating palaeoclimatological proxies and historical records, which is necessary to achieve a more complete understanding of climate impacts on past societies, is a challenging task, often leading to unsatisfactory or even contradictory conclusions. This has until recently been the case for Italy, the heart of the Roman Empire, during the transition between Antiquity and the Middle Ages. In this paper, we present new high-resolution speleothem data from the Apuan Alps (Central Italy). The data document a period of very wet...

10.1007/s10584-021-03043-x article EN cc-by Climatic Change 2021-03-01

Supercomputers and clouds both strive to make a large number of computing cores available for computation. More recently, similar objectives such as low power, manageability at scale, and low cost of ownership are driving a convergence of hardware and software. Challenges remain, however, one of which is that current cloud infrastructure does not yield the performance sought by many scientific applications. A source of performance loss comes from virtualization, and virtualization of the network in particular. This paper provides an introduction...

10.1145/1851476.1851534 article EN 2010-06-21

Programming, understanding, and tuning the performance of large multiprocessor systems is challenging. Experts have difficulty achieving good utilization for applications on these machines. The task of implementing a scalable system such as an operating system or database on such machines is even more challenging, and its importance is increasing as the number of cores per chip and the size of multiprocessors increase. Crucial to achieving good performance is being able to understand the behavior of the system. We developed an efficient, unified tracing infrastructure that allows correctness...

10.1145/1048935.1050154 article EN 2003-11-15
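
Below is a sketch of the kind of low-overhead mechanism such a tracing infrastructure needs: fixed-size records written into per-processor ring buffers with atomically reserved slots, so producers never take a lock or touch another CPU's cache lines. The record layout and sizes are assumptions for illustration, not the paper's actual design.

```c
/* Per-CPU lock-free trace ring sketch; layout is illustrative only. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_ENTRIES 4096             /* power of two for cheap wrap */

struct trace_event {
    uint64_t timestamp;
    uint32_t event_id;
    uint32_t data;
};

struct trace_ring {
    atomic_uint_fast32_t next;        /* slot reservation cursor */
    struct trace_event ev[RING_ENTRIES];
};

/* One ring per processor avoids cross-CPU cache traffic on the hot path. */
static struct trace_ring rings[64];

void trace_log(int cpu, uint64_t ts, uint32_t id, uint32_t data) {
    struct trace_ring *r = &rings[cpu];
    uint32_t slot = atomic_fetch_add_explicit(&r->next, 1,
                                              memory_order_relaxed)
                    & (RING_ENTRIES - 1);  /* oldest entries overwritten */
    r->ev[slot].timestamp = ts;
    r->ev[slot].event_id = id;
    r->ev[slot].data = data;
}
```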