Michael Voss

ORCID: 0009-0003-3904-0712
Research Areas
  • Parallel Computing and Optimization Techniques
  • Distributed and Parallel Computing Systems
  • Cloud Computing and Resource Management
  • Advanced Data Storage Technologies
  • Embedded Systems Design Techniques
  • Software System Performance and Reliability
  • Scientific Computing and Data Management
  • Algorithms and Data Compression
  • Radiation Dose and Imaging
  • Lung Cancer Diagnosis and Treatment
  • Appendicitis Diagnosis and Management
  • Interconnection Networks and Systems
  • Medical Imaging Techniques and Applications
  • Infrared Thermography in Medicine
  • Distributed systems and fault tolerance
  • Diverticular Disease and Complications
  • Software Testing and Debugging Techniques
  • Logic, programming, and type systems
  • Criminal Law and Policy
  • Gastrointestinal disorders and treatments
  • ICT Impact and Policies
  • Radiomics and Machine Learning in Medical Imaging
  • Digital Radiography and Breast Imaging
  • Scheduling and Optimization Algorithms
  • Reinforcement Learning in Robotics

Intel (United States)
2005-2023

Koo & Associates International (United States)
2019

DePaul University
2019

University of South Florida
2018

St. Joseph’s Healthcare Hamilton
2004-2016

University Medical Centre Mannheim
2007

Heidelberg University
2007

University Hospital Heidelberg
2007

University of Toronto
2002-2005

Purdue University West Lafayette
1997-2002

Compile-time optimization is often limited by a lack of knowledge about the target machine and the input data set. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. To cope with incomplete information at compile time, adaptive and dynamic systems can be used to perform optimization at run time, when complete knowledge of these parameters is available. This paper presents a compiler-supported, high-level adaptive optimization system. Users describe, in a domain-specific language, optimizations...

10.1145/379539.379583 article EN 2001-06-18
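The paper above describes a system in which users express optimizations at a high level and the runtime applies them once machine and input knowledge becomes available. As a minimal illustration of the underlying idea only (not the paper's system), the sketch below keeps several hand-written variants of a kernel and empirically picks the fastest one on the actual input at run time; all names and the variants themselves are hypothetical.

```cpp
// Hypothetical sketch: choose among pre-built variants of a kernel at
// run time, once the real input and machine behavior can be observed.
#include <chrono>
#include <cstddef>
#include <vector>

using Kernel = void (*)(const std::vector<double>&, std::vector<double>&);

// Two illustrative variants of the same computation.
static void variant_simple(const std::vector<double>& in, std::vector<double>& out) {
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = 2.0 * in[i];
}
static void variant_unrolled(const std::vector<double>& in, std::vector<double>& out) {
    std::size_t i = 0, n = in.size();
    for (; i + 4 <= n; i += 4) {
        out[i]     = 2.0 * in[i];
        out[i + 1] = 2.0 * in[i + 1];
        out[i + 2] = 2.0 * in[i + 2];
        out[i + 3] = 2.0 * in[i + 3];
    }
    for (; i < n; ++i) out[i] = 2.0 * in[i];
}

// Time each variant on the real input and keep the fastest for reuse.
Kernel pick_best(const std::vector<double>& in, std::vector<double>& out) {
    Kernel candidates[] = {variant_simple, variant_unrolled};
    Kernel best = candidates[0];
    auto best_time = std::chrono::nanoseconds::max();
    for (Kernel k : candidates) {
        auto t0 = std::chrono::steady_clock::now();
        k(in, out);
        auto dt = std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::steady_clock::now() - t0);
        if (dt < best_time) { best_time = dt; best = k; }
    }
    return best;  // call this variant on subsequent invocations
}
```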

Intel® Threading Building Blocks (Intel® TBB) is a C++ library for parallel programming. Its templates for generic loops are built upon nested parallelism and a work-stealing scheduler. This paper discusses optimizations in which the high-level algorithm inspects or biases stealing. Two are discussed in detail: the first dynamically optimizes grain size based on observed behavior, and the second improves on prior work that exploits cache locality through biased stealing. The paper also shows that, in a task-stealing environment, deferring spawning can improve...

10.1109/ipdps.2008.4536188 article EN Proceedings - IEEE International Parallel and Distributed Processing Symposium 2008-04-01
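The grain-size tuning discussed above is exposed in TBB's public interface through blocked ranges and partitioners. The sketch below is a plain usage example, not code from the paper: a fixed grain size with simple_partitioner versus letting auto_partitioner adapt chunk sizes at run time; the grain-size value 1024 is an arbitrary placeholder.

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <cstddef>
#include <vector>

void scale(std::vector<float>& v) {
    // Explicit grain size: simple_partitioner splits the range down to
    // chunks of roughly 1024 iterations, each becoming one task.
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, v.size(), /*grainsize=*/1024),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i) v[i] *= 2.0f;
        },
        tbb::simple_partitioner{});

    // Alternatively, auto_partitioner adjusts chunk sizes at run time
    // in response to how much stealing it observes.
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, v.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i) v[i] *= 2.0f;
        },
        tbb::auto_partitioner{});
}
```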

The clinical diagnosis of acute colonic diverticulitis (ACD) can be difficult, and ultrasonography performed by experts is valuable in establishing the diagnosis. This prospective observational trial aimed to assess the diagnostic accuracy and value of ultrasonography performed routinely by surgical residents in training. The course of 187 unselected consecutive patients admitted with suspected ACD was studied prospectively. Patients who had surgery for generalized peritonitis were excluded, leaving 143 for evaluation. Ultrasonographic findings...

10.1046/j.1365-2168.1997.02604.x article EN British Journal of Surgery 1997-03-01

Intel Threading Building Blocks is a key component of Intel Parallel Building Blocks. This widely used C++ template library helps developers achieve well-performing, modular parallel programs in multiprogrammed environments.

10.1109/ms.2011.12 article EN IEEE Software 2010-12-22

Dynamic program optimization offers performance improvements far beyond those possible with traditional compile-time optimization. These gains are due to the ability to exploit both architectural and input data set characteristics that are unknown prior to execution time. In this paper, we propose a novel framework for dynamic optimization, ADAPT (Automated De-coupled Adaptive Program Transformation), which builds on the strengths of existing approaches. The key to our approach is the de-coupling of the compilation of new code variants...

10.1109/icpp.2000.876107 article EN 2002-11-07
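The de-coupling idea highlighted above, generating improved code off the application's critical path and switching to it when ready, can be illustrated independently of ADAPT itself. The sketch below is a hypothetical, simplified stand-in: a background thread "prepares" a new variant and installs it through an atomic function pointer while the main loop keeps running whichever variant is current.

```cpp
// Hypothetical sketch of de-coupled variant installation; the sleep
// stands in for the time a real system would spend compiling.
#include <atomic>
#include <chrono>
#include <thread>

using Step = long (*)(long);

static long step_default(long x)   { return x + 1; }  // baseline code
static long step_optimized(long x) { return x + 1; }  // stands in for a tuned variant

std::atomic<Step> current_step{step_default};

int main() {
    // "Optimizer" thread: prepares the new variant off the critical path.
    std::thread optimizer([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        current_step.store(step_optimized, std::memory_order_release);
    });

    long v = 0;
    for (int i = 0; i < 1'000'000; ++i) {
        Step s = current_step.load(std::memory_order_acquire);
        v = s(v);  // always calls whichever variant is currently installed
    }

    optimizer.join();
    return v == 1'000'000 ? 0 : 1;
}
```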

Hyperthreaded (HT) and simultaneous multithreaded (SMT) processors are now available in commodity workstations and servers. This technology is designed to increase throughput by executing multiple concurrent threads on a single physical processor. These threads share the processor's functional units and on-chip memory hierarchy in an attempt to make better use of idle resources. Most OpenMP applications have been written assuming a symmetric multiprocessor (SMP), not an SMT, model. Threads on the same processor have interactions...

10.1109/ipdps.2005.386 article EN 2005-04-19
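The interactions described above depend on whether OpenMP threads land on separate physical cores or on sibling hardware threads of the same core. The snippet below is an illustrative experiment, not code from the paper, using standard OpenMP mechanisms available today (OMP_NUM_THREADS, OMP_PLACES) to compare the two placements for the same loop.

```cpp
// Illustrative only: run with threads spread across physical cores,
// then packed onto sibling hardware threads, and compare timings, e.g.
//   OMP_NUM_THREADS=4 OMP_PLACES=cores   ./a.out
//   OMP_NUM_THREADS=4 OMP_PLACES=threads ./a.out
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> a(1 << 24, 1.0);
    double sum = 0.0;

    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        sum += a[i] * a[i];

    std::printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
    return 0;
}
```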

If parallelism can be successfully exploited in a program, significant reductions in execution time can be achieved. However, if sections of the code are dominated by parallel overheads, overall program performance can degrade. We propose a framework, based on an inspector-executor model, for identifying loops that are dominated by overheads and dynamically serializing these loops. We implement this framework in the Polaris parallelizing compiler and evaluate two portable methods for classifying loops as profitable or unprofitable. We show that for six...

10.1109/ipps.1999.760440 article EN 2003-01-20
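The run-time test behind the framework above can be sketched simply: measure how much work a loop actually does and fall back to serial execution when the parallel overhead would dominate. The code below is a minimal sketch under assumed costs, not the Polaris implementation; the 50-microsecond overhead estimate and the 4x profitability threshold are placeholders.

```cpp
// Hypothetical inspector-executor sketch: decide at run time whether a
// loop is worth parallelizing by comparing measured serial time against
// an assumed fixed parallel-region cost.
#include <omp.h>
#include <chrono>
#include <vector>

void saxpy(std::vector<float>& y, const std::vector<float>& x, float a) {
    static bool decided = false;
    static bool run_parallel = true;
    const std::size_t n = y.size();

    if (!decided) {  // "inspector" pass: run once serially and time it
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i) y[i] += a * x[i];
        double serial_us = std::chrono::duration<double, std::micro>(
                               std::chrono::steady_clock::now() - t0).count();
        const double overhead_us = 50.0;      // assumed parallel-region cost
        run_parallel = serial_us > 4.0 * overhead_us;
        decided = true;
        return;
    }

    if (run_parallel) {  // "executor": parallel only when profitable
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(n); ++i) y[i] += a * x[i];
    } else {             // otherwise stay serial and avoid the overhead
        for (std::size_t i = 0; i < n; ++i) y[i] += a * x[i];
    }
}
```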

Compile-time optimization is often limited by a lack of knowledge about the target machine and the input data set. Without this information, compilers may be forced to make conservative assumptions to preserve correctness and to avoid performance degradation. To cope with incomplete information at compile time, adaptive and dynamic systems can be used to perform optimization at run time, when complete knowledge of these parameters is available. This paper presents a compiler-supported, high-level adaptive optimization system. Users describe, in a domain-specific language, optimizations...

10.1145/568014.379583 article EN ACM SIGPLAN Notices 2001-06-18

We present our effort to provide a comprehensive parallel programming environment for the OpenMP directive language. This environment includes a methodology and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to programmers in the time-consuming tasks of the proposed methodology. The features provided include performance and program structure visualization, interactive optimization, modeling, and advising in finding and correcting performance problems. The presented evaluation demonstrates...

10.1155/2001/195437 article EN cc-by Scientific Programming 2001-01-01

Dynamic program optimization allows programs to be generated that are highly tuned for a given environment and input data set. Optimization techniques can be applied and re-applied as machine characteristics are discovered or change. In most dynamic compilation frameworks, the time spent in code generation must be minimized, since it is directly reflected in total execution time. We propose a generic framework for remote optimization that mitigates this need. A local optimizer thread monitors the executing program and selects the sections that should be optimized...

10.1145/351397.351413 article EN 2000-01-01
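The monitoring half of the scheme above, a local thread watching the running program and choosing candidates for optimization, can be sketched with ordinary threading primitives. The code below is a hypothetical illustration with made-up names: code sections bump per-section counters, and a monitor thread periodically reports the hottest section as the next candidate (a real system would hand it to a remote optimizer instead of printing it).

```cpp
// Hypothetical sketch of hot-section monitoring for a background optimizer.
#include <atomic>
#include <array>
#include <chrono>
#include <cstdio>
#include <thread>

constexpr int kSections = 4;
std::array<std::atomic<long>, kSections> hits{};  // execution counts per section
std::atomic<bool> done{false};

void enter_section(int id) { hits[id].fetch_add(1, std::memory_order_relaxed); }

void monitor() {
    while (!done.load()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        int hottest = 0;
        for (int i = 1; i < kSections; ++i)
            if (hits[i].load() > hits[hottest].load()) hottest = i;
        // A real system would ship section `hottest` out for optimization;
        // this sketch just reports it.
        std::printf("candidate for optimization: section %d\n", hottest);
    }
}

int main() {
    std::thread m(monitor);
    for (long i = 0; i < 5'000'000; ++i) enter_section(static_cast<int>(i % 3));
    done = true;
    m.join();
    return 0;
}
```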