- Parallel Computing and Optimization Techniques
- Distributed and Parallel Computing Systems
- Matrix Theory and Algorithms
- Numerical Methods and Algorithms
- Numerical methods for differential equations
- Advanced Data Storage Technologies
- Advanced Control Systems Optimization
- Embedded Systems Design Techniques
- Cryptography and Data Security
- Computer Graphics and Visualization Techniques
- Advanced Optimization Algorithms Research
- Advanced Numerical Methods in Computational Mathematics
- Interconnection Networks and Systems
- Software System Performance and Reliability
- Computational Fluid Dynamics and Aerodynamics
- Reservoir Engineering and Simulation Methods
- Simulation Techniques and Applications
- Complexity and Algorithms in Graphs
- Cloud Computing and Resource Management
- Scientific Computing and Data Management
- Distributed systems and fault tolerance
- Manufacturing Process and Optimization
- Model Reduction and Neural Networks
- Algorithms and Data Compression
- Virtual Reality Applications and Impacts
Technical University of Darmstadt
2016-2025
FH JOANNEUM University of Applied Sciences
2019-2020
Siemens (Austria)
2018-2019
RWTH Aachen University
2004-2014
Innsbruck Medical University
2010
Argonne National Laboratory
1991-2005
FH Aachen
1999-2005
Virtuelle Fabrik (Switzerland)
2005
Cornell University
1985-2003
University of Kaiserslautern
2002
The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f: R^n → R^m. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the solution. Automatic Differentiation of FORtran (ADIFOR) is a source transformation tool that accepts Fortran 77 code and writes portable code for computing the derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a source transformation problem. It employs data...
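The chain-rule propagation underlying such AD tools can be sketched with dual numbers. The `Dual` class below is an illustrative stand-in only (ADIFOR itself is a Fortran 77 source-to-source transformer; none of these names are its actual API):

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative sketch only -- not ADIFOR's mechanism or interface.
import math

class Dual:
    """A value paired with its derivative; each operation applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def sin(x):
    # elementary function with its derivative rule
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# f(x) = x*sin(x) + x, so f'(x) = sin(x) + x*cos(x) + 1
x = Dual(2.0, 1.0)      # seed dx/dx = 1
y = x * sin(x) + x
```

Evaluating `f` on `Dual` inputs yields the function value in `y.val` and the exact derivative in `y.dot`, without symbolic manipulation or divided differences.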
The goal of the LAPACK project is to design and implement a portable linear algebra library for efficient use on a variety of high-performance computers. The library is based on the widely used LINPACK and EISPACK packages for solving linear equations, eigenvalue problems, and least-squares problems, but it extends their functionality in a number of ways. The major methodology for making the algorithms run faster is to restructure them to perform block matrix operations (e.g., matrix-matrix multiplication) in their inner loops. These block operations may be optimized to exploit the memory hierarchy of a specific...
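The block restructuring idea can be illustrated with a tiled matrix multiply; the function name and the block size `nb=32` below are illustrative choices, not LAPACK code:

```python
# Blocked (tiled) matrix multiplication: the triple loop is restructured so
# the inner kernel is a small dense matrix-matrix product, which is the step
# LAPACK delegates to optimized Level 3 BLAS to exploit the memory hierarchy.
import numpy as np

def blocked_matmul(A, B, nb=32):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, nb):
        for j in range(0, m, nb):
            for p in range(0, k, nb):
                # small block product; NumPy slicing clips at the edges
                C[i:i+nb, j:j+nb] += A[i:i+nb, p:p+nb] @ B[p:p+nb, j:j+nb]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 70))
B = rng.standard_normal((70, 50))
C = blocked_matmul(A, B)
```

Each block of `C` is touched once per panel pair, so the working set fits in cache regardless of the overall matrix size.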
Numerical codes that calculate not only a result but also the derivatives of variables with respect to each other facilitate sensitivity analysis, inverse problem solving, and optimization. This paper considers how Adifor 2.0, which won the 1995 Wilkinson Prize for Numerical Software, can automatically differentiate complicated Fortran code much faster than a programmer can do it by hand. The system has three main components: the Adifor preprocessor, the ADIntrinsics exception-handling system, and the SparsLinC library.
A new way to represent products of Householder matrices is given that makes a typical matrix algorithm rich in matrix-matrix multiplication. This is very desirable because matrix-matrix multiplication is the operation of choice for an increasing number of important high-performance computers. We tested the new representation by using it to compute the QR factorization on the FPS-164/MAX. Preliminary results indicate that it is an efficient way to organize the computations.
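A sketch of the idea, under the assumption of unit Householder vectors: a product of reflectors H_j = I - 2 v_j v_j^T can be accumulated into the form Q = I + W·Yᵀ, after which applying Q to a matrix needs only two matrix-matrix products. This is a simplified rendering, not the paper's exact algorithm:

```python
# WY-style representation of a product of Householder matrices:
# H_1 H_2 ... H_k = I + W @ Y.T, built by the recurrence
# Q_new = Q (I - 2 v v^T) = Q + z v^T  with  z = -2 Q v.
import numpy as np

def wy_accumulate(V):
    """Columns of V are unit Householder vectors; returns (W, Y)."""
    n, k = V.shape
    W = np.zeros((n, 0))
    Y = np.zeros((n, 0))
    for j in range(k):
        v = V[:, j:j+1]
        z = -2.0 * (v + W @ (Y.T @ v))   # z = -2 * Q_{j-1} @ v
        W = np.hstack([W, z])
        Y = np.hstack([Y, v])
    return W, Y

rng = np.random.default_rng(1)
n, k = 6, 3
V = rng.standard_normal((n, k))
V /= np.linalg.norm(V, axis=0)           # make each column a unit vector
W, Y = wy_accumulate(V)
Q = np.eye(n) + W @ Y.T                  # applying Q is two GEMMs

# reference: apply the reflectors one at a time
Q_ref = np.eye(n)
for j in range(k):
    v = V[:, j:j+1]
    Q_ref = Q_ref @ (np.eye(n) - 2.0 * v @ v.T)
```

Applying `Q` to a block of vectors costs two matrix-matrix products instead of k rank-1 updates, which is exactly what makes the representation attractive on machines where matrix-matrix multiplication is fast.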
In scientific computing, we often require the derivatives ∂f/∂x of a function f, expressed as a program, with respect to some input parameter(s) x, say. Automatic Differentiation (AD) techniques augment the program with derivative computation by applying the chain rule of calculus to elementary operations in an automated fashion. This article introduces ADIC (Automatic Differentiation of C), a new AD tool for ANSI-C programs. ADIC is currently the only tool that employs a source-to-source transformation approach; that is, it takes C code and produces new C code that computes...
We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of the guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to...
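The rank-revealing property of pivoted QR can be seen with a few lines of SciPy (using `scipy.linalg.qr` with `pivoting=True`, i.e. the Golub column-pivoting strategy; the threshold `1e-10` is an illustrative choice): the magnitudes of diag(R) drop sharply past the numerical rank.

```python
# Rank detection via QR with column pivoting (Golub's strategy).
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
n, r = 50, 7
# build a 50x50 matrix of exact rank 7
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

Q, R, piv = qr(A, pivoting=True)
d = np.abs(np.diag(R))               # non-increasing under column pivoting
rank_est = int(np.sum(d > 1e-10 * d[0]))
```

For well-behaved matrices the diagonal of R separates the "signal" and "noise" singular values cleanly; the guaranteed-reliable algorithms referenced above handle the rare cases where plain Golub pivoting fails to do so.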
Derivatives of mathematical functions play a key role in various areas of numerical and technical computing. Many of these computations are done in MATLAB, a popular environment for technical computing that provides engineers and scientists with capabilities for computing, analysis, visualization, and algorithmic development. For programs written in the MATLAB language, a novel software tool is proposed to automatically transform a given program into another program capable of computing not only the original function but also user-specified derivatives of that function....
The authors develop an algorithm for adaptively estimating the noise subspace of a data matrix, as is required in signal processing applications employing the 'signal subspace' approach. The noise subspace is estimated using a rank-revealing QR factorization instead of the more expensive singular value or eigenvalue decompositions. Using incremental condition estimation to monitor the smallest singular values of triangular matrices, the factorization can be updated inexpensively when new rows are added and old ones deleted. Experiments demonstrate that the approach...
The QR factorization with column pivoting (QRP), originally suggested by Golub [Numer. Math., 7 (1965), pp. 206--216], is a popular approach to computing rank-revealing factorizations. Using Level 1 BLAS, it was implemented in LINPACK, and, using Level 2 BLAS, in LAPACK. While the Level 2 BLAS version delivers superior performance in general, it may result in worse performance for large matrix sizes due to cache effects. We introduce a modification of the QRP algorithm that allows the use of Level 3 BLAS kernels while maintaining the numerical behavior of the LINPACK and...
This paper introduces a new technique for estimating the smallest singular value, and hence the condition number, of a dense triangular matrix as it is generated one row or column at a time. It is also shown how this estimator can be interpreted as trying to approximate the secular equation with a simpler rational function. While one can construct examples where the estimator fails, numerical experiments demonstrate that, despite its small computational cost, it produces reliable estimates. Also given is an example that shows the advantage...
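A simplified variant of the incremental idea (not the paper's exact estimator) can be sketched as follows: maintain a unit vector x with ||R x|| small, and when a column is appended to the triangular matrix, update the estimate by solving a 2×2 eigenproblem rather than refactoring, an O(k) step instead of O(k³).

```python
# Incremental smallest-singular-value estimation for a growing triangular
# matrix. Simplified sketch: when R grows to [[R, v], [0, gamma]], restrict
# the minimization of ||R' y|| to span{[x; 0], [0; 1]} -- a 2x2 eigenproblem.
import numpy as np

def ice_append(sigma, Rx, x, v, gamma):
    """Given unit x with R @ x = Rx and ||Rx|| = sigma, update the estimate
    for the extended triangular matrix [[R, v], [0, gamma]]."""
    b = Rx @ v
    d = v @ v + gamma * gamma
    M = np.array([[sigma**2, b], [b, d]])    # Gram matrix of the 2D subspace
    w, U = np.linalg.eigh(M)                 # eigenvalues ascending
    s, c = U[:, 0]                           # minimizer over the unit circle
    sigma_new = np.sqrt(max(w[0], 0.0))
    x_new = np.append(s * x, c)              # still a unit vector
    Rx_new = np.append(s * Rx + c * v, c * gamma)
    return sigma_new, Rx_new, x_new

rng = np.random.default_rng(3)
n = 8
R = np.triu(rng.standard_normal((n, n))) + 3.0 * np.eye(n)
sigma, Rx, x = abs(R[0, 0]), np.array([R[0, 0]]), np.array([1.0])
for k in range(1, n):
    sigma, Rx, x = ice_append(sigma, Rx, x, R[:k, k], R[k, k])

true_smin = np.linalg.svd(R, compute_uv=False)[-1]
```

Since the estimate is ||R x|| for some unit x, it is always an upper bound on the true smallest singular value; the experiments in the paper address how tight such cheap estimates are in practice.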
We present a software toolbox for symmetric band reduction via orthogonal transformations, together with a testing and timing program. The toolbox contains drivers and computational routines for the reduction of full matrices to banded form and of banded matrices to narrower banded or tridiagonal form, with optional accumulation of the transformations, as well as repacking routines for storage rearrangement. The functionality and calling sequences are described, with a detailed discussion of the “control” parameters that allow adaptation of the codes to particular machine and matrix characteristics. We also briefly describe the program...
We develop an algorithmic framework for reducing the bandwidth of symmetric matrices via orthogonal similarity transformations. This framework includes the reduction of full matrices to banded or tridiagonal form and of banded matrices to narrower banded form, possibly in multiple steps. Our framework leads to algorithms that require fewer floating-point operations than do standard algorithms, if only the eigenvalues are required. In addition, it allows for space-time tradeoffs and enables increased use of blocked transformations.
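The classical one-step special case of this framework, direct reduction of a full symmetric matrix to tridiagonal form by Householder similarity transformations, can be sketched in a few lines (an unblocked textbook version, not the toolbox's multi-step algorithms):

```python
# Householder tridiagonalization of a symmetric matrix: a sequence of
# similarity transformations T <- H T H with H = I - 2 v v^T zeroes out
# everything below the first subdiagonal, preserving the eigenvalues.
import numpy as np

def tridiagonalize(A):
    """Return T orthogonally similar to symmetric A, with T tridiagonal."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k+1:, k].copy()
        alpha = np.linalg.norm(x)
        if alpha == 0.0:
            continue
        v = x
        v[0] += np.copysign(alpha, x[0])   # sign choice avoids cancellation
        v /= np.linalg.norm(v)
        # two-sided update restricted to the trailing rows/columns
        T[k+1:, :] -= 2.0 * np.outer(v, v @ T[k+1:, :])
        T[:, k+1:] -= 2.0 * np.outer(T[:, k+1:] @ v, v)
    return T

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8))
A = A + A.T                                # make it symmetric
T = tridiagonalize(A)
```

Going directly to tridiagonal form as above costs O(n³) flops dominated by matrix-vector work; the multi-step full → banded → tridiagonal route trades some of that for blocked, matrix-matrix-rich operations.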
Automatic differentiation (AD) is a technique for automatically augmenting computer programs with statements for the computation of derivatives. This article discusses the application of automatic differentiation to numerical integration algorithms for ordinary differential equations (ODEs), in particular, the ramifications of the fact that AD is applied not only to the solution computed by such an algorithm, but to the procedure itself. This subtle issue can lead to surprising results when AD tools are applied to variable-stepsize, variable-order ODE integrators. The final time...
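The benign case can be sketched concretely: with a *fixed*-step integrator, carrying forward-mode sensitivities through the solver converges to the true derivative (the subtleties the paper discusses arise when the step-size control itself depends on the differentiated parameter). The example below, an illustrative setup, differentiates a fixed-step RK4 solve of y' = -p·y with respect to p:

```python
# Forward-mode sensitivities through a fixed-step RK4 integrator.
# The augmented system integrates y' = -p*y together with the sensitivity
# equation s' = d/dp(-p*y) = -y - p*s, where s = dy/dp.
import numpy as np

def rk4_with_sensitivity(p, T=1.0, steps=1000):
    h = T / steps
    y, s = 1.0, 0.0            # y(0) = 1, dy/dp at t=0 is 0

    def f(y, s):
        return -p * y, -y - p * s

    for _ in range(steps):
        k1y, k1s = f(y, s)
        k2y, k2s = f(y + h/2*k1y, s + h/2*k1s)
        k3y, k3s = f(y + h/2*k2y, s + h/2*k2s)
        k4y, k4s = f(y + h*k3y, s + h*k3s)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        s += h/6 * (k1s + 2*k2s + 2*k3s + k4s)
    return y, s

p, T = 0.5, 1.0
y, s = rk4_with_sensitivity(p, T)
# exact solution: y(T) = exp(-p*T), dy/dp = -T*exp(-p*T)
```

Because the step size h is independent of p here, differentiating the procedure and differentiating the solution coincide; with adaptive stepping the two can disagree, which is the paper's point.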
The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. Three approaches to computing such matrices are considered: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (Automatic Differentiation of Fortran) tool. The authors compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. The conclusion is that automatic differentiation is the method of choice, leading to results as accurate as hand-coded derivatives, while at...
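How sparsity makes difference approximations (and compressed AD) cheap can be shown with a small example. For a tridiagonal Jacobian, columns whose indices agree mod 3 never share a row, so three function evaluations recover the whole Jacobian instead of n, the classical Curtis-Powell-Reid grouping; the function `f` below is an illustrative construction:

```python
# Compressed finite-difference Jacobian via column grouping (coloring).
import numpy as np

def f(x):
    # f_i depends only on x_{i-1}, x_i, x_{i+1}, so the Jacobian is tridiagonal
    y = 3.0 * x - 2.0 * x**2
    y[:-1] -= x[1:]
    y[1:] -= 2.0 * x[:-1]
    return y

def sparse_fd_jacobian(f, x, h=1e-7):
    n = len(x)
    fx = f(x)
    J = np.zeros((n, n))
    for color in range(3):
        d = np.zeros(n)
        d[color::3] = 1.0                    # perturb one whole column group
        col_sums = (f(x + h * d) - fx) / h   # each row hit by at most one column
        for j in range(color, n, 3):
            for i in range(max(0, j - 1), min(n, j + 2)):
                J[i, j] = col_sums[i]        # un-mix using the known sparsity
    return J

x = np.linspace(0.1, 1.0, 9)
J = sparse_fd_jacobian(f, x)
# dense reference: one difference per column (n evaluations)
J_ref = np.column_stack([(f(x + 1e-7 * np.eye(9)[:, j]) - f(x)) / 1e-7
                         for j in range(9)])
```

Three evaluations versus nine here; for a banded Jacobian of bandwidth b the count is b+... independent of n, which is why exploiting structure matters for the large-scale problems the paper studies.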
This article describes a suite of codes, as well as associated testing and timing drivers, for computing rank-revealing QR (RRQR) factorizations of dense matrices. The main contribution is an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, together with improved versions of the RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang, respectively. We highlight the usage and features of these codes.
In this paper, we assess the practicability of HashSieve, a recently proposed sieving algorithm for the Shortest Vector Problem (SVP) on lattices, on multi-core shared-memory systems. To this end, we devised a parallel implementation that scales well and is based on a probable lock-free system to handle concurrency. The system, implemented with spin-locks, which in turn use CAS operations, becomes a likely lock-free mechanism, since threads block only when strictly required, and chances are that they do not block. With our implementation, we were...