- Mathematical Approximation and Integration
- Mathematical functions and polynomials
- Probabilistic and Robust Engineering Design
- Numerical Methods and Algorithms
- Machine Learning and Algorithms
- Neural Networks and Applications
- Iterative Methods for Nonlinear Equations
- Image and Signal Denoising Methods
- Numerical methods in inverse problems
- Matrix Theory and Algorithms
- Statistical and numerical algorithms
- Approximation Theory and Sequence Spaces
- Statistical Methods and Inference
- Advanced Optimization Algorithms Research
- Advanced Mathematical Modeling in Engineering
- Digital Filter Design and Implementation
- Analytic Number Theory Research
- Stochastic processes and financial applications
- Mathematical Analysis and Transform Methods
- Advanced Harmonic Analysis Research
- Computability, Logic, AI Algorithms
- Control Systems and Identification
- Sparse and Compressive Sensing Techniques
- Advanced Numerical Methods in Computational Mathematics
- Statistical Distribution Estimation and Applications
University of Warsaw
2012-2023
University of Warmia and Mazury in Olsztyn
2002
Trinity House
1996
Malaysian Society of Nephrology
1993
Friedrich-Alexander-Universität Erlangen-Nürnberg
1992
This paper studies the question of lower bounds on the number of neurons and examples necessary to program a given task into feedforward neural networks. We introduce the notion of information complexity of a network, which complements that of neural complexity. Neural complexity deals with lower bounds for the resources (numbers of neurons) needed by a network to perform a given task within a given tolerance. Information complexity measures lower bounds for the information (i.e. the number of examples) needed about the desired input–output function. We study the interaction of these two complexities, and thus the cost of building and then programming feedforward nets for given tasks. We show something...
Consider approximating functions based on a finite number of their samples. We show that adaptive algorithms are much more powerful than nonadaptive ones when dealing with piecewise smooth functions. More specifically, let $F_r^1$ be the class of scalar functions $f:[0,T]\to \mathbb {R}$ whose derivatives of order up to $r$ are continuous at any point except for one unknown singular point. We provide an adaptive algorithm $\mathcal {A}_n^\textrm {ad}$ that uses at most $n$ samples of $f$ and whose worst case $L^p$ error ($1\le p<\infty$)...
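The adaptive advantage described above comes from spending a few samples to localize the unknown singular point before approximating. Below is a minimal, purely illustrative sketch of such a localization step (it is not the paper's algorithm $\mathcal {A}_n^\textrm {ad}$; the helper name `locate_singularity`, the sweep size, and the second-difference indicator are all assumptions):

```python
import numpy as np

def locate_singularity(f, a, b, n_samples=48):
    """Illustrative sketch: adaptively bisect toward the unknown singular
    point of f on [a, b] by refining wherever a crude second-difference
    nonsmoothness indicator peaks. Not the paper's algorithm."""
    lo, hi = a, b
    for _ in range(n_samples // 6):          # 6 evaluations per sweep
        x = np.linspace(lo, hi, 6)
        y = f(x)
        d2 = np.abs(np.diff(y, 2))           # large only near the kink
        k = int(np.argmax(d2))               # subinterval containing the kink
        lo, hi = x[k], x[k + 2]              # shrink the bracket and repeat
    return 0.5 * (lo + hi)

# Usage: a function with a kink (derivative jump) at x = 0.3.
f = lambda x: np.abs(x - 0.3) + np.sin(x)
print(locate_singularity(f, 0.0, 1.0))       # close to 0.3
```

Once the singular point is bracketed, the remaining samples can be placed on the smooth pieces, which is where the gap between adaptive and nonadaptive rates originates.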
We present a novel theoretical approach to the analysis of adaptive quadratures, and of adaptive Simpson quadrature in particular, which leads to the construction of a new algorithm for automatic integration. For a given function [Formula: see text] with possible endpoint singularities, the algorithm produces an approximation to within [Formula: see text] asymptotically as [Formula: see text]. Moreover, it is optimal among all adaptive quadratures, i.e., it needs a minimal number of function evaluations to obtain a [Formula: see text]-approximation and runs in time proportional to...
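For context, here is a minimal sketch of classical recursive adaptive Simpson quadrature with the standard Richardson-extrapolation acceptance test. It is not the optimized algorithm analyzed in the abstract above; the function names, the tolerance-splitting rule, and the test integrand are assumptions.

```python
import math

def _simpson(a, fa, m, fm, b, fb):
    """Simpson rule on [a, b] with midpoint m and precomputed values."""
    return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

def adaptive_simpson(f, a, b, eps=1e-8):
    """Classical recursive adaptive Simpson quadrature (illustrative sketch)."""
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    whole = _simpson(a, fa, m, fm, b, fb)

    def recurse(a, fa, m, fm, b, fb, whole, eps):
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = _simpson(a, fa, lm, flm, m, fm)
        right = _simpson(m, fm, rm, frm, b, fb)
        # Accept the refined estimate if both halves agree with the whole.
        if abs(left + right - whole) <= 15.0 * eps:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, fa, lm, flm, m, fm, left, 0.5 * eps)
                + recurse(m, fm, rm, frm, b, fb, right, 0.5 * eps))

    return recurse(a, fa, m, fm, b, fb, whole, eps)

# Usage: sqrt has an endpoint singularity in its derivative at 0; exact value 2/3.
print(adaptive_simpson(math.sqrt, 0.0, 1.0, eps=1e-8))
```

The recursion concentrates evaluations near the endpoint singularity, which is the behavior whose cost the paper's analysis quantifies and optimizes.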
We study the uniform (Chebyshev) approximation of continuous and piecewise $r$-smooth ($r\ge2$) functions $f:[0,T]\to\mathbb{R}$ with a finite number of singular points. The algorithms use only $n$ function values at adaptively or nonadaptively chosen points. We construct a nonadaptive algorithm $\mathcal{A}_{r,n}^{\rm non}$ that, for functions with at most one singular point, enjoys the best possible convergence rate $n^{-r}$. This is in sharp contrast to results concerning discontinuous functions. For $r\ge3$, this optimal rate holds asymptotically...
We find the minimal information cost me(∊) of obtaining an ∊-approximation for a linear problem, assuming that the available information consists of noisy (perturbed) values of information functionals. Perturbations can be absolute or relative, and are assumed to be bounded, with each bound dependent on the consecutive number of the functional. We determine the optimal (up to a constant) functionals and the precisions with which they should be obtained, as well as the best algorithm. The results are applied to the problem of recovering functions in s variables with r continuous derivatives,...
We study the $\omega$-weighted $L^p$ approximation ($1\le p\le\infty$) of piecewise $r$-smooth functions $f:\mathbb{R}\to\mathbb{R}$. Approximations $\mathcal{A}_nf$ are based on $n$ values of $f$ at points that can be chosen adaptively. Assuming the weight $\omega$ is Riemann integrable on any compact interval and asymptotically decreasing, a necessary condition for the error to be of order $n^{-r}$ is that $\|\omega\|_{L^{1/\gamma}}<\infty$, where $\gamma=r+1/p$. For the class $W_r$ of globally smooth functions, this is also...
Using the Multivariate Decomposition Method (MDM), we develop an efficient algorithm for approximating the $\infty$-variate integral $$\mathcal{I}_{\infty}(f) = \lim_{d\rightarrow \infty} \int_{\mathbb{R}_{+}^{d}} f(x_{1},\ldots,x_{d},0,0,\ldots)\, \exp\left(-\sum_{j=1}^{d} x_{j}\right) \mathrm{d}\mathbf{x}$$ for a class of functions $f$ that are once differentiable with respect to each variable. MDM requires algorithms for the $d$-variate versions of the problem. Such algorithms are provided by Smolyak's...
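Because the weight $\exp\left(-\sum_{j} x_{j}\right)$ on $\mathbb{R}_{+}^{d}$ is exactly the Gauss-Laguerre weight, a naive $d$-variate building block can be sketched with a tensor-product Gauss-Laguerre rule. This is only an illustration of the $d$-variate subproblem, not the MDM or Smolyak construction referred to above (whose point is precisely to avoid the $n^d$ cost below); the helper name `gauss_laguerre_tensor` and the test integrand are assumptions.

```python
import itertools
import numpy as np
from numpy.polynomial.laguerre import laggauss

def gauss_laguerre_tensor(g, d, n=10):
    """Tensor-product Gauss-Laguerre rule for
        int_{R_+^d} g(x) * exp(-sum_j x_j) dx.
    Illustrative d-variate building block only; cost grows like n**d."""
    nodes, weights = laggauss(n)                 # 1-D rule with weight exp(-x)
    total = 0.0
    for idx in itertools.product(range(n), repeat=d):
        x = nodes[list(idx)]                     # d-dimensional node
        w = np.prod(weights[list(idx)])          # product weight
        total += w * g(x)
    return total

# Usage: a d = 3 truncation with the product integrand g(x) = prod_j 1/(1 + x_j).
print(gauss_laguerre_tensor(lambda x: np.prod(1.0 / (1.0 + x)), d=3, n=12))
```

Constructions such as Smolyak's combine low-degree versions of these tensor rules across coordinates so that the exponential dependence on $d$ is avoided.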