Lukas Mauch

ORCID: 0000-0001-9212-899X
Research Areas
  • Advanced Neural Network Applications
  • Domain Adaptation and Few-Shot Learning
  • Neural Networks and Applications
  • Advanced Image and Video Retrieval Techniques
  • Machine Learning and Data Classification
  • Adversarial Robustness in Machine Learning
  • Natural Language Processing Techniques
  • Multimodal Machine Learning Applications
  • Advanced MRI Techniques and Applications
  • Medical Imaging Techniques and Applications
  • Model Reduction and Neural Networks
  • Smart Grid Energy Management
  • Power Line Communications and Noise
  • Radiomics and Machine Learning in Medical Imaging
  • Machine Learning and Algorithms
  • Gaussian Processes and Bayesian Inference
  • Image and Signal Denoising Methods
  • IoT-based Smart Home Systems
  • Generative Adversarial Networks and Image Synthesis
  • Topic Modeling
  • Advanced Multi-Objective Optimization Algorithms
  • Advanced Electrical Measurement Techniques
  • Healthcare Technology and Patient Monitoring
  • Human Pose and Action Recognition
  • Industrial Vision Systems and Defect Detection

Stuttgart University of Applied Sciences
2024

University of Stuttgart
2012-2020

Signal Processing (United States)
2018

Princeton University
2018

This paper presents a new approach for supervised power disaggregation using a deep recurrent long short-term memory (LSTM) network. It is useful to extract the signal of one dominant appliance or of any subcircuit from the aggregate signal. To train the network, a measurement of the target signal in addition to the total signal during the same time period is required. The method is supervised, but less restrictive in practice, since submetering of an important appliance is feasible. The main advantages of this approach are: a) it is also applicable to variable loads and not restricted to on-off...

10.1109/globalsip.2015.7418157 article EN 2015-12-01
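A minimal sketch of the kind of recurrent disaggregation model the abstract describes, written in PyTorch; the layer sizes, window length, and training step below are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch only: LSTM regressor mapping an aggregate power window to the
# target appliance's power trace (hidden size and depth are assumptions).
import torch
import torch.nn as nn

class DisaggregationLSTM(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, aggregate):           # (batch, time, 1)
        h, _ = self.lstm(aggregate)
        return self.head(h)                 # (batch, time, 1) target trace

model = DisaggregationLSTM()
aggregate = torch.randn(8, 512, 1)          # synthetic aggregate windows
target = torch.randn(8, 512, 1)             # synthetic submetered target
loss = nn.functional.mse_loss(model(aggregate), target)
loss.backward()
```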

Nowadays, renewable energies play an important role in covering the increasing power demand in accordance with environmental protection. Solar energy, produced by large solar farms, is a fast-growing technology offering an environmentally friendly power supply. However, its efficiency suffers from cell defects occurring during the operational life or caused by incidents. These can be made visible using electroluminescence (EL) imaging. A manual classification of these EL images is very time- and cost-demanding and prone...

10.23919/eusipco.2018.8553025 article EN 2018 26th European Signal Processing Conference (EUSIPCO) 2018-09-01

Efficient deep neural network (DNN) inference on mobile or embedded devices typically involves quantization of the network parameters and activations. In particular, mixed precision networks achieve better performance than networks with a homogeneous bitwidth under the same size constraint. Since choosing the optimal bitwidths is not straightforward, training methods which can learn them are desirable. Differentiable quantization with straight-through gradients allows learning the quantizer's parameters using gradient methods. We show that a suited...

10.48550/arxiv.1905.11452 preprint EN other-oa arXiv (Cornell University) 2019-01-01
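A minimal sketch of learning a quantizer with straight-through gradients, the mechanism the abstract refers to; the fixed bitwidth and scalar step size are simplifying assumptions, not the parametrization studied in the preprint.

```python
# Sketch only: uniform quantizer with a learnable step size, trained
# through the rounding operation via a straight-through estimator.
import torch
import torch.nn as nn

def ste_round(x):
    # round in the forward pass, identity gradient in the backward pass
    return x + (torch.round(x) - x).detach()

class LearnedStepQuantizer(nn.Module):
    def __init__(self, bits=4):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.05))   # learnable step size
        self.qmax = 2 ** (bits - 1) - 1

    def forward(self, w):
        q = torch.clamp(ste_round(w / self.step), -self.qmax, self.qmax)
        return q * self.step

quant = LearnedStepQuantizer(bits=4)
w = torch.randn(256, 256, requires_grad=True)
loss = quant(w).pow(2).mean()
loss.backward()   # gradients flow to both the weights and the step size
```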

This paper presents a new supervised approach to extract the power trace of individual loads from single-channel aggregate signals in non-intrusive load monitoring (NILM) systems. Recent approaches to this source separation problem are based on factorial hidden Markov models (FHMM). Drawbacks are the needed knowledge of an HMM for all loads, which is infeasible for large buildings, and the combinatorial complexity. Our approach trains a model with two emission probabilities, one for the load to be extracted and one for the aggregate signal. A Gaussian distribution is used...

10.1109/icassp.2016.7472104 article EN 2016-03-01

We systematically investigate multi-token prediction (MTP) capabilities within LLMs pre-trained for next-token prediction (NTP). We first show that such models inherently possess MTP capabilities via numerical marginalization over intermediate token probabilities, though performance is data-dependent and improves with model scale. Furthermore, we explore the challenges of integrating MTP heads into frozen LLMs and find that their hidden layers are strongly specialized for NTP, making adaptation non-trivial. Finally, while joint training...

10.48550/arxiv.2502.09419 preprint EN arXiv (Cornell University) 2025-02-13
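A toy sketch of the marginalization idea mentioned in the abstract: a next-token model is queried repeatedly and intermediate tokens are summed out to approximate the distribution two steps ahead. The dummy model, vocabulary size, and top-k truncation are illustrative assumptions.

```python
# Sketch only: p(x_{t+2} | ctx) ≈ sum_v p(v | ctx) * p(x_{t+2} | ctx + v)
import torch

def two_step_probs(next_token_probs, context, vocab_size, top_k=16):
    p1 = next_token_probs(context)                     # (vocab,)
    probs, cands = torch.topk(p1, top_k)               # truncate the sum
    out = torch.zeros(vocab_size)
    for p_v, v in zip(probs, cands):
        p2 = next_token_probs(context + [int(v)])      # (vocab,)
        out += p_v * p2
    return out / out.sum()

# dummy next-token model: uniform distribution over a small vocabulary
vocab = 32
model = lambda ctx: torch.full((vocab,), 1.0 / vocab)
print(two_step_probs(model, [1, 2, 3], vocab)[:5])
```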

The problem of identifying end-use electrical appliances from their individual consumption profiles, known as the appliance identification problem, is a primary stage in both Non-Intrusive Load Monitoring (NILM) and automated plug-wise metering. Therefore, it has received dedicated studies with various electric signatures, classification models, and evaluation datasets. In this paper, we propose a neural network ensembles approach to address the problem using high-resolution measurements. The models are trained on...

10.48550/arxiv.1802.06963 preprint EN other-oa arXiv (Cornell University) 2018-01-01
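A minimal sketch of an ensemble of classifiers voting by averaged probabilities, the general idea behind the approach; the member architectures and input dimensionality are illustrative assumptions, not the models evaluated in the preprint.

```python
# Sketch only: several independently initialized classifiers vote on the
# appliance class by averaging their predicted probabilities.
import torch
import torch.nn as nn

members = [nn.Sequential(nn.Linear(400, 64), nn.ReLU(), nn.Linear(64, 10))
           for _ in range(5)]

def ensemble_predict(x):
    probs = torch.stack([m(x).softmax(dim=-1) for m in members])
    return probs.mean(dim=0).argmax(dim=-1)

x = torch.randn(8, 400)      # stand-in for high-resolution signature features
print(ensemble_predict(x))
```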

Operating deep neural networks (DNNs) on devices with limited resources requires the reduction of their memory as well as computational footprint. Popular reduction methods are network quantization or pruning, which either reduce the word length of the network parameters or remove weights from the network if they are not needed. In this article we discuss a general framework for network reduction which we call Look-Up Table Quantization (LUT-Q). For each layer, we learn a value dictionary and an assignment matrix to represent the network weights. We propose a special solver which combines...

10.1109/jstsp.2020.3005030 article EN IEEE Journal of Selected Topics in Signal Processing 2020-05-01
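A minimal sketch of the look-up-table idea: weights are represented by a small value dictionary plus an assignment matrix. Fitting the dictionary with a few k-means-style iterations is a simplification of the joint solver described in the article.

```python
# Sketch only: quantize a weight matrix to k dictionary values and an
# assignment matrix, reconstructing the weights by table look-up.
import numpy as np

def lut_quantize(w, k=16, iters=10):
    flat = w.reshape(-1)
    dictionary = np.linspace(flat.min(), flat.max(), k)   # initial values
    for _ in range(iters):
        assign = np.argmin(np.abs(flat[:, None] - dictionary[None, :]), axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                dictionary[j] = members.mean()
    return dictionary, assign.reshape(w.shape)

w = np.random.randn(64, 64).astype(np.float32)
d, a = lut_quantize(w)
w_hat = d[a]          # reconstructed weights from dictionary + assignments
print(np.mean((w - w_hat) ** 2))
```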

Deep neural networks (DNN) are powerful models for many pattern recognition tasks, yet they tend to have many layers and neurons, resulting in a high computational complexity. This limits their application to high-performance computing platforms. In order to evaluate a trained DNN on a lower-performance platform like a mobile or embedded device, model reduction techniques which shrink the network size and reduce the number of parameters without considerable performance degradation are highly desirable. In this paper, we...

10.1109/icassp.2017.7952583 article EN 2017-03-01

In recent years, the rapid evolution of computer vision has seen the emergence of various foundation models, each tailored to specific data types and tasks. In this study, we explore the adaptation of these models for few-shot semantic segmentation. Specifically, we conduct a comprehensive comparative analysis of four prominent foundation models: DINO V2, Segment Anything, CLIP, Masked AutoEncoders, and of a straightforward ResNet50 pre-trained on the COCO dataset. We also include 5 adaptation methods, ranging from linear probing to fine tuning. Our...

10.48550/arxiv.2401.11311 preprint EN cc-by-nc-nd arXiv (Cornell University) 2024-01-01
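A minimal sketch of linear probing, the simplest of the adaptation methods compared in the study: frozen foundation-model features are classified per pixel by a single learned layer. The feature extractor is replaced here by random tensors, and the feature dimension is an assumption.

```python
# Sketch only: per-pixel linear probe on top of frozen backbone features.
import torch
import torch.nn as nn

feat_dim, n_classes = 384, 5
probe = nn.Conv2d(feat_dim, n_classes, kernel_size=1)   # per-pixel linear head

features = torch.randn(2, feat_dim, 32, 32)   # stand-in for frozen features
labels = torch.randint(0, n_classes, (2, 32, 32))
loss = nn.functional.cross_entropy(probe(features), labels)
loss.backward()
```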

In this paper we examine the use of deep convolutional neural networks for semantic image segmentation, which separates an input image into multiple regions corresponding to predefined object classes. We use an encoder-decoder structure and aim to improve it in convergence speed and segmentation accuracy by adding shortcuts between network layers. Besides, we investigate how to extend an already trained model to other, new classes and propose a strategy for class extension with only little training data and labels. Experiments on two street scene...

10.1109/ipta.2016.7821005 article EN 2016-12-01
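A minimal sketch of the shortcut idea: encoder features are added back into the decoder at the matching resolution, which typically helps convergence. The depth and channel counts are illustrative assumptions, not the architecture from the paper.

```python
# Sketch only: tiny encoder-decoder with a shortcut from encoder to decoder.
import torch
import torch.nn as nn

class TinyEncDec(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Conv2d(16, n_classes, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.mid(self.down(e)))
        return self.dec(d + e)          # shortcut from encoder to decoder

print(TinyEncDec()(torch.randn(1, 3, 64, 64)).shape)
```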

Deep neural networks (DNN) achieve very good performance in many machine learning tasks, but are computationally demanding. Hence, there is a growing interest in model reduction methods for DNN. Model reduction allows reducing the number of computations needed to evaluate a trained DNN without significant performance degradation. In this paper, we study layerwise methods that reduce each layer independently. We consider pruning and low-rank approximation as methods for reduction. Up to now, often a constant reduction factor is used for all layers. We show...

10.1109/icassp.2017.7952549 article EN 2017-03-01
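A minimal sketch of layerwise low-rank approximation via truncated SVD; using a different rank per layer corresponds to the non-constant reduction factors the paper argues for, and the rank shown here is an illustrative assumption.

```python
# Sketch only: replace a fully connected layer W by two thinner layers
# obtained from a truncated singular value decomposition.
import numpy as np

def low_rank_factorize(W, rank):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]          # (out, rank)
    B = Vt[:rank, :]                    # (rank, in)
    return A, B

W = np.random.randn(512, 1024)
A, B = low_rank_factorize(W, rank=64)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))   # relative error
```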

The estimation of the generalization error of classifiers often relies on a validation set. Such a set is hardly available in few-shot learning scenarios, a highly disregarded shortcoming in the field. In these scenarios, it is common to rely on features extracted from pre-trained neural networks combined with distance-based classifiers such as nearest class mean. In this work, we introduce a Gaussian model of the feature distribution. By estimating the parameters of this model, we are able to predict the generalization error on new classification tasks with few samples. We observe that accurate...

10.23919/eusipco58844.2023.10289951 article EN 2023-09-04
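A minimal sketch of the distance-based few-shot baseline the abstract starts from: class means are estimated from a handful of support features and queries are assigned to the nearest mean. Feature dimension and shot counts are illustrative assumptions.

```python
# Sketch only: nearest-class-mean classification from few support features.
import numpy as np

def nearest_class_mean(support, labels, queries):
    classes = np.unique(labels)
    means = np.stack([support[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(queries[:, None, :] - means[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
support = rng.normal(size=(10, 64))      # 5 shots for each of 2 classes
labels = np.array([0] * 5 + [1] * 5)
queries = rng.normal(size=(4, 64))
print(nearest_class_mean(support, labels, queries))
```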

Magnetic resonance (MR) imaging plays an important role in medical imaging. It can be flexibly tuned towards different applications for deriving a meaningful diagnosis. However, its long acquisition times and flexible parametrization make it, on the other hand, prone to artifacts which obscure the underlying image content or can be misinterpreted as anatomy. Patient-induced motion artifacts are still one of the major extrinsic factors that degrade image quality. In this work, an automatic reference-free artifact detection, including...

10.1109/icassp.2018.8462414 article EN 2018-04-01

The deep convolutional neural network (CNN) has recently shown state-of-the-art performance in many image processing tasks. We examine the use of CNNs for semantic image segmentation, which separates an input image into multiple regions corresponding to predefined object classes. We follow the most successful CNN-based segmentation approaches of recent years and focus on a study of their contextual aspects. To assess context-awareness, we manually modify the context of images and observe the effects on the segmentation results. Experiments through systematic changes show that the model is...

10.1117/1.jei.27.5.051223 article EN Journal of Electronic Imaging 2018-05-14

In this paper, we propose a new layerwise pruning method to reduce the number of computations needed to evaluate convolutional neural networks (CNN) after training. This least-squares (LS) based method improves over state-of-the-art methods as it solves both problems, how to select the feature maps to be pruned and how to adapt the remaining parameters in the kernel tensor to compensate the introduced errors, jointly. Therefore, our method utilizes the correlations between the inputs and the structure of the kernel tensor. In experiments, we show that high reduction rates with...

10.1109/ssp.2018.8450814 article EN 2018-06-01
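A minimal sketch of least-squares channel pruning for a single linear layer: after dropping some inputs, the remaining weights are re-fitted so the layer output stays close to the original on sample data. The selection rule used here (smallest weight norm) is a simplification of the joint criterion in the paper.

```python
# Sketch only: prune input channels and re-fit the remaining weights by
# least squares to compensate the introduced error.
import numpy as np

def ls_prune(W, X, keep):
    """W: (out, in), X: (samples, in), keep: indices of retained inputs."""
    Y = X @ W.T                                      # original layer outputs
    W_new, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)
    return W_new.T                                   # (out, len(keep))

rng = np.random.default_rng(0)
W, X = rng.normal(size=(32, 64)), rng.normal(size=(512, 64))
keep = np.argsort(np.linalg.norm(W, axis=0))[16:]    # drop 16 weakest inputs
W_pruned = ls_prune(W, X, keep)
err = np.linalg.norm(X @ W.T - X[:, keep] @ W_pruned.T) / np.linalg.norm(X @ W.T)
print(err)
```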

In clinical diagnostics, magnetic resonance imaging (MRI) is a valuable and versatile tool. The acquisition process is, however, susceptible to image distortions (artifacts), which may lead to a degradation of image quality. Automated reference-free localization and quantification of artifacts by employing convolutional neural networks (CNNs) is a promising way for early detection of artifacts. Training relies on a high amount of expert-labeled data, a time-demanding process. Previous studies were based on global labels, i.e....

10.23919/apsipa.2018.8659515 article EN 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) 2018-11-01

Machine learning models are advancing circuit design, particularly in analog circuits. They typically generate netlists that lack human interpretability. This is a problem as human designers heavily rely on the interpretability of circuit diagrams or schematics to intuitively understand, troubleshoot, and develop designs. Hence, to integrate domain knowledge effectively, it is crucial to translate ML-generated netlists into interpretable schematics quickly and accurately. We propose Schemato, a large language model (LLM) for...

10.48550/arxiv.2411.13899 preprint EN arXiv (Cornell University) 2024-11-21

We consider the problem of zero-shot one-class visual classification. In this setting, only the label of the target class is available, and the goal is to discriminate between positive and negative query samples without requiring any validation example from the target task. We propose a two-step solution that first queries large language models for visually confusing objects and then relies on vision-language pre-trained models (e.g., CLIP) to perform the classification. By adapting large-scale vision benchmarks, we demonstrate the ability of the proposed method...

10.48550/arxiv.2404.00675 preprint EN arXiv (Cornell University) 2024-03-31
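A minimal sketch of the two-step idea: a language model would supply visually confusing negative labels, and a vision-language encoder scores a query against the target class versus those confusers. The embed_image/embed_text functions below are placeholders standing in for CLIP-style encoders, not a real API.

```python
# Sketch only: score a query image against the target label vs. confusers
# using cosine similarity of (placeholder) joint embeddings.
import numpy as np

def one_class_score(image, target_label, confusers, embed_image, embed_text):
    q = embed_image(image)
    texts = [target_label] + confusers
    t = np.stack([embed_text(s) for s in texts])
    sims = t @ q / (np.linalg.norm(t, axis=1) * np.linalg.norm(q) + 1e-8)
    probs = np.exp(sims) / np.exp(sims).sum()
    return probs[0]            # probability mass assigned to the target class

# toy stand-ins for real encoders
rng = np.random.default_rng(0)
embed = lambda _x: rng.normal(size=128)
print(one_class_score("query.jpg", "a photo of a dalmatian",
                      ["a photo of a cow", "a photo of a spotted cat"],
                      embed, embed))
```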

Handling distribution shifts from training data, known as out-of-distribution (OOD) generalization, poses a significant challenge in the field of machine learning. While a pre-trained vision-language model like CLIP has demonstrated remarkable zero-shot performance, further adaptation of the model to downstream tasks leads to undesirable degradation for OOD data. In this work, we introduce Sparse Adaptation Fine-Tuning (SAFT), a method that prevents fine-tuning from forgetting the general knowledge in the pre-trained model. SAFT only...

10.48550/arxiv.2407.03036 preprint EN arXiv (Cornell University) 2024-07-03
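A minimal sketch of sparse fine-tuning in the spirit the abstract describes: only a small fraction of the pre-trained weights, chosen here by gradient magnitude on a calibration batch (an assumption), are allowed to change while the rest stay frozen.

```python
# Sketch only: build binary masks from gradient magnitudes, then fine-tune
# with masked gradient updates so most weights remain unchanged.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                       # stand-in for a pre-trained model
x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))

# score parameters on a calibration batch
nn.functional.cross_entropy(model(x), y).backward()
masks = {}
for name, p in model.named_parameters():
    k = max(1, int(0.02 * p.numel()))            # keep ~2% of entries trainable
    thresh = p.grad.abs().flatten().topk(k).values.min()
    masks[name] = (p.grad.abs() >= thresh).float()
    p.grad = None

# fine-tune with masked updates
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(5):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    for name, p in model.named_parameters():
        p.grad *= masks[name]
    opt.step()
```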