Ludmila Cherkasova

ORCID: 0000-0002-9333-4901
Research Areas
  • Cloud Computing and Resource Management
  • Software System Performance and Reliability
  • Caching and Content Delivery
  • Peer-to-Peer Network Technologies
  • Advanced Data Storage Technologies
  • Distributed and Parallel Computing Systems
  • Parallel Computing and Optimization Techniques
  • Network Traffic and Congestion Control
  • Distributed Systems and Fault Tolerance
  • IoT and Edge/Fog Computing
  • Petri Nets in System Modeling
  • Formal Methods in Verification
  • Software-Defined Networks and 5G
  • Advanced Queuing Theory Analysis
  • Blockchain Technology Applications and Security
  • Business Process Modeling and Analysis
  • Interconnection Networks and Systems
  • Embedded Systems Design Techniques
  • Graph Theory and Algorithms
  • Service-Oriented Architecture and Web Services
  • Advanced Database Systems and Queries
  • Scheduling and Optimization Algorithms
  • Age of Information Optimization
  • Logic, Programming, and Type Systems
  • Real-Time Systems Scheduling

American Rock Mechanics Association
2018-2023

TTTech Computertechnik (Austria)
2022

TechLab (United States)
2022

Siemens (Germany)
2022

AT&T (United States)
2022

Orange (France)
2021

University of California, San Diego
2021

Université de Lorraine
2021

Centre Inria de l'Université de Lorraine
2021

Institut national de recherche en informatique et en automatique
2021

MapReduce and Hadoop represent an economically compelling alternative for efficient large-scale data processing and advanced analytics in the enterprise. A key challenge in shared clusters is the ability to automatically tailor and control resource allocations to different applications so that they achieve their performance goals. Currently, there is no job scheduler for these environments that, given a completion deadline, could allocate the appropriate amount of resources so that the job meets the required Service Level Objective (SLO). In this...

10.1145/1998582.1998637 article EN 2011-06-14
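The deadline-driven allocation idea above can be illustrated with a toy estimator. This is not the paper's model, just a minimal sketch under the assumption that a job is summarized by its independent map-task durations and that tasks are packed greedily onto identical slots:

```python
def min_slots_for_deadline(task_durations, deadline):
    """Smallest slot count s such that greedy (longest-first) packing of
    the tasks onto s identical slots finishes within the deadline.
    Illustrative brute force, not the paper's SLO estimator."""
    tasks = sorted(task_durations, reverse=True)
    for s in range(1, len(tasks) + 1):
        finish = [0.0] * s                     # per-slot busy time
        for d in tasks:
            i = finish.index(min(finish))      # place on least-loaded slot
            finish[i] += d
        if max(finish) <= deadline:
            return s
    return None  # even one slot per task misses the deadline

# 8 map tasks of 10s each and a 40s deadline fit in 4 waves on 2 slots
print(min_slots_for_deadline([10] * 8, 40))  # -> 2
```

Tightening the deadline to 20s forces 4 slots (2 waves), showing how the required allocation grows as the SLO shrinks.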

10.5555/1515984.1516011 article EN ACM/IFIP/USENIX international conference on Middleware 2006-11-01

The primary motivation for enterprises to adopt virtualization technologies is to create a more agile and dynamic IT infrastructure -- with server consolidation, high resource utilization, and the ability to quickly add and adjust capacity on demand -- while lowering total cost of ownership and responding effectively to changing business conditions. However, effective management of virtualized environments introduces new and unique requirements, such as dynamically resizing and migrating virtual machines (VMs) in response...

10.1145/1330555.1330556 article EN ACM SIGMETRICS Performance Evaluation Review 2007-09-01

Advances in virtualization technology are enabling the creation of resource pools of servers that permit multiple application workloads to share each server in the pool. Understanding the nature of enterprise workloads is crucial to properly designing and provisioning current and future services in such pools. This paper considers the issues of workload analysis, performance modeling, and capacity planning. Our goal is to automate the efficient use of resource pools when hosting large numbers of services. We use a trace-based approach for capacity management that relies on i)...

10.1109/iiswc.2007.4362193 article EN 2007-09-01

We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of a client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured in the number of completed sessions compared against the throughput measured in requests per second. Moreover, statistical analysis reveals that an overloaded server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones which companies want to...

10.1109/tc.2002.1009151 article EN IEEE Transactions on Computers 2002-06-01
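The bias against longer sessions has a simple back-of-the-envelope explanation: if each individual request independently succeeds with probability p under overload, a session completes only when every one of its requests succeeds, so its completion probability decays geometrically with length. A minimal sketch (the 95% figure is an assumed illustration, not a number from the paper):

```python
def session_completion_rate(session_len, request_success_prob):
    """A session completes only if every request in it succeeds, so the
    completion probability decays geometrically with session length."""
    return request_success_prob ** session_len

p = 0.95  # assumed: the overloaded server serves 95% of individual requests
for n in (1, 5, 20):
    print(n, round(session_completion_rate(n, p), 3))
# 1-request sessions complete 95% of the time; 20-request sessions only ~36%
```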

The multi-tier implementation paradigm has become the industry standard for developing scalable client-server enterprise applications. Since these applications are performance sensitive, effective models for dynamic resource provisioning and for delivering quality of service to these applications are critical. Workloads in such environments are characterized by client sessions of interdependent requests with a changing transaction mix and load over time, making model adaptivity to observed workload changes a critical requirement for model effectiveness. In...

10.1109/icac.2007.1 article EN 2007-06-01
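One common way to model a changing transaction mix (not necessarily the paper's exact method) is to fit a per-transaction-type resource cost by least squares over monitoring windows, so that sum_i c_i * n_i approximates the measured CPU in each window. A self-contained sketch for two transaction types, solving the normal equations directly:

```python
def estimate_tx_costs(mix, cpu):
    """Least-squares fit of per-transaction-type CPU costs (c1, c2) from
    monitoring windows: mix holds (n1, n2) transaction counts per window,
    cpu holds the measured CPU per window."""
    # Normal equations (A^T A) c = A^T b for a 2-column design matrix.
    s11 = sum(n1 * n1 for n1, n2 in mix)
    s12 = sum(n1 * n2 for n1, n2 in mix)
    s22 = sum(n2 * n2 for n1, n2 in mix)
    b1 = sum(n1 * y for (n1, n2), y in zip(mix, cpu))
    b2 = sum(n2 * y for (n1, n2), y in zip(mix, cpu))
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

mix = [(10, 4), (3, 6), (8, 2)]               # per-window transaction counts
cpu = [10*2 + 4*5, 3*2 + 6*5, 8*2 + 2*5]      # synthetic data: costs 2 and 5
print(estimate_tx_costs(mix, cpu))            # recovers (2.0, 5.0)
```

Refitting the costs over a sliding window of recent observations is what gives such a model its adaptivity to workload changes.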

The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic, and server loads. Current proxy caches utilize simple replacement policies to determine which files to retain in the cache. We use a trace of client requests to a busy proxy in an ISP environment to evaluate the performance of several existing replacement policies and of two new, parameterless policies that we introduce in this paper. Finally, we introduce Virtual Caches, an approach for improving cache performance on multiple metrics simultaneously.

10.1145/346000.346003 article EN ACM SIGMETRICS Performance Evaluation Review 2000-03-01
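The snippet does not reproduce the paper's two parameterless policies, but the trace-driven evaluation methodology it describes is easy to sketch: replay (url, size) requests against a simulated cache under a given replacement policy and report the hit rate. A minimal size-aware LRU baseline:

```python
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Trace-driven evaluation of a size-aware LRU proxy cache.
    trace: iterable of (url, size) pairs; returns the request hit rate."""
    cache, used, hits = OrderedDict(), 0, 0
    for url, size in trace:
        if url in cache:
            hits += 1
            cache.move_to_end(url)             # refresh recency
        elif size <= capacity:
            while used + size > capacity:      # evict least recently used
                _, old_size = cache.popitem(last=False)
                used -= old_size
            cache[url] = size
            used += size
    return hits / len(trace)

trace = [("a", 4), ("b", 4), ("a", 4), ("c", 4), ("b", 4), ("a", 4)]
print(round(lru_hit_rate(trace, 8), 3))
```

Swapping in a different eviction rule (e.g. smallest-value-first under a cost/size metric) and rerunning the same trace is how competing policies are compared on equal footing.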

10.5555/1496950.1496973 article EN ACM/IFIP/USENIX international conference on Middleware 2008-12-01

Recent advances in hardware and software virtualization offer unprecedented management capabilities for the mapping of virtual resources to physical resources. It is highly desirable to further create a "service hosting abstraction" that allows application owners to focus on the service level objectives (SLOs) of their applications. This calls for a resource management solution that achieves the SLOs of many applications in response to changing data center conditions and hides the complexity from both application owners and data center operators. In this paper, we describe an...

10.1109/icac.2008.32 article EN International Conference on Autonomic Computing 2008-06-01

The consolidation of multiple servers and their workloads aims to minimize the number of servers needed, thereby enabling the efficient use of server and power resources. At the same time, applications participating in consolidation scenarios often have specific quality-of-service requirements that need to be supported. To evaluate which workloads can be consolidated, we employ a trace-based approach that determines a near-optimal workload placement that provides the required qualities of service. However, the chosen placement is based on past demands and may not perfectly predict future demands....

10.1109/dsn.2008.4630101 article EN 2008-01-01
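Workload placement for consolidation is essentially a bin-packing problem: pack workload demands onto as few servers as possible without exceeding capacity. The paper's near-optimal trace-based placement is more sophisticated, but a standard first-fit-decreasing heuristic over peak demands (in percent of one server) conveys the idea:

```python
def place_workloads(demands, server_capacity):
    """First-fit-decreasing placement of workload peak demands onto
    identical servers; returns a list of per-server workload lists."""
    servers, loads = [], []
    for name, d in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, load in enumerate(loads):
            if load + d <= server_capacity:    # fits on an existing server
                servers[i].append(name)
                loads[i] += d
                break
        else:                                  # no server fits: open a new one
            servers.append([name])
            loads.append(d)
    return servers

# Hypothetical peak demands in percent of one server's capacity
demands = {"w1": 60, "w2": 50, "w3": 40, "w4": 30, "w5": 20}
print(place_workloads(demands, 100))  # -> [['w1', 'w3'], ['w2', 'w4', 'w5']]
```

Packing against peak demand is conservative; using traces of time-varying demand (as the paper does) lets complementary workloads share a server more aggressively.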

Understanding the nature of media server workloads is crucial to properly designing and provisioning current and future services. The main issue we address in this paper is the workload analysis of today's enterprise media servers. This analysis aims to establish a set of properties specific to media server workloads and compare them to well-known related observations about web server workloads. We partition the properties into two groups: static and temporal. While the static properties provide more traditional, general characteristics of the underlying fileset and quantitative client accesses to those files (independent...

10.1109/tnet.2004.836125 article EN IEEE/ACM Transactions on Networking 2004-10-01

Automated tools for understanding application behavior and its changes during the application life-cycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of an enterprise-wide application can affect a large population of customers, lead to delayed projects, and ultimately result in company financial loss. We believe that online performance modeling should be a part of routine application monitoring. Early, informative warnings of significant changes can help...

10.1109/dsn.2008.4630116 article EN 2008-01-01

Advances in server, network, and storage virtualization are enabling the creation of resource pools of servers that permit multiple application workloads to share each server in the pool. This paper proposes and evaluates aspects of a capacity management process for automating the efficient use of such pools when hosting large numbers of services. We use a trace-based approach that relies on i) a definition of required capacity, ii) a characterization of workload demand patterns, iii) the generation of synthetic workloads that predict future demands, and iv) the placement...

10.1109/icws.2007.62 article EN 2007-07-01

Large-scale MapReduce clusters that routinely process petabytes of unstructured and semi-structured data represent a new entity in the changing landscape of clouds. A key challenge is to increase the utilization of these clusters. In this work, we consider a subset of the production workload that consists of jobs with no dependencies. We observe that the order in which these jobs are executed can have a significant impact on their overall completion time and the cluster's resource utilization. Our goal is to automate the design of a job schedule that minimizes the completion time (makespan)...

10.1109/mascots.2012.12 article EN 2012-08-01
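A classic way to order two-stage jobs to minimize makespan is Johnson's rule for the two-machine flow shop; the snippet above does not show whether this is the paper's exact scheduler, so treat the following as a generic illustration under the assumption that each job is summarized by a (map-stage, reduce-stage) duration pair:

```python
def johnson_order(jobs):
    """Johnson's rule: jobs whose first (map) stage is no longer than their
    second (reduce) stage go first, in ascending map time; the rest go
    last, in descending reduce time. Optimal for two machines."""
    first = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]),
                  key=lambda j: -jobs[j][1])
    return first + last

def makespan(order, jobs):
    """Completion time when each job's reduce stage starts only after its
    map stage ends and the reduce machine is free, in the given order."""
    t1 = t2 = 0
    for j in order:
        m, r = jobs[j]
        t1 += m                 # map stage finishes
        t2 = max(t2, t1) + r    # reduce waits for job and machine
    return t2

jobs = {"j1": (3, 7), "j2": (6, 2), "j3": (4, 5)}   # hypothetical durations
order = johnson_order(jobs)
print(order, makespan(order, jobs))  # -> ['j1', 'j3', 'j2'] 17
```

For comparison, the reverse-style order j2, j1, j3 on the same jobs yields a makespan of 21, illustrating how much job ordering alone can move the completion time.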

Next-generation non-volatile memory (NVM) technologies, such as phase-change memory and memristors, can enable computer systems infrastructure to continue keeping up with the voracious appetite of data-centric applications for large, cheap, and fast storage. Persistent memory has emerged as a promising approach to accessing emerging byte-addressable NVM through processor load/store instructions. Due to the lack of commercially available NVM, system software researchers have mainly relied on emulation to model persistent...

10.1145/2814576.2814806 article EN 2015-11-24

Hadoop, and the associated MapReduce paradigm, has become the de facto platform for cost-effective analytics over "Big Data". There is an increasing number of applications with live business intelligence requirements that require completion time guarantees. In this work, we introduce and analyze a set of complementary mechanisms that enhance workload management decisions for processing jobs with deadlines. The three mechanisms we consider are the following: 1) a policy for job ordering in the processing queue; 2) a mechanism for allocating a tailored number of map and reduce slots to each...

10.1109/noms.2012.6212006 article EN 2012-04-01

Many companies are starting to use Hadoop for advanced data analytics over large datasets. While a traditional Hadoop deployment assumes a homogeneous cluster, many enterprise clusters are grown incrementally over time and might have a variety of different servers in the cluster. The nodes' heterogeneity represents an additional challenge for efficient job management. Due to resource heterogeneity, it is often unclear which resources introduce inefficiency and bottlenecks, and how such a cluster should be configured and optimized....

10.1109/cloud.2013.107 article EN 2013-06-01