- Caching and Content Delivery
- Peer-to-Peer Network Technologies
- Advanced Data Storage Technologies
- Parallel Computing and Optimization Techniques
- Cloud Computing and Resource Management
- Green IT and Sustainability
- Web Data Mining and Analysis
- Interconnection Networks and Systems
- Recommender Systems and Techniques
- Distributed Systems and Fault Tolerance
- Web Visibility and Informetrics
- Digital Marketing and Social Media
- Network Traffic and Congestion Control
- Opportunistic and Delay-Tolerant Networks
- Image and Video Quality Assessment
- Experimental Learning in Engineering
- Distributed and Parallel Computing Systems
- Embedded Systems Design Techniques
- Software System Performance and Reliability
- Real-Time Systems Scheduling
- Geography and Education Methods
- ICT Impact and Policies
- Geographic Information Systems Studies
- IoT and Edge/Fog Computing
- Age of Information Optimization
Universitat Politècnica de València
2010-2022
Universitat de València
2007
Natural History Museum
1995
Ateneo de Davao University
1995
Every dazzling announcement of a new smartphone or trendy digital device is the prelude to more tons of electronic waste (e-waste) being produced. This e-waste, or scrap, is often improperly added to common garbage rather than separated into suitable containers that facilitate the recovery of toxic materials and valuable metals. We are beginning to become aware of the problems e-waste can generate for our health and environment. However, most of us are still not motivated enough to take an active part in reversing this situation. The aim...
Web prefetching is a technique that has been researched for years to reduce the latency perceived by users. For this purpose, several architectures have been used, but no comparative study has been performed to identify the best architecture for dealing with prefetching. This paper analyzes the impact of the prefetching architecture, focusing on its limits for reducing the user's latency. To this end, the factors that constrain the predictive power of each architecture are analyzed and these theoretical limits are quantified. Experimental results show that the best element in which to locate a single prediction engine is the proxy, whose...
Web prefetching is one of the techniques proposed to reduce the user's perceived latency in the World Wide Web. The spatial locality shown by user accesses makes it possible to predict future accesses based on previous ones. A prediction engine uses these predictions to prefetch objects before the user demands them. Existing prediction algorithms achieved acceptable performance when they were proposed, but the high increase in the amount of embedded objects per page has reduced their effectiveness in the current Web. In this paper we show that most of the predictions made are useless...
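As a rough illustration of the flow this abstract describes, the sketch below implements a toy first-order Markov predictor: it learns object-to-object transitions from the access sequence and returns prefetch hints before the user demands the objects. The class name, thresholds, and the specific predictor are illustrative assumptions, not the algorithms studied in the paper.

```python
# Minimal sketch of a web-prefetching flow (not the paper's algorithm):
# a first-order Markov predictor learns object-to-object transitions and
# the most likely next objects are prefetched ahead of the user's demand.
from collections import defaultdict

class MarkovPredictor:
    def __init__(self):
        # transitions[a][b] = times object b was requested right after a
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_uri = None

    def observe(self, uri):
        """Learn from the user's access sequence."""
        if self.last_uri is not None:
            self.transitions[self.last_uri][uri] += 1
        self.last_uri = uri

    def predict(self, uri, max_hints=2, min_confidence=0.3):
        """Return the objects most likely to follow `uri`."""
        followers = self.transitions.get(uri, {})
        total = sum(followers.values())
        if total == 0:
            return []
        ranked = sorted(followers.items(), key=lambda kv: kv[1], reverse=True)
        return [obj for obj, count in ranked[:max_hints]
                if count / total >= min_confidence]

predictor = MarkovPredictor()
for access in ["/index.html", "/news.html", "/index.html", "/news.html"]:
    hints = predictor.predict(access)   # objects worth prefetching now
    predictor.observe(access)           # then learn from the real access
    print(access, "->", hints)
```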
Recent cache-memory research has focused on approaches that split the first-level data cache into two independent subcaches. The authors introduce a methodology for helping designers devise splitting schemes and survey a representative set of published schemes.
The popularity of Web objects, and by extension of the sites, together with the appearance of clear footprints in users' accesses that show considerable spatial locality, makes it possible to predict future accesses based on current ones. This fact permits the implementation of prefetching techniques in the Web architecture in order to reduce the latency perceived by users. Although the open literature presents some approaches in this sense, the huge variety of algorithms and the different scenarios and conditions where they are applied make it very difficult to compare their performance...
The enormous potential of locality-based strategies like caching and prefetching to improve Web performance motivates us to propose a novel global framework for their evaluation in scenarios where the different parts of the architecture interact. Unlike existing proposals, our approach is a fast and flexible tool that allows us to faithfully represent the behaviour of each element in order to study, reproduce, and evaluate design decisions that decrease the user's perceived latency when surfing the Web.
This paper presents the Referrer Graph (RG) web prediction algorithm as a low-cost solution to predict the next user accesses. RG is aimed at being used in a real system with prefetching capabilities without degrading its performance. The algorithm learns from user accesses and builds a Markov model. These kinds of algorithms use the sequence of accesses to make predictions. Unlike previous Markov-model-based proposals, RG differentiates dependencies between objects of the same page and of different pages by using the object URI and the referrer of each request. This permits us...
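A simplified sketch of a referrer-based prediction graph in the spirit of the RG idea described above: each request carries its URI and referrer, and arcs between pages are kept separately from arcs between a page and its embedded objects. The arc bookkeeping and the prediction rule shown here are assumptions for illustration, not the paper's exact algorithm.

```python
# Simplified referrer-based prediction graph (illustrative, not the exact RG
# algorithm): page-to-page transitions and page-to-embedded-object
# dependencies are recorded as separate arc sets, keyed by the referrer.
from collections import defaultdict

class ReferrerGraph:
    def __init__(self):
        # arcs[referrer][uri] = observation count
        self.page_arcs = defaultdict(lambda: defaultdict(int))
        self.embedded_arcs = defaultdict(lambda: defaultdict(int))

    def observe(self, uri, referrer, is_page):
        if referrer is None:
            return
        arcs = self.page_arcs if is_page else self.embedded_arcs
        arcs[referrer][uri] += 1

    def predict(self, current_page, max_hints=3):
        """Rank candidate next objects reachable from the current page."""
        candidates = defaultdict(int)
        for uri, count in self.page_arcs[current_page].items():
            candidates[uri] += count
        for uri, count in self.embedded_arcs[current_page].items():
            candidates[uri] += count
        ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
        return [uri for uri, _ in ranked[:max_hints]]

rg = ReferrerGraph()
rg.observe("/style.css", referrer="/index.html", is_page=False)
rg.observe("/news.html", referrer="/index.html", is_page=True)
print(rg.predict("/index.html"))   # both observed successors are suggested
```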
This work presents a new hardware cache management approach for improving the hit ratio and reducing bus traffic. Increasing the L1 hit ratio is a crucial aspect of obtaining good performance with current processors. The proposed approach also increases the overall (L1 plus L2) hit ratio, especially in multiprocessor systems, where latencies are low. It focuses on systems where the fourth kind of miss (the coherence miss) and the utilization problem appear; however, the model can also be applied to uniprocessor systems. Our organization thus reduces...
Prefetching is an interesting technique for improving web performance by reducing the user-perceived latency when surfing the web. Nevertheless, due to its speculative nature, prefetching can increase network traffic and server load. This could negatively affect the overall system and decrease the quality of service. To minimize and keep under control these adverse effects, in this paper we propose an intelligent mechanism that dynamically adjusts the aggressiveness of the prefetching algorithm at the server side. To this end, we also propose an estimation model...
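A minimal sketch of the idea of dynamically adjusting prefetch aggressiveness at the server side: the busier the server looks, the fewer and more confident the hints it emits. The control rule and the thresholds are assumptions; they do not reproduce the estimation model proposed in the paper.

```python
# Minimal sketch of server-side adaptive prefetch aggressiveness (the control
# rule and thresholds below are assumptions, not the paper's estimation
# model): under load, only near-certain objects are hinted; when idle, the
# prefetcher is allowed to speculate more freely.

def adjust_aggressiveness(server_utilization, low=0.4, high=0.8):
    """Map current server utilization (0..1) to prefetching parameters."""
    if server_utilization > high:
        # Overloaded: prefetch only near-certain objects.
        return {"max_hints": 1, "min_confidence": 0.8}
    if server_utilization > low:
        # Moderate load: be conservative.
        return {"max_hints": 2, "min_confidence": 0.5}
    # Lightly loaded: speculate more freely.
    return {"max_hints": 5, "min_confidence": 0.2}

for utilization in (0.1, 0.6, 0.95):
    print(utilization, "->", adjust_aggressiveness(utilization))
```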
Cache memories represent a core topic in all computer organization and architecture courses offered at universities around the world. As a consequence, educational proposals and textbooks devote important efforts to this topic. A valuable pedagogical help when studying caches is to perform exercises based on simple algorithms, which allow the identification of memory accesses, for instance, a program accessing the elements of an array. These exercises, referred to as code-based exercises, have good acceptance among instructors...
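A small example of the kind of code-based exercise mentioned above: counting the hits and misses of a direct-mapped data cache while a program traverses an array sequentially. The cache parameters are assumed values chosen for illustration.

```python
# Example code-based cache exercise (parameters assumed for illustration):
# count the misses of a direct-mapped data cache while a program traverses
# an array sequentially, e.g. "for i in range(64): total += a[i]".

LINE_SIZE = 32          # bytes per line
N_LINES = 8             # direct-mapped cache with 8 lines (256 B total)
ELEMENT_SIZE = 4        # 4-byte integers
BASE_ADDRESS = 0x1000   # array assumed line-aligned

tags = [None] * N_LINES
hits = misses = 0

for i in range(64):
    address = BASE_ADDRESS + i * ELEMENT_SIZE
    line = address // LINE_SIZE
    index = line % N_LINES
    if tags[index] == line:
        hits += 1
    else:
        misses += 1
        tags[index] = line

# Sequential traversal: one miss per line touched, then hits for the rest of
# that line -> 8 misses and 56 hits in this configuration.
print("hits:", hits, "misses:", misses)
```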
Proxy caches have become an important mechanism to reduce latencies. Efficient management techniques for proxy caches, which exploit the inherent characteristics of Web objects, are an essential key to reach good performance. One segment of the replacement algorithms being applied today are multikey algorithms, which use several characteristics of an object to decide which objects must be replaced. This feature is not considered in most current simulators. In this paper we propose a proxy-cache simulation platform to check the performance of Web caches based on multikey replacement algorithms. The...
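As an illustration of a multikey policy, the sketch below implements a GreedyDual-Size with Frequency (GDSF)-style cache, a well-known replacement algorithm that combines an object's access frequency, retrieval cost, and size into a single eviction key. It is only an example of the family; the paper does not necessarily evaluate this exact algorithm.

```python
# Sketch of a GDSF-style multikey replacement policy: the eviction key mixes
# frequency, retrieval cost, and size, plus an aging value L that rises to
# the key of the last evicted object. Shown only as an example of the
# multikey family, not as the paper's evaluated algorithms.

class GDSFCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.aging = 0.0                  # inflation value L
        self.objects = {}                 # uri -> (size, cost, freq, key)

    def _key(self, size, cost, freq):
        return self.aging + freq * cost / size

    def access(self, uri, size, cost=1.0):
        if uri in self.objects:
            s, c, f, _ = self.objects[uri]
            f += 1
            self.objects[uri] = (s, c, f, self._key(s, c, f))
            return "hit"
        # Miss: evict the lowest-key objects until the new one fits.
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda u: self.objects[u][3])
            self.aging = self.objects[victim][3]   # L := key of evicted object
            self.used -= self.objects[victim][0]
            del self.objects[victim]
        self.objects[uri] = (size, cost, 1, self._key(size, cost, 1))
        self.used += size
        return "miss"

cache = GDSFCache(capacity_bytes=100)
print(cache.access("/a.html", size=60))   # miss
print(cache.access("/b.png", size=60))    # miss, evicts /a.html
print(cache.access("/b.png", size=60))    # hit
```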
Despite the wide and intensive research efforts focused on Web prediction and prefetching techniques aimed at reducing the user's perceived latency, few attempts to implement and use them in real environments have been made, mainly due to their complexity and the supposed limitations that the low user available bandwidths imposed years ago. Nevertheless, current bandwidths open a new scenario in which prefetching becomes again an interesting option to improve web performance. This paper presents Delfos, a framework to perform predictions in a real environment that tries...
The increasing popularity of web applications has introduced a new paradigm where users are no longer passive consumers but become active contributors to the Web, especially in contexts such as social networking, blogs, wikis, or e-commerce. In this paradigm, contents and services are even more dynamic, which consequently increases the level of dynamism in the user's behavior. Moreover, this trend is expected to rise in the incoming Web. This is a major adversity when defining a representative workload model; in fact, this characteristic is not fully...