WiP: Scheduling Multi-Threaded Tasks to Reduce Intra-Task Cache Contention

Authors

  • Corey Tessler
  • Nathan Fisher
Abstract

Research on hard real-time systems and their models has predominantly focused on single-threaded tasks. When multi-threaded tasks are introduced into the classical real-time model, the individual threads are treated as distinct tasks, one per thread. These artificial tasks share the deadline, period, and worst-case execution time of their parent task. In the presence of instruction and data caches, this model is overly pessimistic, failing to account for the execution-time benefit of cache hits when multiple threads of execution share a memory address space. This work takes a new perspective on instruction caches, treating the cache as a benefit to schedulability for a single task with m threads. To realize this “inter-thread cache benefit”, a new scheduling algorithm and an accompanying worst-case execution time (WCET) calculation method are proposed. The scheduling algorithm permits threads to execute across conflict-free regions and blocks those threads that would create an unnecessary cache conflict. The WCET bound is determined for the entire set of m threads, rather than treating each thread as a distinct task. Both the scheduler and the WCET method rely on the calculation of conflict-free regions, which are found by a static analysis method requiring no external information from the system designer. By virtue of this perspective, the system’s total execution time is reduced, which is reflected in a tighter WCET bound compared to the techniques applied to the classical model. Obtaining this tighter bound requires the integration of three typically independent areas of analysis: WCET, schedulability, and cache-related preemption delay.
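The core intuition can be illustrated with a toy simulation. The code below is a hypothetical sketch, not the paper’s actual algorithm or cache model: it assumes each thread’s execution is abstracted as a sequence of conflict-free region IDs, that entering a region loads its cache lines at the cost of one “miss block”, and that batching the threads currently sitting at the same region lets them reuse the lines already loaded. A naive per-thread scheme, by contrast, is charged a reload for every region entry by every thread.

```python
# Toy illustration of batching threads by conflict-free region (hypothetical
# sketch; the paper's scheduling algorithm and WCET analysis are not
# specified in this abstract).

def batched_region_misses(threads):
    """Run all threads to completion, batching threads that are at the same
    conflict-free region so the region's lines are loaded once per visit.

    threads: list of lists, each inner list the sequence of region IDs
    a thread executes through.
    """
    pcs = [0] * len(threads)  # per-thread index into its region sequence
    misses = 0
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        # Pick the region of the first unfinished thread.
        active = next(i for i, (pc, t) in enumerate(zip(pcs, threads))
                      if pc < len(t))
        region = threads[active][pcs[active]]
        misses += 1  # load this region's cache lines once for the batch
        # Advance every thread currently executing in this region.
        for i, t in enumerate(threads):
            while pcs[i] < len(t) and t[pcs[i]] == region:
                pcs[i] += 1
    return misses

def per_thread_misses(threads):
    """Pessimistic classical view: each thread reloads every region it
    enters, so every region entry costs one miss block."""
    return sum(len(t) for t in threads)

# Three threads sharing the same code path through regions 0, 1, 2:
threads = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(batched_region_misses(threads))   # 3 region loads, shared by all
print(per_thread_misses(threads))       # 9 region loads, one per entry
```

With m threads over r shared regions, the batched count stays near r while the per-thread count grows as m·r, which is the kind of gap the abstract’s inter-thread cache benefit refers to.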


Related Papers

Data Sharing Conscious Scheduling for Multi-threaded Applications on SMP Machines

Extensive use of multi-threaded applications that run on SMP machines justifies modifications in thread scheduling algorithms to consider threads’ characteristics in order to improve performance. Current schedulers (e.g. in Linux, AIX) avoid migrating tasks between CPUs unless absolutely necessary. Unwarranted data cache misses occur when tasks that share data run on different CPUs, or are far...


Chip Multiprocessor Performance Modeling for Contention Aware Task Migration and Frequency Scaling

Workload consolidation is usually performed in datacenters to improve server utilization for higher energy efficiency. One of the key issues in workload consolidation is the contention for shared resources. Dynamic voltage and frequency scaling (DVFS) of CPU is another effective technique that has been widely used to trade performance for power reduction. We have found that the degree of resour...


Scheduling Constrained-Deadline Sporadic Parallel Tasks Considering Memory Contention

Consider constrained-deadline sporadic tasks scheduled on a multiprocessor where (i) each task is characterized by its execution requirement, deadline, and minimum inter-arrival time, (ii) each task generates a sequence of jobs, (iii) the execution requirement of a job and its potential for parallel execution is described by one or many stages with a stage having one or many segments and differ...


Static Task Partitioning for Locked Caches in Multi

Growing processing demand on multi-tasking real-time systems can be met by employing scalable multicore architectures. For such environments, locking cache lines for hard real-time systems ensures timing predictability of data references and may lower worst-case execution time. This work studies the benefits of cache locking on massive multicore architectures with private caches in the context ...


Cache-aware Scheduling with Limited Preemptions

In safety-critical applications, the use of advanced real-time scheduling techniques is significantly limited by the difficulty of finding tight estimations of worst-case execution parameters. This problem is further complicated by the use of cache memories, which reduce the predictability of the executing threads due to cache misses. In this paper, we analyze the effects of preemptions on wors...



Publication date: 2016