Case Study 3: Managing Contention for Shared Resources on Multicore Processors
Ja’Kedrick L. Pearson
Professor Hossein Besharatian
June 2, 2013
Memory contention arises when too many memory requests are issued to the OS memory manager by an active application, potentially producing a denial-of-service condition for that application. A test was run on a group of applications several times, under three different schedules, each consisting of two pairings that shared a memory domain. The three pairing permutations gave each application an opportunity to run alongside each of the other three applications.
In one schedule, Sphinx was paired with Gamess while Soplex shared a domain with Namd; in another, Sphinx was paired with Namd while Soplex ran in the same domain as Gamess. The performance
levels are reported as the percentage of degradation from solo execution time, that is, the time when the application ran alone on the system; lower numbers therefore mean better performance. There was a dramatic difference between the best and the worst schedules. The workload as a whole performed 20 percent better under the best schedule, and gains for the individual applications Soplex and Sphinx were as great as 50 percent. This indicates a clear incentive for assigning applications to cores according to the best possible schedule (Fedorova, Blagodurov & Zhuravlev, 2010).
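The degradation metric above can be sketched as a short computation. The timings below are hypothetical placeholders, not figures from the study, which reports only percentages; the metric itself is simply the paired runtime's slowdown relative to the solo runtime.

```python
# Hypothetical solo and paired execution times in seconds (illustrative only).
solo = {"Sphinx": 100.0, "Soplex": 120.0, "Gamess": 80.0, "Namd": 90.0}
paired = {"Sphinx": 150.0, "Soplex": 170.0, "Gamess": 85.0, "Namd": 95.0}

def degradation(app):
    """Percent slowdown relative to running alone on the system.

    Lower is better; 0 means the pairing cost the application nothing."""
    return (paired[app] - solo[app]) / solo[app] * 100.0

for app in solo:
    print(f"{app}: {degradation(app):.1f}% degradation from solo")
```

With these made-up numbers, Sphinx degrades by 50 percent, which matches the scale of the worst-case losses the authors observed under a bad schedule.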
Using distributed intensity online (DIO), the authors constructed eight-application workloads containing from two to six memory-intensive applications. They picked eight workloads in total, all consisting of SPEC CPU2006 applications, and executed them under both DIO and the default Linux scheduler on an AMD Opteron system with eight cores, four per memory domain. The performance improvement relative to the default was computed as the average improvement across all applications in the workload. DIO delivered workload-average performance improvements of up to 11 percent. Another potential use of DIO is as a way to ensure quality of service for critical applications: DIO essentially guarantees that the worst scheduling assignment is never selected, whereas the default scheduler may occasionally suffer from a bad thread placement.
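The core idea behind distributed intensity can be sketched in a few lines: rank applications by last-level-cache (LLC) miss rate, a proxy for memory intensity, and deal them out round-robin across memory domains so the most intensive applications are kept apart. This is a simplified sketch only; the actual DIO scheduler measures miss rates online with hardware performance counters and migrates threads as rates change.

```python
def distributed_intensity(apps, num_domains):
    """Sketch of the distributed-intensity heuristic.

    apps: list of (name, llc_misses_per_1k_instructions) pairs.
    Returns a list of per-domain application name lists, with the
    highest-miss-rate applications separated across domains.
    """
    # Sort from most to least memory-intensive.
    ranked = sorted(apps, key=lambda a: a[1], reverse=True)
    domains = [[] for _ in range(num_domains)]
    # Deal round-robin: the top num_domains applications each land
    # in a different memory domain.
    for i, (name, _rate) in enumerate(ranked):
        domains[i % num_domains].append(name)
    return domains

# Example with illustrative miss rates (not measurements from the paper):
placement = distributed_intensity(
    [("mcf", 40), ("lbm", 35), ("namd", 2), ("gamess", 1)], 2)
print(placement)  # the two intensive apps end up in different domains
```

The round-robin deal is what prevents the worst-case assignment: two top-ranked memory hogs can never share a domain, which is exactly the quality-of-service property described above.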
Power DI works as follows: assuming a centralized scheduler has knowledge of the entire computing infrastructure and distributes incoming applications across all systems, Power DI clusters all incoming applications on as few machines as possible, except for those applications deemed to be memory intensive. To determine whether an application is memory intensive, Power DI uses an experimentally derived threshold of 1,000 misses per million instructions; an application whose LLC miss rate exceeds that amount is considered memory intensive. Unlike static placement policies, Power DI adjusts to the properties of the workload and minimizes the energy-delay product (EDP) in all cases, beating both the Spread and Cluster policies for every single workload.
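The Power DI placement rule described above can be sketched as follows. This is an illustrative simplification under stated assumptions: a fixed machine count, a core count per machine, and the paper's 1,000-misses-per-million-instructions threshold; the real scheduler operates online over a whole infrastructure.

```python
THRESHOLD = 1000  # LLC misses per million instructions (from the paper)

def power_di(app_miss_rates, num_machines, cores_per_machine):
    """Sketch of Power DI placement.

    app_miss_rates: dict mapping application name -> LLC misses per
    million instructions. Memory-intensive applications (above the
    threshold) are spread across machines; all others are clustered
    onto as few machines as possible to save power.
    """
    intensive = [a for a, r in app_miss_rates.items() if r > THRESHOLD]
    moderate = [a for a, r in app_miss_rates.items() if r <= THRESHOLD]

    machines = [[] for _ in range(num_machines)]
    # Spread: one memory-intensive app per machine before doubling up.
    for i, app in enumerate(intensive):
        machines[i % num_machines].append(app)
    # Cluster: pack the remaining apps onto the first machines with room.
    for app in moderate:
        target = next(m for m in machines if len(m) < cores_per_machine)
        target.append(app)
    return machines

# Illustrative miss rates (not measurements from the paper):
print(power_di({"mcf": 5000, "lbm": 3000, "povray": 10, "namd": 20}, 2, 4))
```

The split reflects the trade-off the authors describe: clustering saves power by idling machines, but clustering memory-intensive applications together would inflate completion time, so only the non-intensive applications are packed tightly.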
The authors found that scheduling algorithms that use this heuristic to avoid contention have the potential to reduce the overall completion time for workloads, avoid poor...