sched/fair: use min capacity when evaluating idle backup cpus
When calculating the impact of placing a task on a specific CPU, we
should account for any minimum capacity imposed on that CPU, since it
can change the performance and/or energy cost decisions.

When choosing an idle backup CPU, favour CPUs that won't end up
running at a high OPP due to a min capacity cap imposed by external
actors.

Change-Id: I566623ffb3a7c5b61a23242dcce1cb4147ef8a4a
Signed-off-by: Ionela Voinescu <[email protected]>
Signed-off-by: Chris Redpath <[email protected]>
ionela-voinescu authored and Nanhumly committed Aug 23, 2023
1 parent 749c894 commit bf136d3
1 changed file: kernel/sched/fair.c (20 additions, 1 deletion)
@@ -7583,6 +7583,7 @@ static inline int find_best_target(struct task_struct *p, int *backup_cpu,
 	unsigned long min_wake_util = ULONG_MAX;
 	unsigned long target_max_spare_cap = 0;
 	unsigned long best_active_util = ULONG_MAX;
+	unsigned long target_idle_max_spare_cap = 0;
 	int best_idle_cstate = INT_MAX;
 	struct sched_domain *sd;
 	struct sched_group *sg;
@@ -7618,7 +7619,7 @@ static inline int find_best_target(struct task_struct *p, int *backup_cpu,
 		for_each_cpu_and(i, tsk_cpus_allowed(p), sched_group_cpus(sg)) {
 			unsigned long capacity_curr = capacity_curr_of(i);
 			unsigned long capacity_orig = capacity_orig_of(i);
-			unsigned long wake_util, new_util;
+			unsigned long wake_util, new_util, min_capped_util;
 
 			if (!cpu_online(i))
 				continue;
@@ -7640,6 +7641,16 @@ static inline int find_best_target(struct task_struct *p, int *backup_cpu,
 			 * than the one required to boost the task.
 			 */
 			new_util = max(min_util, new_util);
+
+			/*
+			 * Include minimum capacity constraint:
+			 * new_util contains the required utilization including
+			 * boost. min_capped_util also takes into account a
+			 * minimum capacity cap imposed on the CPU by external
+			 * actors.
+			 */
+			min_capped_util = max(new_util, capacity_min_of(i));
+
 			if (new_util > capacity_orig)
 				continue;
 
@@ -7767,6 +7778,12 @@ static inline int find_best_target(struct task_struct *p, int *backup_cpu,
 				/* Select idle CPU with lower cap_orig */
 				if (capacity_orig > best_idle_min_cap_orig)
 					continue;
+				/* Favor CPUs that won't end up running at a
+				 * high OPP.
+				 */
+				if ((capacity_orig - min_capped_util) <
+					target_idle_max_spare_cap)
+					continue;
 
 				/*
 				 * Skip CPUs in deeper idle state, but only
@@ ... @@
 
 				/* Keep track of best idle CPU */
 				best_idle_min_cap_orig = capacity_orig;
+				target_idle_max_spare_cap = capacity_orig -
+							    min_capped_util;
 				best_idle_cstate = idle_idx;
 				best_idle_cpu = i;
 				continue;
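For readers outside the kernel tree, the following is a minimal, self-contained C sketch of the selection rule the hunks above add to find_best_target(). The CPU list, the capacity and utilization numbers, and the max_ul() helper are invented for illustration; only the min_capped_util computation and the three filter checks mirror the actual diff.

/*
 * Standalone sketch, not kernel code: all capacities and the task
 * utilization below are made-up numbers. Only the three checks and the
 * bookkeeping mirror the hunks above.
 */
#include <limits.h>
#include <stdio.h>

struct cpu_info {
	int id;
	unsigned long capacity_orig;	/* highest capacity of the CPU */
	unsigned long capacity_min;	/* min capacity cap from external actors */
};

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

int main(void)
{
	/* Two little CPUs (one with a high min cap) and one big CPU, all idle. */
	struct cpu_info cpus[] = {
		{ .id = 1, .capacity_orig = 512,  .capacity_min = 384 },
		{ .id = 2, .capacity_orig = 512,  .capacity_min = 128 },
		{ .id = 3, .capacity_orig = 1024, .capacity_min = 0 },
	};
	unsigned long new_util = 100;	/* task utilization incl. boost */

	unsigned long best_idle_min_cap_orig = ULONG_MAX;
	unsigned long target_idle_max_spare_cap = 0;
	int best_idle_cpu = -1;

	for (size_t i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++) {
		unsigned long capacity_orig = cpus[i].capacity_orig;
		/* Utilization once the external min capacity cap is applied. */
		unsigned long min_capped_util =
			max_ul(new_util, cpus[i].capacity_min);

		/* The task must fit on the CPU at all. */
		if (new_util > capacity_orig)
			continue;
		/* Select idle CPU with lower cap_orig. */
		if (capacity_orig > best_idle_min_cap_orig)
			continue;
		/* Favor CPUs that won't end up running at a high OPP. */
		if (capacity_orig - min_capped_util < target_idle_max_spare_cap)
			continue;

		/* Keep track of the best idle backup CPU so far. */
		best_idle_min_cap_orig = capacity_orig;
		target_idle_max_spare_cap = capacity_orig - min_capped_util;
		best_idle_cpu = cpus[i].id;
	}

	printf("best idle backup cpu: %d\n", best_idle_cpu);
	return 0;
}

With these made-up numbers the sketch picks CPU 2: it is the same size as CPU 1, but its lower min capacity cap leaves more spare capacity, so placing the task there does not force a high OPP.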
