[WIP] Add the horizontal elastic timetable overload checking rule to the cumulative constraint. #4401

Draft · wants to merge 12 commits into base: main
Conversation

@rhartert commented Oct 6, 2024

This PR adds an implementation of the Horizontal Elastic TimeTable Overload Checker presented in Kameugne et al. 2024 to the Cumulative Constraint.

The propagator is not activated by default and can be enabled by setting the SatParameters.use_horizontal_overload_checking_in_cumulative parameter to true.
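For illustration, here is a minimal sketch of building such parameters from C++; the setter below simply mirrors the field name introduced in this PR and is hypothetical until the PR is merged:

#include "ortools/sat/sat_parameters.pb.h"

// Hypothetical until this PR is merged: the protobuf-generated setter
// follows the new field name.
operations_research::sat::SatParameters HorizontalOverloadCheckingParams() {
  operations_research::sat::SatParameters params;
  params.set_use_horizontal_overload_checking_in_cumulative(true);
  return params;
}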

For Reviewers

The PR is not ready for merge. Rather, it is meant as a "snapshot" to start discussing the implementation and whether the propagator is a net-positive for OR-Tools.

Ultimately, I plan to keep working on this PR to implement the "IQHE Edge-finder" rule (from the same paper), which dominates the TimeTable Edge-finder filtering rule in terms of propagation. That being said, it is not clear whether this will be a net positive in an LCG solver: the explanations tend to be big and not easy to generalize.

@Mizux requested a review from lperron October 7, 2024
@Mizux added the Feature Request (Missing Feature/Wrapper) and Solver: CP-SAT (Relates to the CP-SAT solver) labels October 7, 2024
@Mizux added this to the v9.12 milestone October 7, 2024
@Mizux requested a review from fdid-git October 7, 2024
@vitor1001 self-requested a review October 10, 2024
@vitor1001 (Collaborator) left a comment

Thanks a lot for the PR!

I had a quick look at the CL and added some comments. I'm a new member of the team, so don't be surprised if I misunderstood something during the review.

}
profile_events_.emplace_back(-1, kMaxIntegerValue, IntegerValue(0),
                             ProfileEventType::kSentinel);
IncrementalSort(profile_events_.begin(), profile_events_.end());
@vitor1001 (Collaborator):
I think that if you are not updating profile_events_ incrementally, there is no point in sorting it with this method. Plain std::sort would be cleaner.

@rhartert (Author):
You're right. I changed the code to truly build the profile incrementally by updating the events rather than rebuilding it from scratch each time.

I don't think this brings much benefit yet, but it opens the door for further incremental optimizations using reversibles, similar to what is done in the time-table propagator.

const IntegerValue capacity = integer_trail_->UpperBound(capacity_);
if (window_end < ScheduleTasks(window_end, capacity)) {
  AddScheduleTaskReason(window_end);
  return helper_->ReportConflict();
@vitor1001 (Collaborator):
If end_min is lower than the result of ScheduleTasks, can't you push a tighter end_min?

@rhartert (Author) Oct 20, 2024:
Good question, I guess we could use ScheduleTasks(window_end, capacity) - 1 but I would have to think a little more about it.

That being said, I don't think this propagator as it stands (compared to its extensions that update the start/end vars) is worth adding to OR-Tools. I'd rather focus on optimizing the explanations of those versions.

case TaskType::kFull:
  switch (event.event_type) {
    case ProfileEventType::kStartMin:
      delta_max += event.height;
@vitor1001 (Collaborator):
nit: I think changing demand_req and demand_max directly without the deltas might be simpler to understand.

@rhartert (Author):
Agreed, done 👍

DCHECK_LT(next_event, profile_events_.size());

IntegerValue next_time = profile_events_[next_event].time;
const IntegerValue length = next_time - time;
@vitor1001 (Collaborator):
nit: please move this definition closer to where it is used.

@rhartert (Author):
Given that length is only used once, I've inlined it directly in the overload adjustment. IMO this does not hurt the code's readability as the computation is small and pretty clear.

// previous time points.
const IntegerValue overload_delta = demand_req - true_capa;

if (overload_delta > 0) { // adding overload
@vitor1001 (Collaborator):
nit: I think this logic would be easier to read if the variable naming made it clear which overload/removable/used quantities refer to a single time unit and which refer to the whole time window.

@rhartert (Author):
I wasn't super happy with these names either; what do you think about the new ones? I've also slightly simplified the logic as there's no need for the std::min call.

@rhartert (Author) commented Oct 20, 2024

> Thanks a lot for the PR!
>
> I had a quick look at the CL and added some comments. I'm a new member of the team, so don't be surprised if I misunderstood something during the review.

Hi @vitor1001, thank you for the review and the comments! I tried to address most of them, PTAL.

I'm planning to keep working on this PR and update the propagator to actually filter the start/end variables. However, I would need a way to evaluate whether the code actually drives improvements (in terms of #nodes first, then in terms of speed).

Is there, by any chance, a C++ example I could use to compare the performance of different versions of the cumulative constraint on a benchmark that matters to the team?

@lperron (Collaborator) commented Oct 22, 2024

rcpsp.tar.gz

Here is a set of CP-SAT protobufs for RCPSP models (single- and multi-mode).
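A rough sketch of how one of these protobufs could be loaded and solved twice to compare #branches with and without the new propagator, assuming the standard C++ entry points (SolveCpModel, NewSatParameters); the file path is a placeholder and the new parameter is hypothetical until the PR is merged:

#include <fstream>
#include <iostream>

#include "ortools/sat/cp_model.pb.h"
#include "ortools/sat/cp_model_solver.h"
#include "ortools/sat/model.h"
#include "ortools/sat/sat_parameters.pb.h"

namespace sat = operations_research::sat;

int main() {
  // Placeholder path: any model extracted from rcpsp.tar.gz.
  sat::CpModelProto model_proto;
  std::ifstream input("rcpsp/some_model.pb", std::ios::binary);
  model_proto.ParseFromIstream(&input);

  // Solve once without and once with the new rule, comparing #branches.
  for (const bool use_new_rule : {false, true}) {
    sat::SatParameters params;
    params.set_max_time_in_seconds(60.0);
    // Hypothetical until this PR is merged.
    params.set_use_horizontal_overload_checking_in_cumulative(use_new_rule);

    sat::Model model;
    model.Add(sat::NewSatParameters(params));
    const sat::CpSolverResponse response = sat::SolveCpModel(model_proto, &model);
    std::cout << "horizontal overload checking: " << use_new_rule
              << "  branches: " << response.num_branches()
              << "  wall time: " << response.wall_time() << "s" << std::endl;
  }
  return 0;
}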

@StevenGay (Collaborator) left a comment

This is a very clean implementation, simpler than the paper!
I have some questions, since you are now the expert :)

max_freeable_overload;
overload = 0;
}
}
@StevenGay (Collaborator):
I wonder if these formulas from the paper could be simplified much further. I'm not sure whether the following approach would be equivalent. In each subwindow [time, next_time]:

  • the energy that can be used by tasks is (next_time - time) * true_capa;
  • the energy required by fixed tasks is (next_time - time) * demand_req.

The energy required by full tasks over all subwindows is then just the sum of the energies of the full tasks. If energy_required is strictly larger than energy_usable, then the set of full/fixed tasks associated with window_end is not feasible.

WDYT? The paper looks like it schedules each time point by hand; does this approach miss something?
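To make the suggestion concrete, here is a rough sketch of the whole-window energy check; member names such as profile_, full_tasks_, SizeMin and DemandMin are hypothetical placeholders rather than the names used in the PR.

// Sketch only: this checks the feasibility of the tasks associated with
// window_end, it does not filter any bounds.
IntegerValue energy_usable(0);
IntegerValue energy_required(0);

// Capacity and fixed demand are accumulated per subwindow of the profile.
for (int i = 0; i + 1 < static_cast<int>(profile_.size()); ++i) {
  const IntegerValue length = profile_[i + 1].time - profile_[i].time;
  energy_usable += length * profile_[i].true_capa;
  energy_required += length * profile_[i].demand_req;
}

// Full (non-fixed) tasks contribute their whole energy, independently of
// where they end up inside the window.
for (const int t : full_tasks_) {
  energy_required += helper_->SizeMin(t) * demands_->DemandMin(t);
}

if (energy_required > energy_usable) {
  return helper_->ReportConflict();
}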

// TODO: There's an opportunity to further generalize the reason if
// demand_max and overload are set to 0 before the end of the window.
// This can happen if the resource consumption has "humps", though it
// is unclear whether this pattern is likely in practice or not.
@StevenGay (Collaborator) Oct 28, 2024:
I think the strongest opportunity would be to reduce the window. It looks like this always considers all tasks in (-inf, window_end). If a window [start_min_i, end_max_j) becomes overloaded, there should be some gain in including only the tasks in that window.

With the approach I described above, this would require starting from the last event at window_end and sweeping the events towards the start (see the sketch after this list):

  • initialize demand_req with the correct value and set demand_max = demand_req;
  • on a start_min event, add the energy of the corresponding full task to energy_required;
  • extend energy_required/energy_usable as above;
  • whenever energy_required > energy_usable, the checker fails immediately;
  • explain only with the full/fixed tasks inside W = [event_time, window_end);
  • (I'm not sure) the explanation of the full tasks' start_min/end_max can be extended to W.
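A rough sketch of that backward sweep, under the same caveat: helpers like last_event_at_or_before, demand_req_at, true_capa_at, is_full_task, TaskEnergy and AddReasonForWindow are hypothetical placeholders, not the names used in the PR.

IntegerValue energy_usable(0);
IntegerValue energy_required(0);
IntegerValue previous_time = window_end;

// Walk the profile events backwards from window_end and fail as soon as the
// energy required in [event.time, window_end) exceeds the energy available.
for (int i = last_event_at_or_before(window_end); i >= 0; --i) {
  const ProfileEvent& event = profile_events_[i];
  const IntegerValue length = previous_time - event.time;

  // Capacity and fixed demand between event.time and previous_time.
  energy_usable += length * true_capa_at(event.time);
  energy_required += length * demand_req_at(event.time);

  // A full task whose start_min enters the window contributes its energy.
  if (event.event_type == ProfileEventType::kStartMin &&
      is_full_task(event.task)) {
    energy_required += TaskEnergy(event.task);
  }

  if (energy_required > energy_usable) {
    // Explain only with the full/fixed tasks inside [event.time, window_end).
    AddReasonForWindow(event.time, window_end);
    return helper_->ReportConflict();
  }
  previous_time = event.time;
}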

@Mizux modified the milestones: v9.12, v10.0 Nov 29, 2024