
Conversation

@Cerberus22 Cerberus22 commented Aug 12, 2025

Related issues

There is no related issue.

Changes introduced:

Variables were added to the model describing the start-up and shut-down behaviour of generators when their unit commitment method is set to 3bin-*, where * can be any suffix. Additionally, constraints were added to ensure the new variables are correctly related to the existing model. Finally, a test case was added that will demonstrate all the constraints involving the start-up and shut-down variables to be added in later pull requests.
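
For context, a minimal JuMP sketch of the classic three-binary linking constraints this kind of method implies. All names here (u, v, w, the toy horizon) are illustrative assumptions, not the PR's actual variables:

```julia
using JuMP

# Sketch of the three-binary unit-commitment linking constraints.
# Names are hypothetical; the PR's actual variables/constraints differ.
model = Model()
T = 1:6   # toy time horizon
u0 = 0    # assumed initial commitment status

@variable(model, u[T], Bin)  # commitment status
@variable(model, v[T], Bin)  # start-up indicator
@variable(model, w[T], Bin)  # shut-down indicator

# First period links to the assumed initial status u0.
@constraint(model, u[1] - u0 == v[1] - w[1])
# Later periods: a unit starts up or shuts down exactly when its
# commitment status changes between consecutive time steps.
@constraint(model, [t in T; t > 1], u[t] - u[t-1] == v[t] - w[t])
# A unit cannot start up and shut down in the same period.
@constraint(model, [t in T], v[t] + w[t] <= 1)
```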

Checklist

  • I am following the contributing guidelines
  • Tests are passing
  • Lint workflow is passing
  • Docs were updated and workflow is passing


codecov bot commented Aug 12, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.38%. Comparing base (b56e625) to head (7d29601).
⚠️ Report is 5 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1318      +/-   ##
==========================================
+ Coverage   98.31%   98.38%   +0.07%     
==========================================
  Files          35       38       +3     
  Lines        1306     1366      +60     
==========================================
+ Hits         1284     1344      +60     
  Misses         22       22              


@abelsiqueira abelsiqueira added the benchmark PR only - Run benchmark on PR label Aug 15, 2025
Member

@abelsiqueira abelsiqueira left a comment


Thanks for the PR. I have some preliminary comments to complement the e-mail discussion.

I've enabled the workflows so the tests actually run. The benchmark workflow can't run because this PR comes from a fork, so please check the documentation for instructions on running the benchmarks locally, and then paste the results here.

I haven't checked the content in detail, and I won't evaluate the math behind it. Some comments on the rest:

  • The file name 3bin is not clear. It would be better to be explicit about what it is supposed to mean.
  • From recent experiences, I think it is better to split the 3bin file into two or three files (e.g., for simple and compact). Maybe create a folder for organisation, if you think it's necessary.
  • From recent experiences, we are trying to use Common Table Expressions (CTEs) in queries to improve readability (see the hedged sketch after this list). We don't have many examples, and not all queries require it, so maybe this is not necessary here. I leave the comment so we remember to check later.
  • Since you created a new case study, update test-case-studies with it. Otherwise, the new case study doesn't seem to be used anywhere.
  • You need documentation, at least for the formulation.
  • I don't get why the argument is 3bin-*, where * can be anything. Why is it left so broad?
  • It's better to first create the new case study in a separate PR with the test update. Makes it easier for reviewers.
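
On the CTE point above, a tiny illustration of the style, assuming an in-memory DuckDB connection; the table and its contents are made up for the example:

```julia
using DuckDB, DBInterface

# Illustrative only: the table and its contents are hypothetical. The
# point is the WITH clause (CTE), which names an intermediate result so
# the final SELECT reads top-down instead of nesting subqueries.
con = DBInterface.connect(DuckDB.DB)
DBInterface.execute(con, """
    CREATE TABLE asset (asset TEXT, unit_commitment_method TEXT)
""")
DBInterface.execute(con, """
    INSERT INTO asset VALUES ('ccgt', '3bin-compact'), ('wind', 'basic')
""")

result = DBInterface.execute(con, """
    WITH three_bin_assets AS (
        SELECT asset, unit_commitment_method
        FROM asset
        WHERE unit_commitment_method LIKE '3bin-%'
    )
    SELECT * FROM three_bin_assets
""")
```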

Thanks!

@urosgluscevic
Contributor

@abelsiqueira Thank you for the comment! I have split the 3bin.jl file into 3 files as you suggested, and given them clearer names. I have also added our case study to test-case-studies.

As for the question about the argument being named 3bin-*: we did this because we will be adding many different unit_commitment_method values, representing different levels of detail for modelling generator start-ups and shut-downs, but the constraints in this PR will be used at all of those levels of detail.
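
For illustration, that kind of prefix matching could look like the following; this is a hypothetical helper, not the PR's actual code:

```julia
# Hypothetical sketch: any method string starting with "3bin-" shares
# the start-up/shut-down variables and constraints from this PR.
uses_three_binaries(method::AbstractString) = startswith(method, "3bin-")

uses_three_binaries("3bin-compact")  # true
uses_three_binaries("basic")         # false
```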

I also had a question about the benchmarks - should we change anything in the benchmarks, since we have added new variables and constraints, which only get created when the new 3bin-* argument is used?

Also, regarding your last point, is this something we should keep in mind for future PRs, or should we remove the test case from this PR?

Finally, I will hopefully push the changes tomorrow, after adding the documentation! Thank you very much for your help!

@abelsiqueira
Member

If you change the schema, you might have to change the benchmark test case. It doesn't seem to be the case here, since you only changed one option of an existing column.
Ideally, we would like to have more benchmarks, but creating benchmark test cases is hard (see the comment in the e-mail about TulipaBuilder).

I just noticed one issue. I was expecting that you would need to update the src/input-schemas.json file to add the 3bin-* options to the oneOf constraints; validation should fail otherwise. Since it didn't, we have a bug in the validation that should have caught this. I'll create an issue.
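
For illustration only (this assumes the file follows JSON Schema conventions; the actual shape of src/input-schemas.json may differ): a oneOf of literal values cannot match a wildcard suffix, so a 3bin-* entry would need something like a regex pattern instead:

```json
{
  "unit_commitment_method": {
    "oneOf": [
      { "const": "basic" },
      { "pattern": "^3bin-" }
    ]
  }
}
```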

About 3bin-*, I was going to suggest creating a flag to indicate 3bin and then specifying the type of 3bin in another column. However, unit_commitment and unit_commitment_method already work like this, so maybe it's overkill. Other reviewers understand the model better, so I'll leave the opinion to them.

If I understood your last question, it is about whether it makes sense to split the current PR and move the test case out. My general guideline is to do it if you have enough time, because the time gains are generally good for everyone involved: reviewing A + B usually takes longer than reviewing A and then reviewing B, especially if one of the two is complex, which also means you wait longer for reviews.
However, if someone is currently reviewing the PR, it might be better to discuss it with them.
What we normally try to do here is:
1. if possible, split the work into PRs;
2. if it makes sense, split the PR into multiple commits so that we can review the PR commit by commit, in order.
Also, the quality of the review improves for smaller PRs.
Going back to this PR specifically: I would create a first PR just adding the test case with the case-study test and updating the schema and other minimal places. Then the next PR creates the variables and constraints. That said, I'm also concerned about the validation issue, so I would address that first.
@datejada and @gnawin have more experience splitting new constraints, but if you need help with the git side of things, I can also help. They might also have a different opinion on the splitting, since they'll be reviewing most of the PR.
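
On the git side, one hedged sketch of pulling the case-study files out into their own branch; the branch and path names below are placeholders, not the actual ones in this PR:

```bash
# Start a fresh branch from main for the case-study-only PR.
git checkout main
git checkout -b add-uc-case-study        # placeholder branch name

# Bring over only the case-study paths from the feature branch;
# this also stages them, so a plain commit works afterwards.
git checkout my-feature-branch -- test/inputs/my-case-study/

git commit -m "Add unit-commitment case study and test"
```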

@urosgluscevic
Contributor

Here are the results of running the benchmark:

[screenshot: benchmark results table]

@urosgluscevic
Contributor

@gnawin @datejada @abelsiqueira Since the case study we added will be changing when we add more constraints and new levels of detail, maybe we could just remove it from this PR and merge the final version at the very end? What do you think?

Member

@datejada datejada left a comment


@urosgluscevic thanks for the PR. I've reviewed the formulation and I think it is good to start like this, adding the sets of constraints one by one. I am suggesting changes to the names of the files and functions, but since there are quite a few of them, I created PR urosgluscevic#10 against your repo. If you merge it into your branch, it will update this one.

Regarding @abelsiqueira's comments, I have added the method to the schema file, and for the time being let's start with the 3var-* approach, but let's see how it grows and whether we need to improve it later.

Finally, I have created issue #1331 to split the current file and make it explicit and easy to follow with the new constraints.

Thanks! Great work here.
