
Combine Packit jobs running Contest into fewer jobs #14392

Open

comps wants to merge 3 commits into ComplianceAsCode:master from comps:coalesce_packit

Conversation


comps commented Feb 13, 2026

Description:

The new layout has far fewer jobs:

  • centos-stream-8-x86_64:contest-oscap
  • centos-stream-8-x86_64:contest-ansible
  • centos-stream-9-x86_64:contest-oscap
  • centos-stream-9-x86_64:contest-ansible
  • centos-stream-10-x86_64:contest-oscap
  • centos-stream-10-x86_64:contest-ansible

while keeping at least some separation for re-running.
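
For illustration, here is a minimal sketch of how one such coalesced job could be declared in .packit.yaml. The plan name, the Contest repo URL, and the exact key values are assumptions for the example, not the literal diff of this PR:

```yaml
# Hypothetical excerpt - plan names and the Contest repo URL are
# illustrative assumptions; the actual config in this PR may differ.
jobs:
  - job: tests
    trigger: pull_request
    identifier: contest-oscap          # renders as <target>:contest-oscap
    fmf_url: https://github.com/RHSecurityCompliance/contest
    tmt_plan: "/oscap"                 # regex selecting all oscap plans
    targets:
      - centos-stream-8-x86_64
      - centos-stream-9-x86_64
      - centos-stream-10-x86_64
```

An analogous contest-ansible job would select the Ansible plans, yielding the six checks listed above.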

Within each job, all tests still execute in parallel, as parallel tmt plans, so there shouldn't be any extra performance hit or added delay. The only disadvantage is that /packit retest-failed or a manual re-trigger will re-run all plans within that job (i.e., all CentOS Stream 8 Ansible testing).

The coalescing will, however, vastly reduce the number of Testing Farm "requests", hopefully cutting load on TF significantly, in addition to reducing load on GitHub runners.


I opted for defining the plans on the Contest side instead of in tests/tmt/ because support for plan importing (as we did before) is limited and, e.g., doesn't allow filtering by tags.

Having the plans in Contest allows us to automatically filter out profiles which are subsets of others, tests that always fail by design (/static-checks/diff), and so on.
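
As a sketch of what such a Contest-side plan might look like (the plan path, tag name, and exclude pattern here are illustrative assumptions, not Contest's actual layout), tmt's fmf discovery supports both tag filters and exclusions:

```yaml
# plans/oscap.fmf - hypothetical Contest-side plan
summary: All oscap-based Contest tests
discover:
    how: fmf
    filter: 'tag: oscap'        # select only oscap-relevant tests
    exclude:
        - /static-checks/diff   # always fails by design on content PRs
execute:
    how: tmt
```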

We also don't need to worry about which tests/profiles are on which CentOS Stream, since Contest already has "adjust" rules for that, and an empty plan is automatically SKIPPED by Testing Farm.
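
For context, a hedged example of what such an "adjust" rule looks like in tmt/fmf metadata (the condition and reason are made up; Contest's real rules will differ):

```yaml
# Hypothetical adjust rule in a test's fmf metadata.
adjust:
  - enabled: false
    when: distro == centos-stream-8
    because: the tested profile is not shipped on CentOS Stream 8
```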

Rationale:

The current approach causes HEAVY load on Testing Farm's public ranch: a single PR creation or push spins up ~100 TF jobs (called "requests"), while the public ranch can handle only ~130 running at any time.
Combining multiple parallel tmt plans into one TF "request" helps reduce that greatly, even if the underlying number of cloud-reserved machines stays the same.
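
For a rough sense of scale, using the numbers above: going from ~100 requests per push to 6 coalesced jobs is roughly a 16x reduction, and it means a single PR no longer occupies most of the ranch's ~130-request capacity on its own.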

In addition, this PR greatly reduces our use of GitHub runners, meaning more PRs can likely run CI at the same time.

The disadvantage is a higher risk that any one oscap execution freezes, forcing us to restart the whole Packit testing job. So there is a balance to be struck:

  • The more Packit jobs (GitHub Action items) we have, the smaller the impact of one test freezing, but the greater the load on TF and GitHub, reducing CI speed.
  • The fewer Packit jobs we have, the lower the overall load, but the greater the chance of one test failure breaking the entire job.

The current approach tries to strike some balance (see the job list above).

Review Hints:

I have tested this in my fork, but this PR's CI should test it again just fine. No rebasing of, or impact on, other PRs is expected; the old Packit config should continue working well (for now, until maybe something in Contest changes a year down the road).

NOTE: There are extra commits in the PR cleaning up some things, see their descriptions for rationale.

This was likely a leftover from Beakerlib-era Fedora "downstream"
testing - when we stopped doing it, we moved the only remaining
valid test here.

However, since we run CTests via GitHub Actions in upstream, this
extra test is likely unnecessary and complicates our Packit testing
setup.

Signed-off-by: Jiri Jaburek <comps@nomail.dom>
Keeping plans/tests separate is not necessary, and the use case
is isolated enough that it makes sense to keep all pieces of it
together.

Signed-off-by: Jiri Jaburek <comps@nomail.dom>
The new layout has far fewer jobs:

- centos-stream-8-x86_64:contest-oscap
- centos-stream-8-x86_64:contest-ansible
- centos-stream-9-x86_64:contest-oscap
- centos-stream-9-x86_64:contest-ansible
- centos-stream-10-x86_64:contest-oscap
- centos-stream-10-x86_64:contest-ansible

while keeping at least some separation for re-running.

Within each job, all tests still execute in parallel, as parallel
tmt plans, so there shouldn't be any extra performance hit or
added delay.

The coalescing will, however, vastly reduce the number of Testing
Farm "requests", hopefully cutting load on TF significantly, in
addition to reducing load on GitHub runners.

---
I opted for defining the plans on the Contest side instead of in
tests/tmt/ because support for plan importing (as we did before)
is limited and, e.g., doesn't allow filtering by tags.

Having the plans in Contest allows us to automatically filter out
profiles which are subsets of others, tests that always fail by
design, and so on.

We also don't need to worry about which tests/profiles are on which
CentOS Stream, since Contest has "adjust" rules for that already,
and an empty plan is automatically SKIPPED by Testing Farm.

Signed-off-by: Jiri Jaburek <comps@nomail.dom>

comps commented Feb 13, 2026

Since this still seems to be running the old .packit.yaml config, here is what the result looks like in my fork: comps#5 (see the checks on that PR).

Mab879 added this to the 0.1.80 milestone Feb 13, 2026

comps commented Feb 13, 2026

Actually, it works here too - it's just that the checks preview right above the comment box is broken (it shows old workflows).

If you go to the Checks tab of this PR, under Packit-as-a-Service, you'll see the new jobs, and they all passed, e.g. https://github.com/ComplianceAsCode/content/pull/14392/checks?check_run_id=63549779897

Mab879 left a comment


We will need to adjust the required checks and rebase all open PRs for this to take effect. I will leave this open so we can coordinate when we want this to happen.
