feat: adding CommittedResource CRD #730
Conversation
📝 Walkthrough

Adds a cluster-scoped CommittedResource CRD (types, enums, Spec/Status, print-columns, scheme registration), autogenerated deepcopy methods, multicluster routing and manager wiring for the kind, Helm values and RBAC updates, and a conversion function mapping CommittedResource into the internal CommitmentState with validation.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Manager
    participant Router as Multicluster Router
    participant Converter as Commitment Converter
    participant StateStore as Commitment State
    Client->>Manager: Submit CommittedResource
    activate Manager
    Manager->>Router: Request match(obj, cluster labels)
    activate Router
    Router->>Router: Validate type, AZ label/spec
    Router-->>Manager: Match result / error
    deactivate Router
    Manager->>Converter: Convert CR -> CommitmentState
    activate Converter
    Converter->>Converter: Validate resourceType (memory) and UUID
    Converter->>StateStore: Build CommitmentState (capacity, times, metadata)
    Converter-->>Manager: CommitmentState / error
    deactivate Converter
    Manager-->>Client: Accept / Reject
    deactivate Manager
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
---
Actionable comments posted: 2
🧹 Nitpick comments (1)
api/v1alpha1/committed_resource_types.go (1)
**45-48:** `SchedulingDomain` enum is broader than the resource's apparent scope. The enum accepts `nova;cinder;manila;machines;pods`, but `ResourceType` is only `memory`/`cores`, and the reservation machinery described in adjacent code (flavor groups, memory-backed Reservation slots) is Nova-specific. Accepting `cinder`/`manila`/`machines`/`pods` here allows creation of CRs that no controller can honor. If this is intentional future-proofing, a comment would help; otherwise consider narrowing the enum to the supported domain(s).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 45 - 48, The SchedulingDomain enum on SchedulingDomain in committed_resource_types.go currently allows values nova;cinder;manila;machines;pods which is broader than the implemented ResourceType/reservation logic (e.g., flavor groups and memory-backed Reservation slots) that only supports Nova; either restrict the kubebuilder validation enum to the supported domain(s) (e.g., change +kubebuilder:validation:Enum to only "nova" or "nova;pods" if pods are supported) or, if the broader set is intentional future-proofing, add a clear comment on SchedulingDomain explaining that only Nova is currently implemented and which controllers will handle other domains in the future; update the SchedulingDomain declaration and any API docs accordingly so CRs cannot be created with unsupported domains unless intentionally allowed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 81-98: EndTime is currently declared as required (EndTime
metav1.Time with +kubebuilder:validation:Required) which conflicts with internal
types and logic that treat EndTime as optional; change the CommittedResource/C
struct field EndTime to a pointer type (EndTime *metav1.Time) and update its
kubebuilder tag to +kubebuilder:validation:Optional so unbounded commitments can
be persisted; ensure callers and code paths that inspect EndTime (e.g.,
FromChangeCommitmentTargetState and any code in state.go or the Commitment model
that checks Spec.EndTime != nil or IsZero()) continue to handle a nil EndTime
properly.
In `@internal/scheduling/reservations/commitments/state.go`:
- Around line 210-234: FromCommittedResource should guard against a zero EndTime
and validate the UUID like other constructors: only set state.EndTime when
!cr.Spec.EndTime.IsZero() (so downstream expiry logic doesn't treat zero time as
already expired), and validate cr.Spec.CommitmentUUID against
commitmentUUIDPattern (return an error if it doesn't match) before assigning to
state.CommitmentUUID; also ensure DomainID handling mirrors the other
constructors (FromCommitment/FromChangeCommitmentTargetState) so any
symmetry/validation applied there is replicated here.
---
Nitpick comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 45-48: The SchedulingDomain enum on SchedulingDomain in
committed_resource_types.go currently allows values
nova;cinder;manila;machines;pods which is broader than the implemented
ResourceType/reservation logic (e.g., flavor groups and memory-backed
Reservation slots) that only supports Nova; either restrict the kubebuilder
validation enum to the supported domain(s) (e.g., change
+kubebuilder:validation:Enum to only "nova" or "nova;pods" if pods are
supported) or, if the broader set is intentional future-proofing, add a clear
comment on SchedulingDomain explaining that only Nova is currently implemented
and which controllers will handle other domains in the future; update the
SchedulingDomain declaration and any API docs accordingly so CRs cannot be
created with unsupported domains unless intentionally allowed.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b048a15b-b309-40c2-93e8-b174ed33f8b2
📒 Files selected for processing (6)
- api/v1alpha1/committed_resource_types.go
- api/v1alpha1/zz_generated.deepcopy.go
- cmd/manager/main.go
- helm/bundles/cortex-nova/values.yaml
- internal/scheduling/reservations/commitments/state.go
- pkg/multicluster/routers.go
```go
// SchedulingDomain specifies the scheduling domain for this committed resource (e.g., "nova", "ironcore").
// +kubebuilder:validation:Enum=nova;cinder;manila;machines;pods
// +kubebuilder:validation:Required
SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
```
---
ironcore is explicitly mentioned as an example but is not a possible input value given the validation.
---
Good catch, I removed the validation.
```go
// Amount is the total committed quantity.
// Examples: "1280Gi" for RAM, "40" for CPU cores.
// +kubebuilder:validation:Required
Amount resource.Quantity `json:"amount"`
```
---
do we not work in multiples of smallest flavor here?
---
The CRD should be self-contained, not depending implicitly on e.g. the flavor group definition, also considering that we were told the groups are likely to change afterwards. So the Liquid API will do the translation and write the raw amount here.
```go
// Amount is the total committed quantity.
// Examples: "1280Gi" for RAM, "40" for CPU cores.
// +kubebuilder:validation:Required
Amount resource.Quantity `json:"amount"`
```
---
Also, does this mean that we would create multiple committed resources for one reservation so we can keep track of both cpu cores and RAM?
I would have expected something more like this, but maybe I am off-track here:
```go
// Example:
//   cpu: "40"
//   memory: "1280Gi"
// +kubebuilder:validation:Required
Resources corev1.ResourceList `json:"resources"`
```
---
With Limes we always have one resource per commitment. So we have e.g. two commitments for the same project when the user wants to specify memory and cpu for a GP group. Still, our controller can read both related CommittedResource CRs to do more advanced flavor selection, considering the CPU cores the user has in mind while choosing the right flavor sizes for the reservations of the memory CR.
I also thought about using a ResourceList with always a single resource in there, but it feels like optimizing for a use case we don't have at the moment.
---
Actionable comments posted: 1
🧹 Nitpick comments (3)
api/v1alpha1/committed_resource_types.go (3)
**45-46:** Add an explicit validation marker for `SchedulingDomain`. After removing the enum validation (per earlier feedback about `ironcore`), this field no longer has any `+kubebuilder:validation:Required`/`Optional` marker, and the JSON tag has no `omitempty`. All other scalar spec fields here are explicitly annotated. Without a marker, the generated CRD's required-list behavior for this field depends on controller-gen defaults and is inconsistent with the rest of the spec. Add `+kubebuilder:validation:Required` (or `Optional` with `omitempty`) to make the contract explicit.

🛠️ Proposed fix

```diff
 // SchedulingDomain specifies the scheduling domain for this committed resource (e.g., "nova", "ironcore").
+// +kubebuilder:validation:Required
 SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 45 - 46, Add an explicit kubebuilder validation marker to the SchedulingDomain field to make the CRD contract explicit: insert a comment line "+kubebuilder:validation:Required" immediately above the SchedulingDomain SchedulingDomain `json:"schedulingDomain"` field (or use "+kubebuilder:validation:Optional" and change the JSON tag to `json:"schedulingDomain,omitempty"` if you intend it to be optional). This change should be made on the SchedulingDomain field declaration so controller-gen generates the required/optional behavior consistently with the other scalar spec fields.
**52-61:** Consider tightening `Amount` validation for cores. `Amount` is typed as `resource.Quantity` for both `memory` and `cores`. That works for memory (binary SI) but accepts non-integer / sub-unit values for `cores` (e.g. `"0.5"`, `"100m"`), which the arithmetic-check path for cores will misinterpret once implemented. Consider either a CEL validation rule coupling `resourceType == "cores"` to integer quantities, or handling the sanity check explicitly in the future `FromCommittedResource` cores branch so bad specs are rejected at admission rather than silently miscounted later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 52 - 61, The Amount field currently allows arbitrary resource.Quantity values for both memory and cores; tighten validation so that when ResourceType == "cores" Amount represents an integer core count (no millicores or fractional values). Update the API validation on the struct (referencing ResourceType CommittedResourceType and Amount resource.Quantity) to add a conditional CEL or kubebuilder rule that enforces integer-only quantities for the cores case (or alternatively add a pattern/validation that forbids suffixes like "m" and fractional quantities when ResourceType is cores) so invalid specs are rejected at admission instead of relied on later in FromCommittedResource or the cores handling branch.
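A consumer-side version of this sanity check might look like the sketch below. `isWholeCores` is a hypothetical helper using simplified string parsing, not the real `resource.Quantity` parser; it only illustrates the rule "cores must be a plain positive integer".

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isWholeCores sketches the admission-time sanity check the review suggests:
// for resourceType "cores", the quantity must be a plain positive integer,
// so "40" passes while "0.5" and "100m" (millicores) are rejected. This is a
// simplified stand-in for resource.Quantity semantics, not the k8s parser.
func isWholeCores(quantity string) bool {
	if strings.HasSuffix(quantity, "m") { // millicores are sub-unit values
		return false
	}
	n, err := strconv.Atoi(quantity) // rejects "0.5" and other non-integers
	return err == nil && n > 0
}

func main() {
	for _, q := range []string{"40", "0.5", "100m"} {
		fmt.Printf("%s -> %v\n", q, isWholeCores(q))
	}
}
```

At the API boundary the same rule could instead be expressed as a `+kubebuilder:validation:XValidation` CEL rule conditioned on `resourceType`, assuming the generated schema exposes `amount` in a form CEL can pattern-match.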
**119-135:** Add listType markers to optimize Server-Side Apply behavior for status slices. `AssignedVMs` and `Conditions` should include listType markers. For `Conditions`, use `+listType=map` with `+listMapKey=type` (aligning with the existing `patchStrategy:"merge" patchMergeKey:"type"` tags) to enable proper server-side-apply merging and support multi-controller updates. For `AssignedVMs`, use `+listType=set` since UUIDs are unique. These markers follow Kubernetes API conventions and Kubebuilder best practices.

Proposed fix

```diff
 // AssignedVMs holds the UUIDs of VMs deterministically assigned to this committed resource.
 // Populated by the usage reconciler; used to compute UsedAmount and drive the quota controller.
 // +kubebuilder:validation:Optional
+// +listType=set
 AssignedVMs []string `json:"assignedVMs,omitempty"`
@@
 // Conditions holds the current status conditions.
 // +kubebuilder:validation:Optional
+// +listType=map
+// +listMapKey=type
 Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 119 - 135, Add Kubernetes listType markers for the status slices: annotate AssignedVMs with a kubebuilder marker +listType=set (since entries are unique UUIDs) and annotate Conditions with +listType=map and +listMapKey=type (to match the existing patchStrategy:"merge" patchMergeKey:"type"), so server-side-apply can correctly merge/handle those lists; locate the AssignedVMs and Conditions fields in the CommittedResourceStatus struct and add the markers immediately above their comments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 93-96: The DryRun field's comment promises controller behavior
that doesn't exist; update the API docs or consumers accordingly: either change
the DryRun comment on the DryRun bool field to explicitly state "reserved; not
honored yet" (so callers know it currently has no effect), or add filtering in
consumers such as FromCommittedResource to ignore CRs with DryRun==true until a
controller that self-deletes is implemented — reference the DryRun field on the
committed resource type and the FromCommittedResource consumer to locate where
to change the comment or add the filter.
---
Nitpick comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 45-46: Add an explicit kubebuilder validation marker to the
SchedulingDomain field to make the CRD contract explicit: insert a comment line
"+kubebuilder:validation:Required" immediately above the SchedulingDomain
SchedulingDomain `json:"schedulingDomain"` field (or use
"+kubebuilder:validation:Optional" and change the JSON tag to
`json:"schedulingDomain,omitempty"` if you intend it to be optional). This
change should be made on the SchedulingDomain field declaration so
controller-gen generates the required/optional behavior consistently with the
other scalar spec fields.
- Around line 52-61: The Amount field currently allows arbitrary
resource.Quantity values for both memory and cores; tighten validation so that
when ResourceType == "cores" Amount represents an integer core count (no
millicores or fractional values). Update the API validation on the struct
(referencing ResourceType CommittedResourceType and Amount resource.Quantity) to
add a conditional CEL or kubebuilder rule that enforces integer-only quantities
for the cores case (or alternatively add a pattern/validation that forbids
suffixes like "m" and fractional quantities when ResourceType is cores) so
invalid specs are rejected at admission instead of relied on later in
FromCommittedResource or the cores handling branch.
- Around line 119-135: Add Kubernetes listType markers for the status slices:
annotate AssignedVMs with a kubebuilder marker +listType=set (since entries are
unique UUIDs) and annotate Conditions with +listType=map and +listMapKey=type
(to match the existing patchStrategy:"merge" patchMergeKey:"type"), so
server-side-apply can correctly merge/handle those lists; locate the AssignedVMs
and Conditions fields in the CommittedResourceStatus struct and add the markers
immediately above their comments.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9367c646-4538-4d02-a3d3-3ca9e48ee08d
📒 Files selected for processing (4)
- api/v1alpha1/committed_resource_types.go
- api/v1alpha1/zz_generated.deepcopy.go
- helm/library/cortex/templates/rbac/role.yaml
- internal/scheduling/reservations/commitments/state.go
---
Actionable comments posted: 1
🧹 Nitpick comments (3)
api/v1alpha1/committed_resource_types.go (3)
**92-93:** Nit: trailing blank line inside struct. Line 92 is an empty line right before the closing brace of `CommittedResourceSpec`. Per the repo's guideline to avoid unnecessary empty lines between code segments, drop it.

✏️ Proposed change

```diff
     State CommitmentStatus `json:"state"`
-
 }
```

As per coding guidelines: "avoid unnecessary empty lines between code segments".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 92 - 93, Remove the unnecessary blank line inside the CommittedResourceSpec struct: edit the CommittedResourceSpec definition to delete the empty line immediately before its closing brace so the last field and the closing brace are adjacent.
**88-93:** Consider cross-field validation for `State` vs. timestamps. The enum values are well-documented in terms of their temporal relationship to `StartTime`/`EndTime`/`ConfirmedAt` (e.g., `confirmed` implies `ConfirmedAt` is set and `StartTime` has been reached; `expired` implies `EndTime` is past). Since these are all optional, a client could persist inconsistent combinations (e.g., `state: confirmed` with no `confirmedAt`). If you want the CRD API itself to enforce the invariants described in the CommitmentStatus comments, a `+kubebuilder:validation:XValidation` CEL rule on the spec would catch this at admission time; otherwise this has to be enforced in every producer/consumer. Not a blocker, but worth a follow-up if the syncer is expected to be the sole writer today.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 88 - 93, Add CEL XValidation rules to the CRD for cross-field validation of CommitmentStatus values so the API rejects inconsistent combos (e.g., state == "confirmed" requires ConfirmedAt set, state == "expired" requires EndTime set, "confirmed" should imply StartTime <= ConfirmedAt, etc.). Add one or more kubebuilder tags like +kubebuilder:validation:XValidation:rule="<CEL expression>" on the relevant spec struct (referencing State, CommitmentStatus, StartTime, EndTime, ConfirmedAt) to assert required fields per state; for example add rules that require has(self.confirmedAt) when self.state == "confirmed", require has(self.endTime) when self.state == "expired", and similar rules for "planned"/"pending"/"guaranteed"/"superseded". Ensure each rule uses the correct field paths (self.StartTime/self.EndTime/self.ConfirmedAt) and add tests to cover the invalid permutations.
**129-131:** Add kubebuilder markers to `Conditions` field for proper Server-Side Apply behavior. The field uses `patchStrategy` and `patchMergeKey` in the JSON tag, but is missing the corresponding `+listType=map` and `+listMapKey=type` kubebuilder markers. Without these markers, controller-gen treats the slice as atomic during CRD generation, causing Server-Side Apply to append duplicate conditions of the same `type` when multiple controllers reconcile. The `+listType=map` and `+listMapKey=type` markers enable SSA to treat the conditions as a map indexed by the `type` field, allowing safe concurrent updates.

✏️ Proposed change

```diff
 // Conditions holds the current status conditions.
 // +kubebuilder:validation:Optional
+// +listType=map
+// +listMapKey=type
+// +patchStrategy=merge
+// +patchMergeKey=type
 Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 129 - 131, Add kubebuilder list markers to the Conditions field so controller-gen treats it as a map keyed by the condition "type": above the Conditions []metav1.Condition declaration, add the markers `+listType=map` and `+listMapKey=type` (in the same comment block as the existing `+kubebuilder:validation:Optional`) so Server-Side Apply can merge conditions by their `type` instead of treating the slice as atomic; keep the existing json tag (`json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`) intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 45-46: The SchedulingDomain field lacks the kubebuilder validation
marker; add the line "+kubebuilder:validation:Required" immediately above the
SchedulingDomain declaration (the SchedulingDomain SchedulingDomain
`json:"schedulingDomain"` field) so the generated CRD treats it as required
consistent with other spec fields.
---
Nitpick comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 92-93: Remove the unnecessary blank line inside the
CommittedResourceSpec struct: edit the CommittedResourceSpec definition to
delete the empty line immediately before its closing brace so the last field and
the closing brace are adjacent.
- Around line 88-93: Add CEL XValidation rules to the CRD for cross-field
validation of CommitmentStatus values so the API rejects inconsistent combos
(e.g., state == "confirmed" requires ConfirmedAt set, state == "expired"
requires EndTime set, "confirmed" should imply StartTime <= ConfirmedAt, etc.).
Add one or more kubebuilder tags like
+kubebuilder:validation:XValidation:rule="<CEL expression>" on the relevant spec
struct (referencing State, CommitmentStatus, StartTime, EndTime, ConfirmedAt) to
assert required fields per state; for example add rules that require
has(self.confirmedAt) when self.state == "confirmed", require has(self.endTime)
when self.state == "expired", and similar rules for
"planned"/"pending"/"guaranteed"/"superseded". Ensure each rule uses the correct
field paths (self.StartTime/self.EndTime/self.ConfirmedAt) and add tests to
cover the invalid permutations.
- Around line 129-131: Add kubebuilder list markers to the Conditions field so
controller-gen treats it as a map keyed by the condition "type": above the
Conditions []metav1.Condition declaration, add the markers `+listType=map` and
`+listMapKey=type` (in the same comment block as the existing
`+kubebuilder:validation:Optional`) so Server-Side Apply can merge conditions by
their `type` instead of treating the slice as atomic; keep the existing json tag
(`json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`)
intact.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6c71d871-1b1c-4d74-af40-998e17ef15a0
📒 Files selected for processing (1)
api/v1alpha1/committed_resource_types.go
```go
// SchedulingDomain specifies the scheduling domain for this committed resource (e.g., "nova", "ironcore").
SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
```
---
SchedulingDomain missing validation marker.
Every other field in this spec has an explicit +kubebuilder:validation:Required (or Optional) marker, but SchedulingDomain has neither. Since the comment and surrounding fields imply it is mandatory, add the marker for consistency and to avoid it being silently treated as optional by the generated CRD schema.
✏️ Proposed change

```diff
 // SchedulingDomain specifies the scheduling domain for this committed resource (e.g., "nova", "ironcore").
+// +kubebuilder:validation:Required
 SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
// SchedulingDomain specifies the scheduling domain for this committed resource (e.g., "nova", "ironcore").
// +kubebuilder:validation:Required
SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/v1alpha1/committed_resource_types.go` around lines 45 - 46, The
SchedulingDomain field lacks the kubebuilder validation marker; add the line
"+kubebuilder:validation:Required" immediately above the SchedulingDomain
declaration (the SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
field) so the generated CRD treats it as required consistent with other spec
fields.
---
Actionable comments posted: 2
♻️ Duplicate comments (1)
api/v1alpha1/committed_resource_types.go (1)
**45-46:** ⚠️ Potential issue | 🟡 Minor — Add the missing validation marker for `SchedulingDomain`. `SchedulingDomain` is the only required-looking spec field here without an explicit `+kubebuilder:validation:Required` marker, so the generated schema can drift from the intended API contract.

Proposed fix

```diff
 // SchedulingDomain specifies the scheduling domain for this committed resource (e.g., "nova", "ironcore").
+// +kubebuilder:validation:Required
 SchedulingDomain SchedulingDomain `json:"schedulingDomain"`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 45 - 46, The SchedulingDomain field is missing the kubebuilder required validation marker; add the comment marker "+kubebuilder:validation:Required" immediately above the SchedulingDomain field declaration in the struct (the SchedulingDomain field in committed_resource_types.go) so the generated OpenAPI schema marks it as required, then re-run code generation/CRD generation to update the schema.
🧹 Nitpick comments (1)
api/v1alpha1/committed_resource_types.go (1)
**128-130:** Add `+listType=map` and `+listMapKey=type` markers for CRD list-map semantics. The `Conditions` field in `CommittedResourceStatus` has `patchStrategy` and `patchMergeKey` struct tags, but the kubebuilder markers needed to express these semantics in the generated CRD schema are missing. Add the markers to ensure the CRD properly specifies `x-kubernetes-list-type: map` and `x-kubernetes-list-map-keys: ["type"]`, which enables condition updates to merge by type rather than replace the entire list.

Proposed fix

```diff
 // Conditions holds the current status conditions.
 // +kubebuilder:validation:Optional
+// +listType=map
+// +listMapKey=type
 Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/v1alpha1/committed_resource_types.go` around lines 128 - 130, The CRD's Conditions slice in the CommittedResourceStatus struct is missing kubebuilder markers for list-map semantics; add the markers "+listType=map" and "+listMapKey=type" immediately above the Conditions field (the slice declared as Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`) so the generated CRD includes x-kubernetes-list-type: map and x-kubernetes-list-map-keys: ["type"] and condition updates merge by type rather than replacing the whole list.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 57-61: The Amount field on the CRD currently allows zero/negative
quantities; add API-level validation to reject non-positive values by annotating
the Amount field (resource.Quantity `Amount`) with a kubebuilder validation such
as +kubebuilder:validation:Minimum=1 or a CEL rule enforcing >0, and/or add a
defensive check in the consumer path (the code that reads cr.Spec.Amount.Value()
in internal/scheduling/reservations/commitments/state.go before calling
CommitmentState.TotalMemoryBytes) to return an error if Amount.Value() <= 0 so
invalid CRs cannot reach scheduling state.
- Around line 88-91: Add a state guard inside the FromCommittedResource function
to reject non-active commitment states: check the incoming CommittedResource's
cr.Spec.State and return an error unless it equals
v1alpha1.CommitmentStatusGuaranteed or v1alpha1.CommitmentStatusConfirmed; this
ensures FromCommittedResource only converts guaranteed/confirmed commitments
into CommitmentState and rejects planned, pending, superseded, and expired
values.
---
Duplicate comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 45-46: The SchedulingDomain field is missing the kubebuilder
required validation marker; add the comment marker
"+kubebuilder:validation:Required" immediately above the SchedulingDomain field
declaration in the struct (the SchedulingDomain field in
committed_resource_types.go) so the generated OpenAPI schema marks it as
required, then re-run code generation/CRD generation to update the schema.
---
Nitpick comments:
In `@api/v1alpha1/committed_resource_types.go`:
- Around line 128-130: The CRD's Conditions slice in the CommittedResourceStatus
struct is missing kubebuilder markers for list-map semantics; add the markers
"+listType=map" and "+listMapKey=type" immediately above the Conditions field
(the slice declared as Conditions []metav1.Condition
`json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type"`) so the
generated CRD includes x-kubernetes-list-type: map and
x-kubernetes-list-map-keys: ["type"] and condition updates merge by type rather
than replacing the whole list.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a22df303-2a55-463c-9a9d-4a776c3cb30e
📒 Files selected for processing (1)
api/v1alpha1/committed_resource_types.go
```go
// Amount is the total committed quantity.
// memory: MiB expressed in K8s binary SI notation (e.g. "1280Gi", "640Mi").
// cores: integer core count (e.g. "40").
// +kubebuilder:validation:Required
Amount resource.Quantity `json:"amount"`
```
---
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Inspect generated CRDs and consumers for amount validation.
# Expected: Either the generated CommittedResource CRD has amount positivity/type validation,
# or Go consumers reject non-positive amounts before using Spec.Amount.Value().
set -euo pipefail
echo "Generated CRD snippets containing amount validation:"
fd -i 'committedresource|crd|cortex' --extension yaml --extension yml --extension json \
--exec rg -n -C5 'amount:|x-kubernetes-validations|minimum|pattern' {}
echo
echo "Go consumers of Spec.Amount:"
rg -n -C5 --type=go '\.Spec\.Amount|Amount\.Value\(|Amount\.Sign\(|Amount\.MilliValue\('
Repository: cobaltcore-dev/cortex
Length of output: 21606
🏁 Script executed:
find . -name "*committed*" -type f | grep -E "\.(go|yaml|yml)$"
Length of output: 107
🏁 Script executed:
# Check the committed_resource_types.go file for kubebuilder validation markers
cat -n api/v1alpha1/committed_resource_types.go | head -100
Length of output: 5270
🏁 Script executed:
# Look for the consumer code
cat -n internal/scheduling/reservations/commitments/state.go | sed -n '200,250p'
Length of output: 2122
🏁 Script executed:
# Search for admission webhooks and validation rules
rg -n "Amount|validation.*positive|minimum.*Amount" --type=go | grep -i "commit\|valid\|webhook"
Length of output: 8691
🏁 Script executed:
# Check if there are any validation webhook implementations
fd -type f -name "*.go" | xargs rg -l "ValidatingWebhook|Validate" | head -20
Length of output: 1333
🏁 Script executed:
# Search for kubebuilder CEL validation rules (x-kubernetes-validations)
rg -n "x-kubernetes-validations|kubebuilder:validation:Minimum" api/
Length of output: 47
Add validation to reject non-positive committed amounts.
Amount currently accepts zero and negative quantities (e.g. -1Gi), and the CRD has no validation rule enforcing positivity. The consumer in internal/scheduling/reservations/commitments/state.go:226 forwards cr.Spec.Amount.Value() to CommitmentState.TotalMemoryBytes without validation, allowing invalid amounts to reach scheduling state.
Because Amount is a resource.Quantity (serialized as int-or-string), a plain +kubebuilder:validation:Minimum marker will not apply to it; add a CEL x-kubernetes-validations rule on the field to reject non-positive values at the API boundary, or validate in the consumer before constructing CommitmentState.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/v1alpha1/committed_resource_types.go` around lines 57 - 61, the Amount
field on the CRD currently allows zero/negative quantities; add API-level
validation to reject non-positive values by annotating the Amount field
(resource.Quantity `Amount`) with a CEL x-kubernetes-validations rule enforcing
a positive quantity (a plain +kubebuilder:validation:Minimum marker does not
apply to the int-or-string Quantity type), and/or add a defensive check in the
consumer path (the code that reads cr.Spec.Amount.Value() in
internal/scheduling/reservations/commitments/state.go before calling
CommitmentState.TotalMemoryBytes) to return an error if Amount.Value() <= 0 so
invalid CRs cannot reach scheduling state.
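For the consumer-side option, a minimal sketch of the defensive check; `validateAmount` is a hypothetical helper, and the real code would call it with `cr.Spec.Amount.Value()` before building CommitmentState:

```go
package main

import (
	"errors"
	"fmt"
)

// validateAmount is a hypothetical guard mirroring the suggested check in
// state.go: it rejects non-positive committed amounts before they can be
// copied into CommitmentState.TotalMemoryBytes.
func validateAmount(amountBytes int64) (int64, error) {
	if amountBytes <= 0 {
		return 0, errors.New("committed amount must be positive")
	}
	return amountBytes, nil
}

func main() {
	if _, err := validateAmount(0); err != nil {
		fmt.Println("rejected:", err) // zero amount is refused
	}
	v, _ := validateAmount(1280 << 20) // 1280Mi expressed in bytes
	fmt.Println("accepted bytes:", v)
}
```

Doing the check at the API boundary via CEL is still preferable, since it rejects invalid objects at admission time; the consumer-side guard is defense in depth.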
// State is the lifecycle state of the commitment.
// +kubebuilder:validation:Enum=planned;pending;guaranteed;confirmed;superseded;expired
// +kubebuilder:validation:Required
State CommitmentStatus `json:"state"`
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check whether all FromCommittedResource call sites filter Spec.State.
# Expected: Each call path should restrict processing to CommitmentStatusGuaranteed/Confirmed
# before active CommitmentState objects are produced.
set -euo pipefail
echo "FromCommittedResource call sites:"
rg -n -C8 --type=go '\bFromCommittedResource\s*\('
echo
echo "State checks in Go code:"
rg -n -C5 --type=go 'CommitmentStatus(Guaranteed|Confirmed|Planned|Pending|Superseded|Expired)|\.Spec\.State'
Length of output: 5131
🏁 Script executed:
# Get the full FromCommittedResource function implementation
sed -n '210,250p' internal/scheduling/reservations/commitments/state.go
Length of output: 1519
🏁 Script executed:
# Find all call sites of FromCommittedResource
rg -n '\bFromCommittedResource\b' --type=go
Length of output: 343
🏁 Script executed:
# Search for FromCommittedResource with various patterns
rg 'FromCommittedResource' --type=go -A 2 -B 2
echo "---"
echo "Search for any references to the function (without parentheses):"
rg 'FromCommittedResource' --type=go
Length of output: 1214
🏁 Script executed:
# Search for patterns where CommittedResource is processed
rg 'CommittedResource' --type=go | grep -E '(func|switch|if|case).*Spec\.State|State.*:=' | head -20
Length of output: 47
🏁 Script executed:
# Search for CommitmentState usage (the return type of FromCommittedResource)
rg 'CommitmentState' --type=go | head -30
Length of output: 4280
🏁 Script executed:
# Look for controllers or reconcilers handling CommittedResource
rg 'CommittedResource' --type=go -l | xargs -I {} grep -l 'controller\|reconcil' {} 2>/dev/null
Length of output: 1358
🏁 Script executed:
# Check for TODO, FIXME, or WIP markers related to commitments
rg -i '(TODO|FIXME|WIP).*commitment|commitment.*(TODO|FIXME|WIP)' --type=go
Length of output: 391
🏁 Script executed:
# Check syncer.go for FromCommittedResource usage and state filtering
cat -n internal/scheduling/reservations/commitments/syncer.go | head -150
Length of output: 5977
🏁 Script executed:
# Check controller.go to see how CommittedResource is processed
rg -n 'FromCommittedResource|Spec\.State' internal/scheduling/reservations/commitments/controller.go -A 5 -B 5
Length of output: 47
🏁 Script executed:
# Search for CommittedResource CRD creation and usage in controller
grep -n 'CommittedResource' internal/scheduling/reservations/commitments/controller.go | head -20
Length of output: 1481
🏁 Script executed:
# Search for where CommittedResource objects flow
rg 'v1alpha1\.CommittedResource|CommittedResource\{' --type=go -B 3 -A 3 | head -80
Length of output: 6861
🏁 Script executed:
# Check if FromCommittedResource might be called indirectly or is simply unused
rg '&CommitmentState\{|CommitmentState\{' --type=go | grep -v test | head -20
Length of output: 601
🏁 Script executed:
# Search for any registration of FromCommittedResource as a function pointer
rg 'FromCommittedResource' --type=go
# Check the entire state.go file for context on FromCommittedResource
wc -l internal/scheduling/reservations/commitments/state.go
# Look for any interface definitions or function type assignments
rg 'func\s*\(' --type=go internal/scheduling/reservations/commitments/state.go | grep -A 1 -B 1 'CommittedResource'
Length of output: 393
🏁 Script executed:
# Check if there's a controller.go that reconciles CommittedResource objects
grep -n 'CommittedResource' internal/scheduling/reservations/commitments/controller.go | head -5
# Look at the full picture: where are CommittedResource objects coming from?
rg 'watch.*CommittedResource|For.*CommittedResource' --type=go
Length of output: 1174
Add state filtering to FromCommittedResource before function is called.
The function FromCommittedResource at internal/scheduling/reservations/commitments/state.go:212 currently converts any CommittedResource to CommitmentState without validating the Spec.State field. The API specification clearly requires that only guaranteed and confirmed commitments create active Reservation slots. Although this function has no current call sites, it should enforce this constraint to prevent misuse when integrated.
Add a guard to reject planned, pending, superseded, and expired states:
if cr.Spec.State != v1alpha1.CommitmentStatusGuaranteed && cr.Spec.State != v1alpha1.CommitmentStatusConfirmed {
return nil, errors.New("only guaranteed and confirmed commitments are supported")
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/v1alpha1/committed_resource_types.go` around lines 88 - 91, Add a state
guard inside the FromCommittedResource function to reject non-active commitment
states: check the incoming CommittedResource's cr.Spec.State and return an error
unless it equals v1alpha1.CommitmentStatusGuaranteed or
v1alpha1.CommitmentStatusConfirmed; this ensures FromCommittedResource only
converts guaranteed/confirmed commitments into CommitmentState and rejects
planned, pending, superseded, and expired values.
Test Coverage Report
Test Coverage 📊: 70.0%
…tion (#758)

## Summary

- **Bug**: In `reconcileAllocations`, when stale VM allocations are removed from the spec (specChanged=true), the status patch silently becomes a no-op. This happens because `old = res.DeepCopy()` was called AFTER `res.Status.CommittedResourceReservation.Allocations = newStatusAllocations`, making `old` and `res` identical for the status fields. The subsequent `client.MergeFrom(old)` patch contained no status diff.
- **Found by**: Weekly automated review of recent changes (#726, #730 refactors)
- **Fix**: Move the `res.DeepCopy()` call before the status re-application so the merge-from base correctly reflects the pre-update state, producing the intended status diff.

## Impact

When a committed-resource reservation has stale allocations removed (VMs that left the hypervisor), the status.allocations field was not being updated to reflect verified allocations. This could cause the usage reconciler to work with stale allocation data and prevent accurate tracking of committed resource usage.

## Test plan

- [x] Existing `TestReconcileAllocations_HypervisorCRDPath` tests pass
- [x] All commitments package tests pass
- [ ] Verify in staging that status.allocations is updated when stale allocations are removed from spec

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Bug Detective <claude@anthropic.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Introduces the CommittedResource CRD for commitments of various resource types (memory, cpu, ...). This CRD is the API for configuring commitments in Cortex.
Related: https://github.wdf.sap.corp/cc/secrets/pull/20201