
Add dcn_storage scenario for DCN adoption with Ceph HCI #3697

Open
fultonj wants to merge 1 commit into openstack-k8s-operators:main from fultonj:dcn_adoption_hci

Conversation

@fultonj
Contributor

@fultonj fultonj commented Feb 18, 2026

Add VM layout for DCN deployment with 3 Ceph clusters.

Changes from dcn_nostorage:

  • Increase extra_disks_size from 15G to 30G for Ceph OSDs
  • All 9 compute nodes (central, dcn1, dcn2) configured for HCI
  • Stack name is passed to adoption ceph playbook
  • Fix os-net-config to use stack-specific ctlplane network (ctlplanedcn1/ctlplanedcn2) instead of hardcoded 'ctlplane'
  • Fix tripleo-ansible-inventory template computes group condition to match DCN group names like 'osp-dcn1-compute-az1s' which contain 'compute' but not 'computes'
  • Add adoption_dcn_export.yml playbook to export central stack for DCN site deployments
  • Add adoption_dcn_update_central.yml playbook to update central glance to use ceph at the DCN sites.

This scenario pairs with data-plane-adoption's dcn_storage.yaml to deploy OSP 17.1 with Ceph at central and edge sites.

Related: openstack-k8s-operators/data-plane-adoption#1245
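
As an illustration of the inventory-group fix described above, a group name like 'osp-dcn1-compute-az1s' contains 'compute' but not the literal substring 'computes'. A minimal Ansible sketch of that condition follows; the task and fact names are hypothetical and the actual tripleo-ansible-inventory template logic may differ:

```yaml
# Hypothetical sketch: collect the DCN compute groups by selecting
# group names that contain 'compute' while rejecting any that contain
# the aggregate substring 'computes'.
- name: Collect DCN compute groups
  ansible.builtin.set_fact:
    compute_groups: >-
      {{ groups.keys() | select('search', 'compute')
         | reject('search', 'computes') | list }}
```

With this condition, 'osp-dcn1-compute-az1s' is selected while a plain 'computes' group is excluded.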

@openshift-ci
Contributor

openshift-ci bot commented Feb 18, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign amartyasinha for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fultonj fultonj marked this pull request as draft February 18, 2026 22:38
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/9b385574906f4b3da67d3dd8b1162f3c

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 58m 58s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 21m 34s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 30m 06s
✔️ cifmw-crc-podified-edpm-baremetal-minor-update SUCCESS in 1h 45m 47s
✔️ cifmw-pod-zuul-files SUCCESS in 4m 28s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 8m 43s
❌ cifmw-pod-pre-commit FAILURE in 7m 49s
✔️ cifmw-molecule-adoption_osp_deploy SUCCESS in 3m 26s

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/3e8db1cb29c840358716a3d81671f6e2

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 09m 32s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 23m 08s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 43m 06s
✔️ cifmw-crc-podified-edpm-baremetal-minor-update SUCCESS in 1h 56m 00s
✔️ cifmw-pod-zuul-files SUCCESS in 4m 36s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 8m 42s
❌ cifmw-pod-pre-commit FAILURE in 8m 06s
✔️ cifmw-molecule-adoption_osp_deploy SUCCESS in 3m 23s

@fultonj fultonj force-pushed the dcn_adoption_hci branch 2 times, most recently from a079ff6 to 0c8c064 Compare February 23, 2026 23:18
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/6a3d5961addf4f318432d51aaac1373a

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 21m 57s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 20m 02s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 36m 21s
❌ cifmw-crc-podified-edpm-baremetal-minor-update FAILURE in 2h 08m 52s
✔️ cifmw-pod-zuul-files SUCCESS in 5m 26s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 9m 40s
✔️ cifmw-pod-pre-commit SUCCESS in 8m 48s
✔️ cifmw-molecule-adoption_osp_deploy SUCCESS in 3m 40s

@fultonj fultonj marked this pull request as ready for review February 24, 2026 14:59
@fultonj fultonj requested a review from fmount February 24, 2026 15:22
- name: Check glance config status on each controller
  delegate_to: "{{ controller }}"
  become: true
  ansible.builtin.shell: |
Contributor


Not sure there's a specific reason to use this shell task, but wouldn't it be better to use ansible.builtin.stat and collect the nodes based on the stat.exists output (processed by L152)?

Contributor Author


Yes, I'll replace it with stat in my next patch set. This command used to grep the conf file to check if it was already updated with the desired glance config so NEEDS_UPDATE used to have more meaning than just file existence. Since that's gone this can now be simplified.
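
A stat-based replacement might look like the following sketch. The `controller` and `_glance_config_file` variables come from the snippets in this thread; the register name is hypothetical, and the actual patch set may differ:

```yaml
# Hypothetical sketch: replace the shell/grep existence check with
# ansible.builtin.stat and expose the result via stat.exists.
- name: Check glance config status on each controller
  delegate_to: "{{ controller }}"
  become: true
  ansible.builtin.stat:
    path: "{{ _glance_config_file }}"
  register: _glance_conf_stat

# Later tasks can then collect the nodes needing an update based on
# _glance_conf_stat.stat.exists instead of parsing shell output.
```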

ansible.builtin.shell:
  cmd: |
    set -o pipefail
    grep -E '^enabled_backends\s*=' "{{ _glance_config_file }}" | head -1 | sed 's/enabled_backends\s*=\s*//'
Contributor

@fmount fmount Feb 24, 2026


Not blocking, but if you have time, consider consolidating grep + head + sed into a single command (e.g. based on awk).
This is really a nice-to-have that you can ignore.

Contributor Author


Good call. I put the following in my next patch set.

ansible.builtin.command:
  cmd: awk -F'\s*=\s*' '/^enabled_backends\s*=/{print $2; exit}' "{{ _glance_config_file }}"

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/e6be698cf271445c919df9abf3b6a0ce

✔️ openstack-k8s-operators-content-provider SUCCESS in 27m 53s
❌ podified-multinode-edpm-deployment-crc RETRY_LIMIT in 11m 14s
❌ cifmw-crc-podified-edpm-baremetal RETRY_LIMIT in 14m 27s
❌ cifmw-crc-podified-edpm-baremetal-minor-update RETRY_LIMIT in 14m 25s
✔️ cifmw-pod-zuul-files SUCCESS in 4m 39s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 9m 24s
✔️ cifmw-pod-pre-commit SUCCESS in 8m 05s
✔️ cifmw-molecule-adoption_osp_deploy SUCCESS in 3m 27s
