Original source: https://input.scs.community/findings-scs-compliant-yaook#; written by Friedrich Zahn.
# How to get from a Quickstart Yaook install to an SCS-IaaS compliant cloud
Yaook Quickstart: https://docs.yaook.cloud/user/guides/quickstart-guide/index.html
It is currently the only human-readable instruction set on how to install Yaook, so potential customers are likely to use it as their starting point to explore whether Yaook can do what they need.
## Covered by the Quickstart
Infrastructure provided:
- Ceph
OpenStack services installed:
- Keystone (identity service)
- Glance (image service)
- Neutron (networking)
- Nova (compute service)
- Horizon (dashboard)
## Missing for SCS-IaaS v5.1 compliance
### Services as defined in SCS-0123-v1
| Mandatory API | Corresponding OpenStack service | Description |
|---|---|---|
| block-storage | Cinder | Block Storage service |
| load-balancer | Octavia | Load-balancer service |
| s3 | S3 API object storage | Object Storage service |
### Resolution
#### Cinder
Working solution: the manifests currently deployed in Yaook on f1a: https://gitlab.com/gerbsen/yaook-in-a-box/-/merge_requests/3
There are some manifests to start from:
https://docs.yaook.cloud/user/explanations/examples/cinder.html
#### Octavia
There is
- https://docs.yaook.cloud/user/guides/octavia-operator.html
  - used this and the dd8 OctaviaDeployment for a relatively straightforward install of the Octavia service
  - the DB & MQ need a storage class name to bind their PVCs; stuck ReplicaSets and PVCs have to be deleted manually
  - the cds operator must be installed! ↦ https://gitlab.com/yaook/operator/-/merge_requests/3159
- https://gitlab.com/uhurutec/stack/yaook-dev-tools/-/tree/main/src/yaook_dev_tools/install_octavia?ref_type=heads
  - partially tested; hard to make work when the OpenStack APIs are not easily accessible (it tries to use keystone.yaook.cloud)
  - requires an OctaviaDeployment to be present
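The stuck ReplicaSets and PVCs mentioned above can be removed with plain kubectl; a sketch (the resource names in angle brackets are placeholders for whatever is actually stuck):

```shell
# list what is stuck in the yaook namespace
kubectl -n yaook get replicasets,pvc

# delete the stuck objects by name (placeholders)
kubectl -n yaook delete replicaset <stuck-replicaset>
kubectl -n yaook delete pvc <stuck-pvc>
```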
#### s3
Swift is not supported by Yaook, but Rook has Swift emulation built in. However, it is not trivial to set up; see
HowTo Rook Swift+S3 emulation.md
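For plain s3 (without Swift emulation), a Rook CephObjectStore is the usual starting point; Rook then exposes an RGW service named `rook-ceph-rgw-<store-name>`, which matches the endpoint URL used in the backup configuration below. A minimal sketch (the pool sizes and the store name `s3-service` are assumptions, not taken from a tested deployment):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: s3-service
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    # plain HTTP inside the cluster; add sslCertificateRef for TLS
    port: 80
    instances: 1
```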
#### Volume Backups
Backing up volumes that live in Ceph has so far never worked in Yaook, due to missing credentials in the backup service pod: https://gitlab.com/yaook/operator/-/merge_requests/3145
##### S3 Backend
Create a bucket somewhere accessible from a Yaook pod and put the credentials into a Secret as described below.
“Working” CinderDeployment.spec.backups:
```yaml
backup:
  cindervolumebackup:
    cinderConfig:
      DEFAULT:
        backup_driver: cinder.backup.drivers.s3.S3BackupDriver
        backup_s3_endpoint_url: https://rook-ceph-rgw-s3-service.rook-ceph.svc
        backup_s3_verify_ssl: false
      oslo_messaging_rabbit:
        heartbeat_in_pthread: false
        heartbeat_rate: 3
        rabbit_stream_fanout: true
        rabbit_transient_quorum_queue: true
    cinderSecrets:
      - items:
          - key: backup_s3_store_bucket
            path: /DEFAULT/backup_s3_store_bucket
          - key: backup_s3_store_access_key
            path: /DEFAULT/backup_s3_store_access_key
          - key: backup_s3_store_secret_key
            path: /DEFAULT/backup_s3_store_secret_key
        secretName: cinder-volume-backup
    replicas: 1
    scheduleRuleWhenUnsatisfiable: ScheduleAnyway
    terminationGracePeriod: 3600
```
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cinder-volume-backup
  namespace: yaook
type: Opaque
data:
  backup_s3_store_bucket: <bucket in b64>
  backup_s3_store_access_key: <access key in b64>
  backup_s3_store_secret_key: <secret key in b64>
```
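The base64 values for the Secret's `data` fields can be generated with a few lines of Python (the credential values here are placeholders, not real credentials):

```python
import base64

def b64(value: str) -> str:
    """Encode a credential string the way Kubernetes Secret data expects it."""
    return base64.b64encode(value.encode()).decode()

# placeholder credentials -- substitute the real bucket name and keys
creds = {
    "backup_s3_store_bucket": "cinder-backups",
    "backup_s3_store_access_key": "ACCESSKEY",
    "backup_s3_store_secret_key": "SECRETKEY",
}

for key, value in creds.items():
    print(f"  {key}: {b64(value)}")
```

Alternatively, `kubectl create secret generic cinder-volume-backup --from-literal=…` does the encoding for you.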
Together with the fix from !3145 above, this gets backups into the state “creating” without anything crashing; however, they currently remain stuck in that state.
### Flavors as defined in SCS-0103-v1
| Mandatory name | vCPUs | vCPU type | RAM [GiB] | Root disk [GB] | Disk type |
|---|---|---|---|---|---|
| SCS-1V-4 | 1 | shared-core | 4 | ||
| SCS-2V-8 | 2 | shared-core | 8 | ||
| SCS-4V-16 | 4 | shared-core | 16 | ||
| SCS-4V-16-100s | 4 | shared-core | 16 | 100 | ssd |
| SCS-8V-32 | 8 | shared-core | 32 | ||
| SCS-1V-2 | 1 | shared-core | 2 | ||
| SCS-2V-4 | 2 | shared-core | 4 | ||
| SCS-2V-4-20s | 2 | shared-core | 4 | 20 | ssd |
| SCS-4V-8 | 4 | shared-core | 8 | ||
| SCS-8V-16 | 8 | shared-core | 16 | ||
| SCS-16V-32 | 16 | shared-core | 32 | ||
| SCS-1V-8 | 1 | shared-core | 8 | ||
| SCS-2V-16 | 2 | shared-core | 16 | ||
| SCS-4V-32 | 4 | shared-core | 32 | ||
| SCS-1L-1 | 1 | crowded-core | 1 | ||
The flavors with an `s` suffix (e.g. SCS-4V-16-100s) carry the implied requirement to provide host-local SSD storage to instances, see also https://docs.scs.community/standards/scs-0110-v1-ssd-flavors.
### Resolution
#### Flavor creation
`openstack-flavor-manager` (run from within a `yaookctl openstack shell`) works without any apparent issues:
```shell
yaookctl openstack shell --image registry.gitlab.com/toothstone/yaookctl-scs-check
```

```shell
python3 env-to-clouds.py && pip install openstack-flavor-manager && openstack-flavor-manager --cloud=yaook
```
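`env-to-clouds.py` ships inside the shell image; presumably it just turns the `OS_*` environment variables into a clouds.yaml entry named `yaook`. A minimal sketch of that idea (the function name and the exact key mapping are my assumptions, not the real script):

```python
import json
import os

# Mapping from OS_* environment variables to clouds.yaml auth keys
AUTH_KEYS = {
    "OS_AUTH_URL": "auth_url",
    "OS_USERNAME": "username",
    "OS_PASSWORD": "password",
    "OS_PROJECT_NAME": "project_name",
    "OS_USER_DOMAIN_NAME": "user_domain_name",
    "OS_PROJECT_DOMAIN_NAME": "project_domain_name",
}

def clouds_from_env(environ) -> dict:
    """Build a clouds.yaml structure for a cloud called "yaook"."""
    auth = {dst: environ[src] for src, dst in AUTH_KEYS.items() if src in environ}
    return {"clouds": {"yaook": {
        "auth": auth,
        "region_name": environ.get("OS_REGION_NAME", "RegionOne"),
    }}}

if __name__ == "__main__":
    # YAML 1.2 is a superset of JSON, so JSON output is a valid clouds.yaml
    with open("clouds.yaml", "w") as f:
        json.dump(clouds_from_env(os.environ), f, indent=2)
```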
#### Local SSD storage flavors (SCS-0110-v1)
nova-compute is by default able to provide local, ephemeral root disks.
To group hypervisors with SSDs and schedule the SSD flavors only there, host aggregates can be used: https://docs.openstack.org/nova/latest/admin/aggregates.html
It might be helpful to provide some additional tooling to manage the hypervisor–aggregate–flavor relationships. `openstack-flavor-manager` gives the flavors the extra spec `scs:disk0=ssd`, which could be useful for automated management of these flavors.
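A hedged sketch of the aggregate wiring with the standard client, using the `AggregateInstanceExtraSpecsFilter` convention (the aggregate name, property key, and hypervisor name are placeholders):

```shell
# create an aggregate for SSD-equipped hypervisors and tag it
openstack aggregate create ssd-hosts
openstack aggregate set --property ssd=true ssd-hosts
openstack aggregate add host ssd-hosts <ssd-hypervisor>

# pin the SSD flavors to that aggregate (requires the
# AggregateInstanceExtraSpecsFilter to be enabled in nova-scheduler)
openstack flavor set --property aggregate_instance_extra_specs:ssd=true SCS-4V-16-100s
openstack flavor set --property aggregate_instance_extra_specs:ssd=true SCS-2V-4-20s
```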
### Images as defined in SCS-0104-v1
```yaml
images:
  # mandatory
  - name: "Ubuntu 24.04"
    source:
      - https://cloud-images.ubuntu.com/releases/noble/
      - https://cloud-images.ubuntu.com/noble/
    status: mandatory
```
### Resolution
openstack-image-manager can be used, but https://github.com/SovereignCloudStack/standards/blob/do-not-merge/scs-compliant-yaook/Informational/openstack/images.yaml is outdated.
```yaml
---
images:
  - name: Ubuntu
    enable: true
    format: qcow2
    login: ubuntu
    password: ubuntu
    status: active
    visibility: public
    multi: false
    min_disk: 8
    min_ram: 512
    tags: []
    meta:
      architecture: x86_64
      hypervisor_type: qemu
      hw_disk_bus: scsi
      hw_rng_model: virtio
      hw_scsi_model: virtio-scsi
      hw_watchdog_action: reset
      os_distro: ubuntu
      os_purpose: generic
    replace_frequency: never
    uuid_validity: none
    provided_until: none
    versions:
      - version: '24.04'
        os_version: '24.04'
        image_description: "https://cloud-images.ubuntu.com/releases/noble/release/"
        url: https://cloud-images.ubuntu.com/releases/noble/release/ubuntu-24.04-server-cloudimg-amd64.img
        build_date: 2025-08-17
```
### Domain Manager access as defined in SCS-0302-v1
This has to be added manually, at least until Yaook supports OpenStack 2024.2 or higher.
### Resolution
I added the following to the `KeystoneDeployment.spec` (taken from Max's DD8 manifest):
```yaml
policy:
  # SCS Domain Manager policy configuration

  # Section A: OpenStack base definitions
  # The entries beginning with "base_<rule>" should be exact copies of the
  # default "identity:<rule>" definitions for the target OpenStack release.
  # They will be extended upon for the manager role below this section.
  "base_get_domain": "(role:reader and system_scope:all) or token.domain.id:%(target.domain.id)s or token.project.domain.id:%(target.domain.id)s"
  "base_list_domains": "(role:reader and system_scope:all)"
  "base_list_roles": "(role:reader and system_scope:all)"
  "base_get_role": "(role:reader and system_scope:all)"
  "base_list_users": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.domain_id)s)"
  "base_get_user": "(role:reader and system_scope:all) or (role:reader and token.domain.id:%(target.user.domain_id)s) or user_id:%(target.user.id)s"
  "base_create_user": "(role:admin and system_scope:all) or (role:admin and token.domain.id:%(target.user.domain_id)s)"
  "base_update_user": "(role:admin and system_scope:all) or (role:admin and token.domain.id:%(target.user.domain_id)s)"
  "base_delete_user": "(role:admin and system_scope:all) or (role:admin and token.domain.id:%(target.user.domain_id)s)"
  "base_list_projects": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.domain_id)s)"
  "base_get_project": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.project.domain_id)s) or project_id:%(target.project.id)s"
  "base_create_project": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.project.domain_id)s)"
  "base_update_project": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.project.domain_id)s)"
  "base_delete_project": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.project.domain_id)s)"
  "base_list_user_projects": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.user.domain_id)s) or user_id:%(target.user.id)s"
  "base_check_grant": "(role:reader and system_scope:all) or ((role:reader and domain_id:%(target.user.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:reader and domain_id:%(target.user.domain_id)s and domain_id:%(target.domain.id)s) or (role:reader and domain_id:%(target.group.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:reader and domain_id:%(target.group.domain_id)s and domain_id:%(target.domain.id)s)) and (domain_id:%(target.role.domain_id)s or None:%(target.role.domain_id)s)"
  "base_list_grants": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.user.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:reader and domain_id:%(target.user.domain_id)s and domain_id:%(target.domain.id)s) or (role:reader and domain_id:%(target.group.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:reader and domain_id:%(target.group.domain_id)s and domain_id:%(target.domain.id)s)"
  "base_create_grant": "(role:admin and system_scope:all) or ((role:admin and domain_id:%(target.user.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:admin and domain_id:%(target.user.domain_id)s and domain_id:%(target.domain.id)s) or (role:admin and domain_id:%(target.group.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:admin and domain_id:%(target.group.domain_id)s and domain_id:%(target.domain.id)s)) and (domain_id:%(target.role.domain_id)s or None:%(target.role.domain_id)s)"
  "base_revoke_grant": "(role:admin and system_scope:all) or ((role:admin and domain_id:%(target.user.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:admin and domain_id:%(target.user.domain_id)s and domain_id:%(target.domain.id)s) or (role:admin and domain_id:%(target.group.domain_id)s and domain_id:%(target.project.domain_id)s) or (role:admin and domain_id:%(target.group.domain_id)s and domain_id:%(target.domain.id)s)) and (domain_id:%(target.role.domain_id)s or None:%(target.role.domain_id)s)"
  "base_list_role_assignments": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.domain_id)s)"
  "base_list_groups": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.group.domain_id)s)"
  "base_get_group": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.group.domain_id)s)"
  "base_create_group": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.group.domain_id)s)"
  "base_update_group": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.group.domain_id)s)"
  "base_delete_group": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.group.domain_id)s)"
  "base_list_groups_for_user": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.user.domain_id)s) or user_id:%(user_id)s"
  "base_list_users_in_group": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.group.domain_id)s)"
  "base_remove_user_from_group": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.group.domain_id)s and domain_id:%(target.user.domain_id)s)"
  "base_check_user_in_group": "(role:reader and system_scope:all) or (role:reader and domain_id:%(target.group.domain_id)s and domain_id:%(target.user.domain_id)s)"
  "base_add_user_to_group": "(role:admin and system_scope:all) or (role:admin and domain_id:%(target.group.domain_id)s and domain_id:%(target.user.domain_id)s)"

  # Section B: Domain Manager Extensions

  # classify domain managers with a special role
  "is_domain_manager": "role:manager"

  # specify a rule that whitelists roles which domain admins are permitted
  # to assign and revoke within their domain
  "is_domain_managed_role": "'member':%(target.role.name)s or 'load-balancer_member':%(target.role.name)s or 'creator':%(target.role.name)s or 'reader':%(target.role.name)s"

  # allow domain admins to retrieve their own domain (does not need changes)
  "identity:get_domain": "rule:base_get_domain or rule:admin_required"

  # list_domains is needed for GET /v3/domains?name=... requests
  # this is mandatory for things like
  # `create user --domain $DOMAIN_NAME $USER_NAME` to correctly discover
  # domains by name
  "identity:list_domains": "rule:is_domain_manager or rule:base_list_domains or rule:admin_required"

  # list_roles is needed for GET /v3/roles?name=... requests
  # this is mandatory for things like `role add ... $ROLE_NAME` to correctly
  # discover roles by name
  "identity:list_roles": "rule:is_domain_manager or rule:base_list_roles or rule:admin_required"

  # get_role is needed for GET /v3/roles/{role_id} requests
  # this is mandatory for the OpenStack SDK to properly process role assignments
  # which are issued by role id instead of name
  "identity:get_role": "(rule:is_domain_manager and rule:is_domain_managed_role) or rule:base_get_role or rule:admin_required"

  # allow domain admins to manage users within their domain
  "identity:list_users": "(rule:is_domain_manager and token.domain.id:%(target.domain_id)s) or rule:base_list_users or rule:admin_required"
  "identity:get_user": "(rule:is_domain_manager and token.domain.id:%(target.user.domain_id)s) or rule:base_get_user or rule:admin_required"
  "identity:create_user": "(rule:is_domain_manager and token.domain.id:%(target.user.domain_id)s) or rule:base_create_user or rule:admin_required"
  "identity:update_user": "(rule:is_domain_manager and token.domain.id:%(target.user.domain_id)s) or rule:base_update_user or rule:admin_required"
  "identity:delete_user": "(rule:is_domain_manager and token.domain.id:%(target.user.domain_id)s) or rule:base_delete_user or rule:admin_required"

  # allow domain admins to manage projects within their domain
  "identity:list_projects": "(rule:is_domain_manager and token.domain.id:%(target.domain_id)s) or rule:base_list_projects or rule:admin_required"
  "identity:get_project": "(rule:is_domain_manager and token.domain.id:%(target.project.domain_id)s) or rule:base_get_project or rule:admin_required"
  "identity:create_project": "(rule:is_domain_manager and token.domain.id:%(target.project.domain_id)s) or rule:base_create_project or rule:admin_required"
  "identity:update_project": "(rule:is_domain_manager and token.domain.id:%(target.project.domain_id)s) or rule:base_update_project or rule:admin_required"
  "identity:delete_project": "(rule:is_domain_manager and token.domain.id:%(target.project.domain_id)s) or rule:base_delete_project or rule:admin_required"
  "identity:list_user_projects": "(rule:is_domain_manager and token.domain.id:%(target.user.domain_id)s) or rule:base_list_user_projects or rule:admin_required"

  # allow domain managers to manage role assignments within their domain
  # (restricted to specific roles by the 'is_domain_managed_role' rule)
  #
  # project-level role assignment to user within domain
  "is_domain_user_project_grant": "token.domain.id:%(target.user.domain_id)s and token.domain.id:%(target.project.domain_id)s"
  # project-level role assignment to group within domain
  "is_domain_group_project_grant": "token.domain.id:%(target.group.domain_id)s and token.domain.id:%(target.project.domain_id)s"
  # domain-level role assignment to group
  "is_domain_level_group_grant": "token.domain.id:%(target.group.domain_id)s and token.domain.id:%(target.domain.id)s"
  # domain-level role assignment to user
  "is_domain_level_user_grant": "token.domain.id:%(target.user.domain_id)s and token.domain.id:%(target.domain.id)s"
  "domain_manager_grant": "rule:is_domain_manager and (rule:is_domain_user_project_grant or rule:is_domain_group_project_grant or rule:is_domain_level_group_grant or rule:is_domain_level_user_grant)"
  "identity:check_grant": "rule:domain_manager_grant or rule:base_check_grant or rule:admin_required"
  "identity:list_grants": "rule:domain_manager_grant or rule:base_list_grants or rule:admin_required"
  "identity:create_grant": "(rule:domain_manager_grant and rule:is_domain_managed_role) or rule:base_create_grant or rule:admin_required"
  "identity:revoke_grant": "(rule:domain_manager_grant and rule:is_domain_managed_role) or rule:base_revoke_grant or rule:admin_required"
  "identity:list_role_assignments": "(rule:is_domain_manager and token.domain.id:%(target.domain_id)s) or rule:base_list_role_assignments or rule:admin_required"

  # allow domain managers to manage groups within their domain
  "identity:list_groups": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s) or (role:reader and system_scope:all) or rule:base_list_groups or rule:admin_required"
  "identity:get_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s) or (role:reader and system_scope:all) or rule:base_get_group or rule:admin_required"
  "identity:create_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s) or rule:base_create_group or rule:admin_required"
  "identity:update_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s) or rule:base_update_group or rule:admin_required"
  "identity:delete_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s) or rule:base_delete_group or rule:admin_required"
  "identity:list_groups_for_user": "(rule:is_domain_manager and token.domain.id:%(target.user.domain_id)s) or rule:base_list_groups_for_user or rule:admin_required"
  "identity:list_users_in_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s) or rule:base_list_users_in_group or rule:admin_required"
  "identity:remove_user_from_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s and token.domain.id:%(target.user.domain_id)s) or rule:base_remove_user_from_group or rule:admin_required"
  "identity:check_user_in_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s and token.domain.id:%(target.user.domain_id)s) or rule:base_check_user_in_group or rule:admin_required"
  "identity:add_user_to_group": "(rule:is_domain_manager and token.domain.id:%(target.group.domain_id)s and token.domain.id:%(target.user.domain_id)s) or rule:base_add_user_to_group or rule:admin_required"
```
So far I have not been able to create a verifiably working domain manager user; openstackclient is, as usual, poorly documented in this area.
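For reference, the commands I would expect to create such a user look roughly like this (the domain and user names are placeholders; per SCS-0302-v1 the role must be named `manager` and assigned domain-scoped):

```shell
# create the manager role if it does not exist yet
openstack role create manager

# create a user in the customer domain and give it the
# domain-scoped manager role
openstack user create --domain <customer-domain> --password-prompt <manager-user>
openstack role add --user <manager-user> --user-domain <customer-domain> \
  --domain <customer-domain> manager
```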
## Passing the OpenStack Powered Compute test suite 2022.11
There are various known issues here: the OpenStack "interop" working group responsible for the test selection and tooling ("refstack") was dissolved in 2023, and there is no maintenance or support. Two of the defined tests cannot be passed with a reasonable setup.
### refstack-client
Can't be installed via the only documented `setup_env` script due to dependency conflicts.
### OSISM ansible role
:::warning This is what I did under the (wrong) assumption that OSISM included something refstack-specific in its tempest setup, which it doesn't.
If you have any other setup/approach for running tempest, do it that way! E.g. via the Yaook Tempest Operator, which by now supports loading a list of tests.
:::
Very picky regarding the host to run on:
- needs a user `dragon` in group `dragon`, both with ID 45000
- needs an installed Docker engine, with `dragon` in group `docker`
- `dragon` needs (passwordless) sudo access, e.g. via group `admin` with our ALASCA base Ubuntu play
It expects to run with admin rights, so I had to cut out parts to avoid forbidden OpenStack operations (e.g. endpoint list). It is unclear what implications that has; I supplied the image file manually and removed the network extension API reference.
It uses Tempest's dynamic-credentials mode, i.e. you hand it admin credentials and it creates users for its tests. I successfully switched it to static credentials on f1a. AFAIK SCS can't expect CSPs to hand over admin credentials, so this would need to be adapted in general.
https://gitlab.com/alasca-focis/ansible-collection-validations has a fork with my changes; adapt the credentials, flavors, and images to the cloud under test. `accounts.yml` is not copied automatically, you need to create it on the runner at `/opt/tempest/accounts.yml`.
You also need to put your SSH public key into `/home/dragon/.ssh/authorized_keys`.
Might be runnable with something like:

```shell
ansible-playbook -i runners.yml -l tempest-runner-[f1a|dd8] run.yml
```
### Parsing the refstack suite definition into Tempest regexes
To generate an `include.lst` of regexes that match the refstack list of tests against tempest's tests, I wrote this Python script:
```python
import json
import re

if __name__ == "__main__":
    with open('2022.11.json', 'r') as f:
        suite = json.load(f)

    tempest_regexes = set()

    # the "os_powered_compute" component lists the required capabilities
    required_tests = suite['components']['os_powered_compute']['capabilities']['required']

    # collect all tests from required, non-admin capabilities
    for capability in suite['capabilities']:
        if capability in required_tests and not suite['capabilities'][capability]['admin']:
            for test in suite['capabilities'][capability]['tests']:
                tempest_regexes.add(test)

    # one anchored regex per test; the trailing \[ matches the opening bracket
    # of the test's parameter id, so parameterized variants are included
    with open('include.lst', 'w') as f:
        for regex in sorted(tempest_regexes):
            f.write('^' + re.escape(regex) + '\\[\n')
```
It assumes that `'admin': true` tests (see the schema docs) are not to be included, although there is no SCS documentation on that. Since these tests are run manually by CSPs applying for certification, they should technically be able to run them with admin credentials.
I got 260 distinct tests with the above script; there are 16 more which require admin rights.
See also: https://github.com/scaleup-technologies/scs-refstack-tempest-fixes/blob/main/test-list.txt, which should be the same list (including `admin: true` tests) minus one problematic domain-related test; it has 275 entries.
### Yaook Tempest Operator patch
I wrote some patches for the Yaook Tempest Operator and its CRDs so that it can load a list of tests that was previously generated by tempest via `tempest run --list-tests`.
The MR is at https://gitlab.com/yaook/operator/-/merge_requests/3216.
The MR also contains an example manifest for a TempestJob that loads the ~275 tests defined by OpenStack Powered Compute. I hope that SCS will adopt this format going forward.
### Test results
Due to a lack of usable OpenStack instances (on f1a & dd8 I have only one user and no K8s access; northern-light has no public APIs and lacks a lot of configuration and features), there are no conclusive findings so far.
## Misc findings
- SCS tests are all "aborted" without a helpful error message when there are problems with the OpenStack connection / authentication / clouds.yaml
- ↦ https://github.com/SovereignCloudStack/standards/issues/991
- standard APIs test is aborted when no block storage is found ↦ should fail?!
  - needs to be verified; might actually have been object-store related
- key-manager-check is aborted ↦ should pass when there is no key manager
  - was due to wrong permissions on the test user
- s3 needs to be registered as "object-store" in the catalog to be recognized by the SCS tests
- ↦ https://github.com/SovereignCloudStack/standards/issues/1004
- Yaook has no documentation on the supported OpenStack services
  - https://gitlab.com/yaook/operator/-/blob/devel/yaook/assets/pinned_version.yml?ref_type=heads is probably the place to look, but a new user won't find it
- the quickstart installs the local-path PV provisioner with a single replica, so only that worker node will be used for binding PVCs and all pods with PVs will be scheduled there; this can quickly lead to disk and memory pressure on nodes with 50 GB disk and 8 GiB RAM
- `storageClassName: local-path` has to be specified for the DB and message-queue infra services, otherwise the PVCs won't bind
  - the DD8 manifests don't have this; maybe we should add a default to the quickstart?
- openstackclient caches the catalog! Do not use interactive mode when editing services or endpoints.
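The `storageClassName` point above is plain Kubernetes PVC binding; a minimal claim that binds against the quickstart's local-path provisioner looks like this (the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-db-data
  namespace: yaook
spec:
  storageClassName: local-path  # must be set explicitly, otherwise the PVC stays Pending
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```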