
Commit 7598716

Ironic deployment guide documentation
1 parent 0493247 commit 7598716

File tree

3 files changed: +314, -11 lines changed

doc/source/configuration/index.rst

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ the various features provided.
    walled-garden
    release-train
    host-images
+   ironic
    lvm
    swap
    cephadm

doc/source/configuration/ironic.rst

Lines changed: 313 additions & 0 deletions
======
Ironic
======

Ironic networking
=================

Ironic will require the workload provisioning and cleaning networks to be
configured in ``networks.yml``.

The workload provisioning network will require an allocation pool for Ironic
Inspector and one for Neutron. The Inspector allocation pool is used to define
static addresses for baremetal nodes during inspection, and the Neutron
allocation pool is used to assign addresses dynamically during baremetal
provisioning.

.. code-block:: yaml

   # Workload provisioning network IP information.
   provision_wl_net_cidr: "172.0.0.0/16"
   provision_wl_net_allocation_pool_start: "172.0.0.4"
   provision_wl_net_allocation_pool_end: "172.0.0.6"
   provision_wl_net_inspection_allocation_pool_start: "172.0.1.4"
   provision_wl_net_inspection_allocation_pool_end: "172.0.1.250"
   provision_wl_net_neutron_allocation_pool_start: "172.0.2.4"
   provision_wl_net_neutron_allocation_pool_end: "172.0.2.250"
   provision_wl_net_neutron_gateway: "172.0.1.1"

The cleaning network will also require a Neutron allocation pool.

.. code-block:: yaml

   # Cleaning network IP information.
   cleaning_net_cidr: "172.1.0.0/16"
   cleaning_net_allocation_pool_start: "172.1.0.4"
   cleaning_net_allocation_pool_end: "172.1.0.6"
   cleaning_net_neutron_allocation_pool_start: "172.1.2.4"
   cleaning_net_neutron_allocation_pool_end: "172.1.2.250"
   cleaning_net_neutron_gateway: "172.1.0.1"

OpenStack Config
================

Overcloud Ironic will be deployed with a TFTP server listening on the control
plane, which provides the Ironic Python Agent (IPA) kernel and ramdisk to
baremetal nodes that PXE boot. Since the TFTP server listens exclusively on
the internal API network, it is necessary for a route to exist between the
provisioning/cleaning networks and the internal API network. This can be
achieved by defining a Neutron router using `OpenStack Config
<https://github.com/stackhpc/openstack-config>`_.

It is not necessary to define the provision and cleaning networks in this
configuration as they will be generated during

.. code-block:: console

   kayobe overcloud post configure

The OpenStack config file could resemble the network, subnet and router
configuration shown below:

.. code-block:: yaml

   networks:
     - "{{ openstack_network_internal }}"

   openstack_network_internal:
     name: "internal-net"
     project: "admin"
     provider_network_type: "vlan"
     provider_physical_network: "physnet1"
     provider_segmentation_id: 458
     shared: false
     external: true

   subnets:
     - "{{ openstack_subnet_internal }}"

   openstack_subnet_internal:
     name: "internal-net"
     project: "admin"
     cidr: "10.10.3.0/24"
     enable_dhcp: true
     allocation_pool_start: "10.10.3.3"
     allocation_pool_end: "10.10.3.3"

   openstack_routers:
     - "{{ openstack_router_ironic }}"

   openstack_router_ironic:
     - name: ironic
       project: admin
       interfaces:
         - net: "provision-net"
           subnet: "provision-net"
           portip: "172.0.1.1"
         - net: "cleaning-net"
           subnet: "cleaning-net"
           portip: "172.1.0.1"
       network: internal-net

To provision baremetal nodes in Nova you will also need to define a flavour
specific to that type of baremetal host. Replace the custom resource
``resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>`` placeholder with the
resource class of your baremetal hosts; you will also need this later when
configuring the baremetal-compute inventory.

.. code-block:: yaml

   openstack_flavors:
     - "{{ openstack_flavor_baremetal_A }}"
   # Bare metal compute node.
   openstack_flavor_baremetal_A:
     name: "baremetal-A"
     ram: 1048576
     disk: 480
     vcpus: 256
     extra_specs:
       "resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>": 1
       "resources:VCPU": 0
       "resources:MEMORY_MB": 0
       "resources:DISK_GB": 0

Enabling conntrack
==================

UEFI booting requires conntrack_helper to be configured on the Ironic Neutron
router, because TFTP traffic uses UDP and would otherwise be dropped. You will
need to define some extension drivers in ``neutron.yml`` to ensure conntrack
is enabled in the Neutron server.

.. code-block:: yaml

   kolla_neutron_ml2_extension_drivers:
     - port_security
     - conntrack_helper
     - dns_domain_ports

The Neutron L3 agent also requires conntrack to be set as an extension in
``kolla/config/neutron/l3_agent.ini``.

.. code-block:: ini

   [agent]
   extensions = conntrack_helper

It is also required to load the conntrack kernel modules ``nf_nat_tftp``,
``nf_conntrack`` and ``nf_conntrack_tftp`` on the network nodes. You can load
these modules using modprobe or define them in ``/etc/modules-load.d``.
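
For example, the modules could be loaded immediately with modprobe and
persisted across reboots with a ``modules-load.d`` snippet (a minimal sketch;
the file name ``conntrack-tftp.conf`` is arbitrary):

.. code-block:: console

   # Load the modules on the network nodes now.
   sudo modprobe nf_nat_tftp
   sudo modprobe nf_conntrack
   sudo modprobe nf_conntrack_tftp

   # Persist them across reboots.
   sudo tee /etc/modules-load.d/conntrack-tftp.conf <<EOF
   nf_nat_tftp
   nf_conntrack
   nf_conntrack_tftp
   EOF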

The Ironic Neutron router will also need to be configured to use
conntrack_helper.

.. code-block:: json

   "conntrack_helpers": {
       "protocol": "udp",
       "port": 69,
       "helper": "tftp"
   }

Currently it is not possible to add this helper via the OpenStack CLI. To add
it to the Ironic router you will need to make a request to the Neutron API
directly, for example via cURL.

.. code-block:: console

   curl -g -i -X POST \
     http://<internal_api_vip>:9696/v2.0/routers/<ironic_router_uuid>/conntrack_helpers \
     -H "Accept: application/json" \
     -H "User-Agent: openstacksdk/2.0.0 keystoneauth1/5.4.0 python-requests/2.31.0 CPython/3.9.18" \
     -H "X-Auth-Token: <issued_token>" \
     -d '{ "conntrack_helper": {"helper": "tftp", "protocol": "udp", "port": 69 } }'
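
The token and router UUID placeholders can be obtained with the OpenStack CLI,
for example (assuming the router created earlier is named ``ironic``):

.. code-block:: console

   # Token to use for <issued_token>.
   openstack token issue -f value -c id

   # UUID to use for <ironic_router_uuid>.
   openstack router show ironic -f value -c id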

TFTP server
===========

By default the Ironic TFTP server (the ``ironic_pxe`` container) names the
UEFI boot file ``ipxe-x86_64.efi`` instead of ``ipxe.efi``, meaning no boot
file will be sent during the PXE boot process in the default configuration.

For now this is worked around by renaming the boot file inside the
``ironic_pxe`` container manually.

.. code-block:: console

   docker exec ironic_pxe mv /tftpboot/ipxe-x86_64.efi /tftpboot/ipxe.efi

Baremetal inventory
===================

To begin enrolling nodes you will need to define them in the hosts file.

.. code-block:: ini

   [r1]
   hv1 ipmi_address=10.1.28.16
   hv2 ipmi_address=10.1.28.17

   [baremetal-compute:children]
   r1

The baremetal nodes will also require some extra variables to be defined in
the group_vars for your rack; these should include the BMC credentials and
the Ironic driver you wish to use.

.. code-block:: yaml

   ironic_driver: redfish

   ironic_driver_info:
     redfish_system_id: "{{ ironic_redfish_system_id }}"
     redfish_address: "{{ ironic_redfish_address }}"
     redfish_username: "{{ ironic_redfish_username }}"
     redfish_password: "{{ ironic_redfish_password }}"
     redfish_verify_ca: "{{ ironic_redfish_verify_ca }}"
     ipmi_address: "{{ ipmi_address }}"

   ironic_properties:
     capabilities: "{{ ironic_capabilities }}"

   ironic_resource_class: "example_resource_class"
   ironic_redfish_system_id: "/redfish/v1/Systems/System.Embedded.1"
   ironic_redfish_verify_ca: "{{ inspector_rule_var_redfish_verify_ca }}"
   ironic_redfish_address: "{{ ipmi_address }}"
   ironic_redfish_username: "{{ inspector_redfish_username }}"
   ironic_redfish_password: "{{ inspector_redfish_password }}"
   ironic_capabilities: "boot_option:local,boot_mode:uefi"

The typical layout for baremetal nodes is separated by rack; for instance, in
rack 1 we have the configuration shown above, where the BMC addresses are
defined per node and Redfish information such as the username, password and
system ID are defined for the rack as a whole.

You can add more racks to the deployment by replicating the rack 1 example and
adding each as an entry to the baremetal-compute group, as shown below.
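
For example, a second rack ``r2`` could be added alongside ``r1`` (the
hostnames and BMC addresses here are hypothetical):

.. code-block:: ini

   [r1]
   hv1 ipmi_address=10.1.28.16
   hv2 ipmi_address=10.1.28.17

   [r2]
   hv3 ipmi_address=10.1.29.16
   hv4 ipmi_address=10.1.29.17

   [baremetal-compute:children]
   r1
   r2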

Node enrollment
===============

When nodes are defined in the inventory you can begin enrolling them by
invoking the Kayobe command

.. code-block:: console

   (kayobe) $ kayobe baremetal compute register

Following registration, the baremetal nodes can be inspected and made
available for provisioning by Nova via the Kayobe commands

.. code-block:: console

   (kayobe) $ kayobe baremetal compute inspect
   (kayobe) $ kayobe baremetal compute provide

Baremetal hypervisors
=====================

To deploy baremetal hypervisor nodes it is necessary to split out the nodes
you wish to use as hypervisors and add them to the Kayobe ``compute`` group,
to ensure the hypervisors are configured as compute nodes during host
configure.

.. code-block:: ini

   [r1]
   hv1 ipmi_address=10.1.28.16

   [r1-hyp]
   hv2 ipmi_address=10.1.28.17

   [r1:children]
   r1-hyp

   [compute:children]
   r1-hyp

   [baremetal-compute:children]
   r1

The hypervisor nodes will also need to define hypervisor-specific variables,
such as the image to be used, the network to provision on and the
availability zone. These can be defined under group_vars.

.. code-block:: yaml

   hypervisor_image: "37825714-27da-48e0-8887-d609349e703b"
   key_name: "testing"
   availability_zone: "nova"
   baremetal_flavor: "baremetal-A"
   baremetal_network: "rack-net"
   auth:
     auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
     username: "{{ lookup('env', 'OS_USERNAME') }}"
     password: "{{ lookup('env', 'OS_PASSWORD') }}"
     project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"

To begin deploying these nodes as instances you will need to run the Ansible
playbook ``deploy-baremetal-instance.yml``.

.. code-block:: console

   (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/deploy-baremetal-instance.yml

This playbook will update network allocations with the new baremetal
hypervisor IP addresses, create a Neutron port corresponding to each address
and deploy an image on each baremetal instance.
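
Conceptually, the per-node steps are roughly equivalent to the following CLI
commands (an illustrative sketch only; the port name, fixed IP and server name
are hypothetical, and the playbook remains the source of truth):

.. code-block:: console

   # Create a port on the baremetal network with the allocated address.
   openstack port create --network rack-net \
     --fixed-ip ip-address=10.1.28.100 hv2-port

   # Deploy the hypervisor image onto the baremetal node via Nova.
   openstack server create --flavor baremetal-A \
     --image 37825714-27da-48e0-8887-d609349e703b \
     --port hv2-port --key-name testing \
     --availability-zone nova hv2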

When the playbook has finished and the rack is successfully imaged, the nodes
can be configured with ``kayobe overcloud host configure``, and Kolla compute
services can be deployed with ``kayobe overcloud service deploy``.
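
For example (the ``--limit`` pattern is illustrative and assumes the
hypervisor group name used above):

.. code-block:: console

   (kayobe) $ kayobe overcloud host configure --limit r1-hyp
   (kayobe) $ kayobe overcloud service deploy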

etc/kayobe/ansible/deploy-baremetal-instance.yml

Lines changed: 0 additions & 11 deletions
@@ -49,17 +49,6 @@
   hosts: compute
   gather_facts: false
   connection: local
-  vars:
-    hypervisor_image: "37825714-27da-48e0-8887-d609349e703b"
-    key_name: "testing"
-    availability_zone: "nova"
-    baremetal_flavor: "baremetal-A"
-    baremetal_network: "rack-net"
-    auth:
-      auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
-      username: "{{ lookup('env', 'OS_USERNAME') }}"
-      password: "{{ lookup('env', 'OS_PASSWORD') }}"
-      project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"
   tasks:
     - name: Show baremetal node
       ansible.builtin.shell:
