# Cephalocon 2024 Demo Materials #184

phlogistonjohn started this conversation in Show and tell
Replies: 1 comment

Pre-recorded Demo Video File - no audio, the video is intended to be narrated live.
Background, scripts, and files related to John's presentation for Cephalocon 2024.
NOTE: The demonstration recordings prepared for SDC 2024 and Cephalocon 2024 are similar but not the same. In particular, I felt that Cephalocon attendees are more likely to get hands-on with the feature, so this session includes some basic background and debugging information not found in the SDC recording. However, most of the cluster setup is the same.
## Prerequisites
- A Ceph cluster can be installed just prior to this procedure.
- The Windows client may need additional time to join AD. We used the AD DC from the samba-container project with a default configuration; one way to launch it is sketched below.
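This is a minimal sketch of starting such a DC with podman, assuming the samba-container project's published `samba-ad-server` image; the container name, privileges, and networking flags here are assumptions and may need adjusting for your lab:

```sh
# Assumed invocation of the samba-container AD DC image with its built-in
# default domain; tweak privileges/networking to match your environment.
podman run -d --name demo-dc --privileged --network=host \
    quay.io/samba.org/samba-ad-server:latest
```

With the default configuration this DC provides the demo domain and users (such as `domain1\bwayne`) referenced later in this walkthrough.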
Copy the contents of `cluster2.yaml` to the ceph admin node. Content below.

## Setup Phase
### SMB Module and filesystem
```sh
ceph mgr module enable smb
ceph fs volume create cephfs
ceph fs subvolumegroup create cephfs demos
for sub in uv1 uv2 domv1 domv2 ; do ceph fs subvolume create cephfs --group-name=demos --sub-name=$sub --mode=0777 ; done
```
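If you want to double-check this step before moving on, the enabled module and the new volume should show up in the listings:

```sh
# The smb module should appear as enabled and cephfs should be listed.
ceph mgr module ls | grep smb
ceph fs volume ls
```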
### Host labeling

```sh
ceph orch host label add ceph0 first
```

We label the host nodes this way to make it easy to determine where the smb services will be placed later on. This is not always going to be necessary for general use.
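To confirm the label was applied, the host listing shows labels per host:

```sh
# The LABELS column for ceph0 should now include "first".
ceph orch host ls
```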
## First Cluster
The first cluster is created imperatively on the command line, a workflow that should feel familiar to anyone who has used Ceph's NFS module before. Under the hood the smb module records these commands as a declarative set of resource descriptions and deploys the services via Cephadm orchestration.
- `ceph smb cluster ls` - there should be no clusters yet
- `ceph smb cluster create starter user --define-user-pass=test%D3m0123 --define-user-pass=test2%0th3rD3m0 --placement='1 label:first'`
- `ceph smb share create starter share1 --cephfs-volume=cephfs --subvolume=demos/uv1 --path=/`
- `ceph smb share create starter share2 --cephfs-volume=cephfs --subvolume=demos/uv2 --path=/ --share-name='Share Two'`
- `ceph smb show`
- `ceph smb show ceph.smb.share`
- `ceph smb show ceph.smb.cluster.starter --format=yaml` - or show just one specific cluster
- `ceph orch ls` - to show the running services, including a new smb service

### Quick CLI Test
We can do a quick test using Samba's `smbclient` tool. It's fast to use, but you can skip this step or use a Windows client if desired.

- `smbclient -U 'test%D3m0123' //192.168.76.200/share1`
- `smbclient -U 'test2%0th3rD3m0' //192.168.76.200/share1`
- `smbclient -U 'test2%0th3rD3m0' //192.168.76.200/"Share Two"`
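smbclient can also run a few operations non-interactively with `-c`, which is handy for a scripted check; the uploaded file name here is arbitrary:

```sh
# Connect as the "test" user, list the share, upload a file, and list again.
smbclient -U 'test%D3m0123' //192.168.76.200/share1 \
    -c 'ls; put /etc/hostname demo-upload.txt; ls'
```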
## Second Cluster

This cluster will be installed using the "declarative method". We define everything we need to set up our cluster: cluster settings, shares, and domain join authentication info, in a single YAML file we'll call `cluster2.yaml`.

This YAML file defines a cluster that will use both Active Directory and clustering. Samba's clustering system, CTDB, will manage the supplied public IP addresses.
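The demo's exact `cluster2.yaml` is not reproduced on this page; the sketch below shows roughly what such a spec looks like using the smb module's resource format. The realm, join credentials, placement, and public address are illustrative placeholders, and the real file may differ:

```yaml
# Illustrative spec only: an AD-joined, CTDB-clustered smb cluster ("ccad"),
# its domain join credentials, and two shares backed by the demo subvolumes.
resources:
  - resource_type: ceph.smb.cluster
    cluster_id: ccad
    auth_mode: active-directory
    domain_settings:
      realm: DOMAIN1.SINK.TEST       # placeholder realm
      join_sources:
        - source_type: resource
          ref: join1-admin
    clustering: always               # enable CTDB-based clustering
    public_addrs:
      - address: 192.168.76.50/24    # public IP managed by CTDB
    placement:
      count: 3                       # example placement
  - resource_type: ceph.smb.join.auth
    auth_id: join1-admin
    auth:
      username: Administrator        # placeholder join credentials
      password: Passw0rd
  - resource_type: ceph.smb.share
    cluster_id: ccad
    share_id: cs1
    name: Cluster Share One
    cephfs:
      volume: cephfs
      subvolumegroup: demos
      subvolume: domv1
      path: /
  - resource_type: ceph.smb.share
    cluster_id: ccad
    share_id: uf1
    name: Useful Files
    cephfs:
      volume: cephfs
      subvolumegroup: demos
      subvolume: domv2
      path: /
```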
- `ceph smb apply -i - < cluster2.yaml`
- `ceph orch ls`
- `ceph smb show ceph.smb.cluster --format=yaml`
- `ceph smb show ceph.smb.cluster.ccad --format=yaml`
- `ceph orch ls` should show `3/3` daemons running and we're ready to test client access.

## Windows Client
From the Windows client, log into `\\192.168.76.50\Cluster Share One`, where the IP address is any of the public IP addresses from the YAML spec. We can read and write files to the shares.

Log into `\\192.168.76.200\share1` (using the IP of the first cluster node). This should prompt for login. Provide the credentials of one of the users defined with the `starter` cluster.
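Mapping a share can also be done from a Windows command prompt with `net use`; this sketch assumes the default samba-container AD user and an arbitrary drive letter:

```
:: Map the clustered share via one of the public addresses, authenticating
:: as a domain user (example drive letter and credentials).
net use Z: "\\192.168.76.50\Cluster Share One" 1115Rose. /user:domain1\bwayne
```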
## Taking a Deeper Look

SMB on Ceph makes use of multiple systemd services running on a single node.
Run `systemctl list-units | grep smb` to see some of them on a node that is running an smb service (see `ceph orch ps` for more info). Each service can be examined individually using `systemctl status <unit-name>`. This includes the service for the init containers and the services for the sidecar containers.

When the smb service is deployed using additional features, such as Active Directory support or true smbd level clustering, additional sidecar systemd services will be deployed. For example, run `systemctl list-units | grep smb | grep winbind` to narrow down the list of services to show the winbind sidecar. Run `systemctl status <winbind-sidecar-name>` to view details relating to the service.

Each service will emit logging to the systemd journal. Run commands such as `journalctl -u <service_name>` to view journal logging for that service. Similarly, while the container is running the logs can be fetched directly from the container engine. Run `podman logs <container_name>` to view logs for the container directly.

There are some helpful commands that one can run within a Samba container running as part of the smb service. Enter the container by running `podman exec -it <container_name> bash`. Run `net conf list` to see a view of Samba's configuration. Attempt test connections using smbclient in the container, for example: `smbclient -U 'domain1\bwayne%1115Rose.' -L //localhost` to list shares. Run `smbclient -U 'domain1\bwayne%1115Rose.' //localhost/'Useful Files'` to connect to one of the shares we defined earlier.

Finally, we touch on one important aspect of how the smb module interacts with the orchestration layer. The smb module writes configuration objects into a RADOS pool named `.smb`. When debugging it can be useful to examine these objects directly. We can do this by first entering a cephadm shell: run `cephadm shell`. Then use rados commands such as:

- `rados lspools`
- `rados ls --pool=.smb --all`
- `rados get --pool=.smb --namespace=<cluster_id> config.smb /dev/stdout`
- `rados get --pool=.smb --namespace=<cluster_id> spec.smb /dev/stdout`

These objects store JSON configuration that may be compact and can be expanded into a more human readable form using `jq` or `python3 -m json.tool`.
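For example, piping one of the objects through jq pretty-prints the stored JSON (substitute your cluster id):

```sh
# Pretty-print the cluster's stored configuration object (run inside the cephadm shell).
rados get --pool=.smb --namespace=<cluster_id> config.smb /dev/stdout | jq .
```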
## Deleting a Cluster

- `ceph smb cluster ls` to list clusters
- `ceph smb share ls starter` to list shares belonging to the starter cluster
- `ceph smb share rm starter share1` to delete the first share
- `ceph smb share rm starter share2` to delete the second share
- `ceph orch ls` to show the two clusters are still running
- `ceph smb cluster rm starter` to remove the cluster
- `ceph orch ls` to show that the first cluster has been deleted