
WIP Deprecated: Template service broker (local docker, UI team specific)


UPDATE: preserving this for the sake of the notes, but @spadgett has an updated (cleaner) version that gets the Broker + Catalog running & communicating. Visit it here.

Running with the Open Service Broker API

Contacts:

  • Jim Minter
  • #jmintre irc
  • jim-minter github
  • also Paul Morie, Ben Parees for service catalog

We are going to build origin from source, using Jim Minter's template service broker branch. UPDATE: as of March 22, 2017, Jim's broker branch has merged into master, so you no longer have to pull his branch. However, you will still want to follow the instructions below and set flags appropriately.

These instructions use Docker for Mac v1.13.1, not Vagrant. There are instructions for getting a Docker-based environment up and running here.

First, we need to build the appropriate release binaries and images for the server:

  # You may need Go 1.8 for the build environment
  $ export OS_BUILD_ENV_GOLANG=1.8
  # Optional, but very helpful: a little more build output
  $ export OS_DEBUG=true
  # WARN=1 is required for Mac on `make release` for now, see:
  # https://github.com/openshift/origin/issues/13464
  $ WARN=1 make release   # this takes a long time... go get coffee. and maybe a bagel. a nice bagel.
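
When the release build finishes, it can be worth a quick sanity check that the images landed locally with the expected tags (the grep is just illustrative):

  $ docker images | grep openshift/origin
  # you should see images tagged both `latest` and with the short SHA from `git log -1 --pretty=%h`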

We also need to build clients for our Mac environment. Your oc needs to know about resource types like templateinstances. If you think the oc you already have (perhaps from Homebrew) is current, you may be able to skip this.

  $ OS_BUILD_ENV_PRESERVE=_output/local/bin/darwin/ hack/env OS_BUILD_PLATFORMS=darwin/amd64 make build WHAT=cmd/oc
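
A quick way to confirm the client built (the path matches the PATH entry below):

  $ _output/local/bin/darwin/amd64/oc version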

Then edit your .bash_profile to add this new binary to your PATH:

# check whether you already have an oc; you will want to be sure which oc you are using.
$ which oc
# if you already have an oc (for example from Homebrew), remove or unlink it first:
# `brew unlink openshift-cli` (and `brew link openshift-cli` later to restore it)
#
# then put this in your .bash_profile; after the `make build WHAT=cmd/oc` above,
# the `oc` binary will be in this directory (on Mac):
PATH=$PATH:/Users/bpeterse/go/src/github.com/openshift/origin/_output/local/bin/darwin/amd64
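
After reloading your shell, confirm that the built client is the one that wins:

$ source ~/.bash_profile
$ which oc
$ oc version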

Next, we will oc cluster up and enable the template service broker in the MasterConfig:

  $ cd /path/to/openshift/origin
  # per skuznets, we won't use --version=latest here.
  # make release creates images tagged `:latest` and with the SHA of the last commit.
  # you may not want the PullAlways behavior of `:latest`, so instruct `cluster up`
  # to use the images tagged with the last SHA:
  $ oc cluster up --version="$(git log -1 --pretty=%h)" --host-config-dir=$HOME/openshift.local.config --host-data-dir=$HOME/openshift.local.etcd --host-pv-dir=$HOME/openshift.local.volumes
  # now, cluster down so we can update the config:
  $ oc cluster down
  #
  # Now, edit master-config.yaml:
  # set enableTemplateServiceBroker: true (approx line 75)
  $ vim $HOME/openshift.local.config/master/master-config.yaml
  #
  # restart the cluster from your host, using the new config.
  # if you want to start clean:
  $ oc cluster up --version="$(git log -1 --pretty=%h)" --use-existing-config --host-config-dir=$HOME/openshift.local.config
  # if you want to preserve your data between cluster down / cluster up:
  $ oc cluster up --version="$(git log -1 --pretty=%h)" --use-existing-config --host-config-dir=$HOME/openshift.local.config --host-data-dir=$HOME/openshift.local.etcd --host-pv-dir=$HOME/openshift.local.volumes
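
Before bringing the cluster back up, you can confirm the flag stuck (just a grep; adjust the path if your config lives elsewhere):

  $ grep enableTemplateServiceBroker $HOME/openshift.local.config/master/master-config.yaml
  enableTemplateServiceBroker: true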

Now, reconcile the cluster roles:

  # `oc adm` is an alias for `oadm`.
  # we did not build `cmd/oadm` when we built `cmd/oc`.
  # we could also `docker exec origin oadm policy reconcile-cluster-roles`
  $ oc login -u system:admin
  $ oc adm policy reconcile-cluster-roles
  # note: depending on your version, you may need to append --confirm for the changes to actually apply

Then create the demo project, which the rest of the provided scripts expect:

  $ oc login -u developer -p developer
  $ oc new-project demo

Now use the yaml file in the test-scripts dir to create resources:

# to do the following, you will have to be system:admin
$ oc login -u system:admin
# there are a few scripts in pkg/template/servicebroker/test-scripts
$ oc create -f pkg/template/servicebroker/test-scripts/clusterrole.yaml
# and then you should create the following:
$ oc create -f examples/sample-app/application-template-stibuild.json -n openshift
# now return to whatever user you were previously using (developer:developer):
$ oc login -u developer -p developer
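
To verify the sample template landed where the broker will look for it:

$ oc get templates -n openshift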

Now you can attempt to run the scripts (from the pkg/template/servicebroker/test-scripts directory; they are not written to run from other dirs):

# do not run these as system:admin, use a non-admin user
$ oc login -u developer -p developer
# then make requesterUsername available to the following scripts
$ export requesterUsername=<your-non-admin-user>
$ cd pkg/template/servicebroker/test-scripts
$ ./catalog.sh
$ ./provision.sh

Now, see if you can get template instances. Running provision provisions/instantiates (you choose the word) one of those services into a service instance with the template service broker. The service instance is represented internally by two objects:

  1. the templateinstance
  2. what is currently called a templatebrokeruuidmap

The existence of these two objects confirms that an instantiation has occurred; when you run deprovision, they are removed.

Take a look at these objects. NOTE: you MUST use the built version of oc; a different version of oc will not recognize the templateinstances resource and will complain!

# NOTE: as stated above, if you get:
#   the server doesn't have a resource type "templateinstances"
# you are not using the oc compiled from this branch. You'll have to build it!
oc get templateinstances
# look for status.conditions.status = "True" and status.conditions.type = Ready;
# then you will know the provision worked. if the condition is instantiatefailed, there should be an error message
oc get templateinstances -o yaml

oc get brokertemplateinstances -o yaml
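
If you'd rather not scan the full YAML, a one-liner can pull out just the conditions (a sketch; the jsonpath expression assumes the status layout described above):

oc get templateinstances -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[*].type}{"="}{.status.conditions[*].status}{"\n"}{end}'

After running the deprovision script from test-scripts, both `oc get` commands above should come back empty.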

TIP: It's not ideal, but if you ran the above make build WHAT=cmd/oc, you can create an ocDev alias in your .bash_profile pointing at the built version, with something like alias ocDev='/Users/bpeterse/go/src/github.com/openshift/origin/_output/local/bin/darwin/amd64/oc'. That way you can keep your oc from Homebrew as your default for commands like oc cluster up, but still have this handy for oc get templateinstances. This should go away.

Setting up the catalog

Once we have the Broker, we can set up the catalog & get them talking to each other.

Deploy the catalog with the template in this gist; you can save it to a file and use oc create -f, or paste it into the web console.

oc create -f path/to/the/service-catalog.yaml
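
Then watch the pods come up (a simple check; run it in the project where you created the catalog resources):

oc get pods -w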

Check the Monitoring page at dev-console/project/<project-name>/browse/events to see if things run as expected.

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

WIP NOTES (to clean up and migrate into instructions above)

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Deploying the catalog via template

This gist file is necessary.

Update the above reconcile commands

Simplify some of the instructions above by removing docker exec; we don't actually need to exec into the origin container.

after logging in as cluster admin:

  oc login -u system:admin

replace:

  docker exec origin oadm policy reconcile-cluster-roles

with:

  oc adm policy reconcile-cluster-roles

Metrics

Leave metrics off. Currently, building with Jim's branch will not run metrics, because we look for a metrics deployer image whose tag matches your running origin. Since we are building origin off an arbitrary tag based on the current state, there will be no match:

Example of the error:

  # origin is at tag 643678c
  # there is no metrics-deployer with this tag. 
  Back-off pulling image "openshift/origin-metrics-deployer:643678c"

Talking to the Broker with the Catalog

Create a broker resource with the URL of your running template broker (see the sketch below):

  • use this file
  • not sure if authSecret is needed
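
As a very rough sketch only (the kind and field names assume the early service-catalog v1alpha1 API, and the URL is a placeholder; defer to the file linked above):

  $ cat <<EOF | oc create -f -
  apiVersion: servicecatalog.k8s.io/v1alpha1
  kind: Broker
  metadata:
    name: template-broker
  spec:
    url: <url-of-your-running-template-broker>
  EOF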

Updates to cluster up

We can update cluster up to write the config to a dir on the host, as well as define data and PV dirs. This is ideal for us:

oc cluster up --version="$(git log -1 --pretty=%h )" --use-existing-config --host-config-dir=$HOME/openshift.local.config --host-data-dir=$HOME/openshift.local.etcd --host-pv-dir=$HOME/openshift.local.volumes

Additions from Sam to get the broker & service catalog talking

Hey, so I have it working. At first I tried to assign a service account to the controller-manager pod and give it the cluster role templateservicebroker-client. That didn't work. So I ended up doing this:

oc adm policy add-cluster-role-to-group templateservicebroker-client system:unauthenticated

which is horrible, but effective. The catalog is talking to the template broker. I haven't provisioned anything, but I see the service classes loaded in the log.
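
When you're done experimenting, you will presumably want to revoke that grant; the inverse command should do it:

oc adm policy remove-cluster-role-from-group templateservicebroker-client system:unauthenticated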

Jessica, Ben, FYI. This is the broker I added:

https://gist.github.com/spadgett/5706a3bf284a24775c4b4f35f91b1dcc