diff --git a/.editorconfig b/.editorconfig index 0679d88a..e65ee453 100644 --- a/.editorconfig +++ b/.editorconfig @@ -1,4 +1,4 @@ -# EditorConfig is awesome: http://EditorConfig.org +# EditorConfig is awesome: https://EditorConfig.org # top-most EditorConfig file root = true diff --git a/.gitignore b/.gitignore index 4532cf09..9ebebb84 100644 --- a/.gitignore +++ b/.gitignore @@ -8,7 +8,7 @@ *.war *.ear -# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml +# virtual machine crash logs, see https://www.java.com/en/download/help/error_hotspot.xml hs_err_pid* .gradle diff --git a/README.adoc b/README.adoc index fbb6ed5f..db4f5df0 100644 --- a/README.adoc +++ b/README.adoc @@ -210,11 +210,11 @@ microservices. That way the testing setup looks like this: image::{intro-root-docs}/stubbed_dependencies.png[title="We're testing microservices in isolation"] Such an approach to testing and deployment gives the following benefits -(thanks to the usage of http://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract]): +(thanks to the usage of https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract]): - No need to deploy dependant services - The stubs used for the tests ran on a deployed microservice are the same as those used during integration tests -- Those stubs have been tested against the application that produces them (check http://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract] for more information) +- Those stubs have been tested against the application that produces them (check https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract] for more information) - We don't have many slow tests running on a deployed application - thus the pipeline gets executed much faster - We don't have to queue deployments - we're testing in isolation thus pipelines don't interfere with each other - We 
don't have to spawn virtual machines each time for deployment purposes @@ -756,7 +756,7 @@ Below you can see what environment variables are required by the scripts. To the |PAAS_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |PAAS_PROD_SPACE | Name of the space for the prod env | pcfdev-space -|REPO_WITH_BINARIES | URL to repo with the deployed jars | http://192.168.99.100:8081/artifactory/libs-release-local +|REPO_WITH_BINARIES | URL to repo with the deployed jars | https://192.168.99.100:8081/artifactory/libs-release-local |M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |APP_MEMORY_LIMIT | How much memory should be used by the infra apps (Eureka, Stub Runner etc.) | 256m @@ -1273,7 +1273,7 @@ executing `tools/deploy-infra.sh`. Example for deploying to Artifactory at IP `1 ---- git clone https://github.com/spring-cloud/spring-cloud-pipelines cd spring-cloud-pipelines/ -ARTIFACTORY_URL="http://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh +ARTIFACTORY_URL="https://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh ---- [[setup-settings-xml]] @@ -1656,7 +1656,7 @@ You can also use the https://jenkins.io/doc/book/pipeline/syntax/[declarative pi https://jenkins.io/projects/blueocean/[Blue Ocean UI]. Here is a step by step guide to run a pipeline via this approach. -The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `http://192.168.99.100:8080/blue`. +The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `https://192.168.99.100:8080/blue`. 
{nbsp} {nbsp} @@ -1703,7 +1703,7 @@ check out this https://issues.jenkins-ci.org/browse/JENKINS-33846[issue] for mor WARNING: Currently there is no way to introduce manual steps in a performant way. Jenkins is blocking an executor when manual step is required. That means that you'll run out of executors pretty fast. You can check out this https://issues.jenkins-ci.org/browse/JENKINS-36235[issue] for -and this http://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] +and this https://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] for more information. [[optional-steps-cf]] @@ -1947,7 +1947,7 @@ You can also use the https://jenkins.io/doc/book/pipeline/syntax/[declarative pi https://jenkins.io/projects/blueocean/[Blue Ocean UI]. Here is a step by step guide to run a pipeline via this approach. -The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `http://192.168.99.100:8080/blue`. +The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `https://192.168.99.100:8080/blue`. {nbsp} {nbsp} @@ -1994,7 +1994,7 @@ check out this https://issues.jenkins-ci.org/browse/JENKINS-33846[issue] for mor WARNING: Currently there is no way to introduce manual steps in a performant way. Jenkins is blocking an executor when manual step is required. That means that you'll run out of executors pretty fast. You can check out this https://issues.jenkins-ci.org/browse/JENKINS-36235[issue] for -and this http://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] +and this https://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] for more information. 
[[optional-steps-k8s]] @@ -2875,7 +2875,7 @@ alertmanager: ## alertmanager data Persistent Volume access modes ## Must match those of existing PV or dynamic provisioner - ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ + ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ ## accessModes: - ReadWriteOnce @@ -2918,7 +2918,7 @@ alertmanager: replicaCount: 1 ## alertmanager resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -2960,7 +2960,7 @@ configmapReload: pullPolicy: IfNotPresent ## configmap-reload resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} @@ -2995,7 +2995,7 @@ kubeStateMetrics: replicaCount: 1 ## kube-state-metrics resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -3192,7 +3192,7 @@ server: ## Prometheus server data Persistent Volume access modes ## Must match those of existing PV or dynamic provisioner - ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ + ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ ## accessModes: - ReadWriteOnce @@ -3236,7 +3236,7 @@ server: replicaCount: 1 ## Prometheus server resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -3326,7 +3326,7 @@ pushgateway: replicaCount: 1 ## pushgateway resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -3646,7 +3646,7 @@ NOTES: ---- Perform the 
aforementioned steps and add the Grafana's datasource -as Prometheus with URL `http://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local` +as Prometheus with URL `https://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local` You can pick the dashboard via the Grafana ID (2471). This is the default dashboard for the Spring Cloud Pipelines demo apps. @@ -3678,7 +3678,7 @@ threshold. === Prerequisites -As prerequisites you need to have http://www.shellcheck.net/[shellcheck], +As prerequisites you need to have https://www.shellcheck.net/[shellcheck], https://github.com/sstephenson/bats[bats], https://stedolan.github.io/jq/[jq] and https://rubyinstaller.org/downloads/[ruby] installed. If you're on a Linux machine then `bats` and `shellcheck` will be installed for you. diff --git a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/CONCOURSE.adoc b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/CONCOURSE.adoc index 30a983d9..e719b2ea 100644 --- a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/CONCOURSE.adoc +++ b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/CONCOURSE.adoc @@ -171,7 +171,7 @@ Below you can see what environment variables are required by the scripts. To the |PAAS_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |PAAS_PROD_SPACE | Name of the space for the prod env | pcfdev-space -|REPO_WITH_BINARIES | URL to repo with the deployed jars | http://192.168.99.100:8081/artifactory/libs-release-local +|REPO_WITH_BINARIES | URL to repo with the deployed jars | https://192.168.99.100:8081/artifactory/libs-release-local |M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |PAAS_HOSTNAME_UUID | Additional suffix for the route. 
In a shared environment the default routes can be already taken | |APP_MEMORY_LIMIT | How much memory should be used by the infra apps (Eureka, Stub Runner etc.) | 256m diff --git a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/INTRO.adoc b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/INTRO.adoc index 28bb004c..718f912e 100644 --- a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/INTRO.adoc +++ b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/INTRO.adoc @@ -160,11 +160,11 @@ microservices. That way the testing setup looks like this: image::{intro-root-docs}/stubbed_dependencies.png[title="We're testing microservices in isolation"] Such an approach to testing and deployment gives the following benefits -(thanks to the usage of http://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract]): +(thanks to the usage of https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract]): - No need to deploy dependant services - The stubs used for the tests ran on a deployed microservice are the same as those used during integration tests -- Those stubs have been tested against the application that produces them (check http://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract] for more information) +- Those stubs have been tested against the application that produces them (check https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract] for more information) - We don't have many slow tests running on a deployed application - thus the pipeline gets executed much faster - We don't have to queue deployments - we're testing in isolation thus pipelines don't interfere with each other - We don't have to spawn virtual machines each time for deployment purposes diff --git 
a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc index f33500be..72426533 100644 --- a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc +++ b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc @@ -2,7 +2,7 @@ You can also use the https://jenkins.io/doc/book/pipeline/syntax/[declarative pi https://jenkins.io/projects/blueocean/[Blue Ocean UI]. Here is a step by step guide to run a pipeline via this approach. -The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `http://192.168.99.100:8080/blue`. +The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `https://192.168.99.100:8080/blue`. {nbsp} {nbsp} @@ -49,5 +49,5 @@ check out this https://issues.jenkins-ci.org/browse/JENKINS-33846[issue] for mor WARNING: Currently there is no way to introduce manual steps in a performant way. Jenkins is blocking an executor when manual step is required. That means that you'll run out of executors pretty fast. You can check out this https://issues.jenkins-ci.org/browse/JENKINS-36235[issue] for -and this http://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] +and this https://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] for more information. 
diff --git a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc index 8d49bb8b..b52b8b63 100644 --- a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc +++ b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc @@ -60,7 +60,7 @@ executing `tools/deploy-infra.sh`. Example for deploying to Artifactory at IP `1 ---- git clone https://github.com/spring-cloud/spring-cloud-pipelines cd spring-cloud-pipelines/ -ARTIFACTORY_URL="http://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh +ARTIFACTORY_URL="https://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh ---- [[setup-settings-xml]] diff --git a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/TECH.adoc b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/TECH.adoc index 2bbabd78..abf0cce8 100644 --- a/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/TECH.adoc +++ b/buildSrc/src/test/resources/project_customizer/docs-sources/src/main/asciidoc/TECH.adoc @@ -2,7 +2,7 @@ === Prerequisites -As prerequisites you need to have http://www.shellcheck.net/[shellcheck], +As prerequisites you need to have https://www.shellcheck.net/[shellcheck], https://github.com/sstephenson/bats[bats], https://stedolan.github.io/jq/[jq] and https://rubyinstaller.org/downloads/[ruby] installed. If you're on a Linux machine then `bats` and `shellcheck` will be installed for you. 
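The prerequisites section above lists the tools (`shellcheck`, `bats`, `jq`, `ruby`) a build worker needs. A minimal sketch of a pre-flight check for them — the tool list comes from the docs above, while the `check_prerequisites` helper itself is an illustrative assumption, not part of the repository's scripts:

```shell
#!/usr/bin/env bash
# Check that a list of required tools is available on the PATH.
# The tool names (shellcheck, bats, jq, ruby) come from the TECH.adoc
# prerequisites section; this helper is only an illustrative sketch.

check_prerequisites() {
  local missing=()
  local tool
  for tool in "$@"; do
    # command -v is the portable way to test whether a tool is on the PATH
    command -v "${tool}" >/dev/null 2>&1 || missing+=("${tool}")
  done
  if [ "${#missing[@]}" -gt 0 ]; then
    echo "Missing tools: ${missing[*]}" >&2
    return 1
  fi
  echo "All prerequisites present"
}

# Report, but do not abort, when something is missing
check_prerequisites shellcheck bats jq ruby || true
```

On Linux, the docs note that `bats` and `shellcheck` are installed for you, so a check like this would typically only flag `jq` and `ruby`.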
diff --git a/common/src/test/bats/pipeline-cf.bats b/common/src/test/bats/pipeline-cf.bats index 1f4d7582..3556cea0 100644 --- a/common/src/test/bats/pipeline-cf.bats +++ b/common/src/test/bats/pipeline-cf.bats @@ -222,7 +222,7 @@ export -f mockGradlew assert_output --partial "cf set-env eureka-github-webhook APPLICATION_DOMAIN eureka-github-webhook-sc-pipelines.demo.io" assert_output --partial "cf set-env eureka-github-webhook JAVA_OPTS -Djava.security.egd=file:///dev/urandom" assert_output --partial "cf restart eureka-github-webhook" - assert_output --partial 'cf create-user-provided-service eureka-github-webhook -p {"uri":"http://eureka-github-webhook-sc-pipelines.demo.io"}' + assert_output --partial 'cf create-user-provided-service eureka-github-webhook -p {"uri":"https://eureka-github-webhook-sc-pipelines.demo.io"}' # Stub Runner assert_output --partial "cf delete -f stubrunner-github-webhook" assert_output --partial "cf delete-service -f stubrunner-github-webhook" @@ -309,7 +309,7 @@ export -f mockGradlew assert_output --partial "cf set-env eureka-github-webhook APPLICATION_DOMAIN eureka-github-webhook-sc-pipelines.demo.io" assert_output --partial "cf set-env eureka-github-webhook JAVA_OPTS -Djava.security.egd=file:///dev/urandom" assert_output --partial "cf restart eureka-github-webhook" - assert_output --partial 'cf create-user-provided-service eureka-github-webhook -p {"uri":"http://eureka-github-webhook-sc-pipelines.demo.io"}' + assert_output --partial 'cf create-user-provided-service eureka-github-webhook -p {"uri":"https://eureka-github-webhook-sc-pipelines.demo.io"}' # Stub Runner assert_output --partial "cf delete -f stubrunner-github-webhook" assert_output --partial "cf delete-service -f stubrunner-github-webhook" @@ -564,7 +564,7 @@ export -f mockGradlew assert_output --partial "cf set-env github-eureka APPLICATION_DOMAIN github-eureka-sc-pipelines.demo.io" assert_output --partial "cf set-env github-eureka JAVA_OPTS 
-Djava.security.egd=file:///dev/urandom" assert_output --partial "cf restart github-eureka" - assert_output --partial 'cf create-user-provided-service github-eureka -p {"uri":"http://github-eureka-sc-pipelines.demo.io"}' + assert_output --partial 'cf create-user-provided-service github-eureka -p {"uri":"https://github-eureka-sc-pipelines.demo.io"}' # App refute_output --partial "cf delete -f my-project" assert_output --partial "cf push my-project" @@ -640,7 +640,7 @@ export -f mockGradlew assert_output --partial "cf set-env github-eureka APPLICATION_DOMAIN github-eureka-sc-pipelines.demo.io" assert_output --partial "cf set-env github-eureka JAVA_OPTS -Djava.security.egd=file:///dev/urandom" assert_output --partial "cf restart github-eureka" - assert_output --partial 'cf create-user-provided-service github-eureka -p {"uri":"http://github-eureka-sc-pipelines.demo.io"}' + assert_output --partial 'cf create-user-provided-service github-eureka -p {"uri":"https://github-eureka-sc-pipelines.demo.io"}' # App refute_output --partial "cf delete -f ${projectName}" assert_output --partial "cf push ${projectName}" diff --git a/concourse/.gitignore b/concourse/.gitignore index ee6f0ebd..a60fcc83 100644 --- a/concourse/.gitignore +++ b/concourse/.gitignore @@ -8,7 +8,7 @@ *.war *.ear -# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml +# virtual machine crash logs, see https://www.java.com/en/download/help/error_hotspot.xml hs_err_pid* credentials.yml diff --git a/concourse/README.adoc b/concourse/README.adoc index 01ce90fb..346e8057 100644 --- a/concourse/README.adoc +++ b/concourse/README.adoc @@ -182,7 +182,7 @@ Below you can see what environment variables are required by the scripts. 
To the |PAAS_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |PAAS_PROD_SPACE | Name of the space for the prod env | pcfdev-space -|REPO_WITH_BINARIES | URL to repo with the deployed jars | http://192.168.99.100:8081/artifactory/libs-release-local +|REPO_WITH_BINARIES | URL to repo with the deployed jars | https://192.168.99.100:8081/artifactory/libs-release-local |M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |APP_MEMORY_LIMIT | How much memory should be used by the infra apps (Eureka, Stub Runner etc.) | 256m diff --git a/concourse/credentials-sample-cf.yml b/concourse/credentials-sample-cf.yml index 9a38420b..deda3a95 100644 --- a/concourse/credentials-sample-cf.yml +++ b/concourse/credentials-sample-cf.yml @@ -68,4 +68,4 @@ m2-settings-repo-id: artifactory-local m2-settings-repo-username: admin m2-settings-repo-password: password -repo-with-binaries: http://192.168.99.100:8081/artifactory/libs-release-local +repo-with-binaries: https://192.168.99.100:8081/artifactory/libs-release-local diff --git a/docs-sources/src/main/asciidoc/CF_CONCOURSE.adoc b/docs-sources/src/main/asciidoc/CF_CONCOURSE.adoc index 77b425c9..e4242f4c 100644 --- a/docs-sources/src/main/asciidoc/CF_CONCOURSE.adoc +++ b/docs-sources/src/main/asciidoc/CF_CONCOURSE.adoc @@ -176,7 +176,7 @@ Below you can see what environment variables are required by the scripts. 
To the |PAAS_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |PAAS_PROD_SPACE | Name of the space for the prod env | pcfdev-space -|REPO_WITH_BINARIES | URL to repo with the deployed jars | http://192.168.99.100:8081/artifactory/libs-release-local +|REPO_WITH_BINARIES | URL to repo with the deployed jars | https://192.168.99.100:8081/artifactory/libs-release-local |M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |APP_MEMORY_LIMIT | How much memory should be used by the infra apps (Eureka, Stub Runner etc.) | 256m diff --git a/docs-sources/src/main/asciidoc/INTRO.adoc b/docs-sources/src/main/asciidoc/INTRO.adoc index a33aea8d..08efbfb0 100644 --- a/docs-sources/src/main/asciidoc/INTRO.adoc +++ b/docs-sources/src/main/asciidoc/INTRO.adoc @@ -182,11 +182,11 @@ microservices. 
That way the testing setup looks like this: image::{intro-root-docs}/stubbed_dependencies.png[title="We're testing microservices in isolation"] Such an approach to testing and deployment gives the following benefits -(thanks to the usage of http://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract]): +(thanks to the usage of https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract]): - No need to deploy dependant services - The stubs used for the tests ran on a deployed microservice are the same as those used during integration tests -- Those stubs have been tested against the application that produces them (check http://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract] for more information) +- Those stubs have been tested against the application that produces them (check https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html[Spring Cloud Contract] for more information) - We don't have many slow tests running on a deployed application - thus the pipeline gets executed much faster - We don't have to queue deployments - we're testing in isolation thus pipelines don't interfere with each other - We don't have to spawn virtual machines each time for deployment purposes diff --git a/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc b/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc index f33500be..72426533 100644 --- a/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc +++ b/docs-sources/src/main/asciidoc/JENKINS_BLUE_OCEAN.adoc @@ -2,7 +2,7 @@ You can also use the https://jenkins.io/doc/book/pipeline/syntax/[declarative pi https://jenkins.io/projects/blueocean/[Blue Ocean UI]. Here is a step by step guide to run a pipeline via this approach. -The Blue Ocean UI is available under the `blue/` URL. E.g. for Docker Machine based setup `http://192.168.99.100:8080/blue`. +The Blue Ocean UI is available under the `blue/` URL. E.g. 
for Docker Machine based setup `https://192.168.99.100:8080/blue`. {nbsp} {nbsp} @@ -49,5 +49,5 @@ check out this https://issues.jenkins-ci.org/browse/JENKINS-33846[issue] for mor WARNING: Currently there is no way to introduce manual steps in a performant way. Jenkins is blocking an executor when manual step is required. That means that you'll run out of executors pretty fast. You can check out this https://issues.jenkins-ci.org/browse/JENKINS-36235[issue] for -and this http://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] +and this https://stackoverflow.com/questions/42561241/how-to-wait-for-user-input-in-a-declarative-pipeline-without-blocking-a-heavywei[StackOverflow question] for more information. diff --git a/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc b/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc index ba408198..627d4b93 100644 --- a/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc +++ b/docs-sources/src/main/asciidoc/JENKINS_COMMON.adoc @@ -59,7 +59,7 @@ executing `tools/deploy-infra.sh`. 
Example for deploying to Artifactory at IP `1 ---- git clone https://github.com/spring-cloud/spring-cloud-pipelines cd spring-cloud-pipelines/ -ARTIFACTORY_URL="http://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh +ARTIFACTORY_URL="https://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh ---- [[setup-settings-xml]] diff --git a/docs-sources/src/main/asciidoc/K8S_DEMO.adoc b/docs-sources/src/main/asciidoc/K8S_DEMO.adoc index 5666c00d..f03e6f8b 100644 --- a/docs-sources/src/main/asciidoc/K8S_DEMO.adoc +++ b/docs-sources/src/main/asciidoc/K8S_DEMO.adoc @@ -113,7 +113,7 @@ alertmanager: ## alertmanager data Persistent Volume access modes ## Must match those of existing PV or dynamic provisioner - ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ + ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ ## accessModes: - ReadWriteOnce @@ -156,7 +156,7 @@ alertmanager: replicaCount: 1 ## alertmanager resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -198,7 +198,7 @@ configmapReload: pullPolicy: IfNotPresent ## configmap-reload resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} @@ -233,7 +233,7 @@ kubeStateMetrics: replicaCount: 1 ## kube-state-metrics resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -430,7 +430,7 @@ server: ## Prometheus server data Persistent Volume access modes ## Must match those of existing PV or dynamic provisioner - ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ + ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ ## accessModes: 
- ReadWriteOnce @@ -474,7 +474,7 @@ server: replicaCount: 1 ## Prometheus server resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -564,7 +564,7 @@ pushgateway: replicaCount: 1 ## pushgateway resource requests and limits - ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## resources: {} # limits: @@ -884,7 +884,7 @@ NOTES: ---- Perform the aforementioned steps and add the Grafana's datasource -as Prometheus with URL `http://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local` +as Prometheus with URL `https://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local` You can pick the dashboard via the Grafana ID (2471). This is the default dashboard for the Spring Cloud Pipelines demo apps. diff --git a/docs-sources/src/main/asciidoc/TECH.adoc b/docs-sources/src/main/asciidoc/TECH.adoc index fbe79d54..8c245e11 100644 --- a/docs-sources/src/main/asciidoc/TECH.adoc +++ b/docs-sources/src/main/asciidoc/TECH.adoc @@ -2,7 +2,7 @@ === Prerequisites -As prerequisites you need to have http://www.shellcheck.net/[shellcheck], +As prerequisites you need to have https://www.shellcheck.net/[shellcheck], https://github.com/sstephenson/bats[bats], https://stedolan.github.io/jq/[jq] and https://rubyinstaller.org/downloads/[ruby] installed. If you're on a Linux machine then `bats` and `shellcheck` will be installed for you. 
diff --git a/docs-sources/src/main/jekyll/_config.yml b/docs-sources/src/main/jekyll/_config.yml index 7e9f67e6..9532dc4c 100644 --- a/docs-sources/src/main/jekyll/_config.yml +++ b/docs-sources/src/main/jekyll/_config.yml @@ -26,10 +26,10 @@ name: Spring Cloud Pipelines project: spring-cloud-pipelines # Project github URL -github_repo_url: http://github.com/spring-cloud/spring-cloud-pipelines +github_repo_url: https://github.com/spring-cloud/spring-cloud-pipelines # Project forum URL -forum: http://stackoverflow.com/questions/tagged/spring-cloud +forum: https://stackoverflow.com/questions/tagged/spring-cloud # If you want to include a custom pom.xml or gradle template set these value to true and add _include files custom_pom_template: true diff --git a/docs-sources/src/main/jekyll/_includes/download_widget.md b/docs-sources/src/main/jekyll/_includes/download_widget.md index 8bd1ec8d..03d7bbb5 100644 --- a/docs-sources/src/main/jekyll/_includes/download_widget.md +++ b/docs-sources/src/main/jekyll/_includes/download_widget.md @@ -10,8 +10,8 @@ Download
The recommended way to get started using {{ site.project }}

in
your project is with a dependency management system – the snippet below can
be copied and pasted into your build. Need help? See our getting started guides
- on building with Maven and
- Gradle.
+ on building with Maven and
+ Gradle.
As prerequisites you need to have shellcheck,
bats, jq
and ruby installed. If you’re on a Linux
machine then bats
and shellcheck
will be installed for you.
To install the required software on Linux just type the following commands
$ sudo apt-get install -y ruby jq
If you’re on a Mac then just execute these commands to install the missing software
$ brew install jq diff --git a/docs/multi/multi__introduction.html b/docs/multi/multi__introduction.html index 7c48a0d0..0888a37a 100644 --- a/docs/multi/multi__introduction.html +++ b/docs/multi/multi__introduction.html @@ -76,7 +76,7 @@ anytime before deployment to production.
One of the possibilities of tackling these problems is to… not do end-to-end tests.
If we stub out all the dependencies of our application, then most of the problems presented above disappear. There is no need to start and set up the infrastructure required by the dependent microservices. That way the testing setup looks like this:
Such an approach to testing and deployment gives the following benefits (thanks to the usage of Spring Cloud Contract):
It brings however the following challenges:
Like every solution it has its benefits and drawbacks. The opinionated pipeline allows you to configure whether you want to follow this flow or not.
The general view behind this deployment pipeline is to:
Obviously the pipeline could have been split into more steps, but it seems that all of the aforementioned actions compose nicely in our opinionated proposal.
Spring Cloud Pipelines uses Bash scripts extensively. Below you can find the list of software that needs to be installed on a CI server worker for the build to pass.
Tip: In the demo setup all of these libraries are already installed.
apt-get -y install \ diff --git a/docs/multi/multi__jenkins_pipeline_common.html b/docs/multi/multi__jenkins_pipeline_common.html index 0b02713b..614815b8 100644 --- a/docs/multi/multi__jenkins_pipeline_common.html +++ b/docs/multi/multi__jenkins_pipeline_common.html @@ -26,7 +26,7 @@ when you want to do some custom changes.It’s enough to set the
ARTIFACTORY_URL
environmental variable before executing tools/deploy-infra.sh
. Example for deploying to Artifactory at IP 192.168.99.100
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
ARTIFACTORY_URL="https://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh
Tip: If you want to use the default connection to the Docker version of Artifactory you can skip this step.
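Under the hood this override is ordinary shell parameter expansion. A minimal sketch of the defaulting behavior — the fallback URL below is an assumption for illustration, not necessarily the script's real default:

```shell
#!/usr/bin/env bash
# Sketch only: resolve ARTIFACTORY_URL, falling back to a hypothetical local default.
artifactory_url() {
  echo "${ARTIFACTORY_URL:-http://localhost:8081/artifactory/libs-release-local}"
}
```

Setting the variable on the command line before `./tools/deploy-infra.sh`, as shown above, overrides the fallback.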
So that
./mvnw deploy
works with Artifactory from Docker, we’re already copying the missing settings.xml
file for you. It looks more or less like this:
<?xml version="1.0" encoding="UTF-8"?> <settings>
kubectl --namespace default port-forward $POD_NAME 3000
3. Login with the password from step 1 and the username: admin
Perform the aforementioned steps and add the Grafana datasource as Prometheus with URL https://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local
You can pick the dashboard via the Grafana ID (2471). This is the default dashboard for the Spring Cloud Pipelines demo apps.
If you have both apps (
github-webhook
and github-analytics
) running on production, you can now trigger the messages. Download the JSON with a sample request from the github-webhook repository.
You can click one of the icons (depending on your OS) to download fly, which is the Concourse CLI. Once you’ve downloaded it (and maybe added it to your PATH) you can run:
fly --version
If fly is properly installed, it should print out the version.
The repo comes with
credentials-sample-cf.yml
which is set up with sample data (most credentials are set to be applicable for PCF Dev). Copy this file to a new file credentials.yml
(the file is added to .gitignore, so don’t worry about pushing it with your passwords) and edit it as you wish. For our demo, just set up:
app-url - URL pointing to your forked github-webhook repo
github-private-key - your private key to clone / tag GitHub repos
repo-with-binaries - the IP is set to the defaults for Docker Machine. You should update it to point to your setup.
If you don’t have a Docker Machine, just execute the ./whats_my_ip.sh script to get an external IP that you can pass to your repo-with-binaries
instead of the default Docker Machine IP.
Below you can see what environment variables are required by the scripts. To the right hand side you can see the default values for PCF Dev that we set in the credentials-sample-cf.yml.
Property Name Property Description Default value BUILD_OPTIONS
Additional options you would like to pass to the Maven / Gradle build
PAAS_TEST_API_URL
The URL to the CF Api for TEST env
api.local.pcfdev.io
PAAS_STAGE_API_URL
The URL to the CF Api for STAGE env
api.local.pcfdev.io
PAAS_PROD_API_URL
The URL to the CF Api for PROD env
api.local.pcfdev.io
PAAS_TEST_ORG
Name of the org for the test env
pcfdev-org
PAAS_TEST_SPACE
Name of the space for the test env
pcfdev-space
PAAS_STAGE_ORG
Name of the org for the stage env
pcfdev-org
PAAS_STAGE_SPACE
Name of the space for the stage env
pcfdev-space
PAAS_PROD_ORG
Name of the org for the prod env
pcfdev-org
PAAS_PROD_SPACE
Name of the space for the prod env
pcfdev-space
REPO_WITH_BINARIES
URL to repo with the deployed jars
M2_SETTINGS_REPO_ID
The id of server from Maven settings.xml
artifactory-local
PAAS_HOSTNAME_UUID
Additional suffix for the route. In a shared environment the default routes can be already taken
APP_MEMORY_LIMIT
How much memory should be used by the infra apps (Eureka, Stub Runner etc.)
256m
JAVA_BUILDPACK_URL
The URL to the Java buildpack to be used by CF
Log in (e.g. for Concourse running at 192.168.99.100 - if you don’t provide any value then localhost is assumed). If you execute this script (it assumes that either fly is on your PATH or it’s in the same folder as the script):
./login.sh 192.168.99.100
Next run the command to create the pipeline.
./set_pipeline.sh
Then you’ll create a github-webhook pipeline under the docker alias, using the provided credentials.yml
file. You can override these values in exactly that order (e.g. ./set-pipeline.sh some-project another-target some-other-credentials.yml
)
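The host defaulting described above ("localhost is assumed") is one line of parameter expansion. A sketch of how login.sh might implement it — the fly target name `docker` is taken from the surrounding text, and the actual login call is shown only as a comment:

```shell
#!/usr/bin/env bash
# Sketch: pick the Concourse host, defaulting to localhost when no argument is given.
concourse_host() {
  echo "${1:-localhost}"
}
# Illustrative login call (not executed here):
# fly -t docker login -c "http://$(concourse_host "$1"):8080"
```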
You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach.
The Blue Ocean UI is available under the blue/ URL. E.g. for a Docker Machine based setup: https://192.168.99.100:8080/blue.
Warning: Currently there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information.
All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes.
The env vars that are used in all of the jobs are as follows:
Property Name Property Description Default value BINARY_EXTENSION
Extension of the binary uploaded to Artifactory / Nexus. Example: change this to
war
for WAR artifacts
PAAS_TEST_API_URL
The URL to the CF Api for TEST env
api.local.pcfdev.io
PAAS_STAGE_API_URL
The URL to the CF Api for STAGE env
api.local.pcfdev.io
PAAS_PROD_API_URL
The URL to the CF Api for PROD env
api.local.pcfdev.io
PAAS_TEST_ORG
Name of the org for the test env
pcfdev-org
PAAS_TEST_SPACE
Name of the space for the test env
pcfdev-space
PAAS_STAGE_ORG
Name of the org for the stage env
pcfdev-org
PAAS_STAGE_SPACE
Name of the space for the stage env
pcfdev-space
PAAS_PROD_ORG
Name of the org for the prod env
pcfdev-org
PAAS_PROD_SPACE
Name of the space for the prod env
pcfdev-space
REPO_WITH_BINARIES
URL to repo with the deployed jars
M2_SETTINGS_REPO_ID
The id of server from Maven settings.xml
artifactory-local
JDK_VERSION
The name of the JDK installation
jdk8
PIPELINE_VERSION
What should be the version of the pipeline (ultimately also version of the jar)
1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION
GIT_EMAIL
The email used by Git to tag repo
GIT_NAME
The name used by Git to tag repo
Pivo Tal
PAAS_HOSTNAME_UUID
Additional suffix for the route. In a shared environment the default routes can be already taken
AUTO_DEPLOY_TO_STAGE
Should deployment to stage be automatic
false
AUTO_DEPLOY_TO_PROD
Should deployment to prod be automatic
false
API_COMPATIBILITY_STEP_REQUIRED
Should api compatibility step be required
true
DB_ROLLBACK_STEP_REQUIRED
Should DB rollback step be present
true
DEPLOY_TO_STAGE_STEP_REQUIRED
Should deploy to stage step be present
true
APP_MEMORY_LIMIT
How much memory should be used by the infra apps (Eureka, Stub Runner etc.)
256m
JAVA_BUILDPACK_URL
The URL to the Java buildpack to be used by CF
BUILD_OPTIONS
Additional options you would like to pass to the Maven / Gradle build
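The PIPELINE_VERSION default above embeds a Groovy-formatted timestamp. The same version string can be produced in plain shell — a sketch for illustration, not the actual Jenkins token-macro mechanism:

```shell
#!/usr/bin/env bash
# Builds a version string shaped like the PIPELINE_VERSION default shown above,
# i.e. 1.0.0.M1-<yyMMdd_HHmmss>-VERSION.
pipeline_version() {
  echo "1.0.0.M1-$(date '+%y%m%d_%H%M%S')-VERSION"
}
```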
In your scripts we reference the credentials via IDs. These are the defaults for credentials
Property Name Property Description Default value PAAS_PROD_CREDENTIAL_ID
Credential ID for CF Prod env access
cf-prod
GIT_CREDENTIAL_ID
Credential ID used to tag a git repo
git
GIT_SSH_CREDENTIAL_ID
SSH credential ID used to tag a git repo
gitSsh
GIT_USE_SSH_KEY
if set to true, the SSH credential ID will be used
false
REPO_WITH_BINARIES_CREDENTIAL_ID
Credential ID used for the repo with jars
repo-with-binaries
PAAS_TEST_CREDENTIAL_ID
Credential ID for CF Test env access
cf-test
PAAS_STAGE_CREDENTIAL_ID
Credential ID for CF Stage env access
cf-stage
If you already have a credential in your system to, for example, tag a repo, you can use it by passing the value of the property
GIT_CREDENTIAL_ID
Tip Check out the
cf-helper
script for all the configuration options!
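The GIT_USE_SSH_KEY switch above reduces to a simple selection between the two credential IDs. A sketch using the default IDs from the table — how the scripts actually read the flag is an assumption:

```shell
#!/usr/bin/env bash
# Sketch: choose the git credential ID according to GIT_USE_SSH_KEY, as described above.
git_credential_id() {
  if [ "${GIT_USE_SSH_KEY:-false}" = "true" ]; then
    echo "${GIT_SSH_CREDENTIAL_ID:-gitSsh}"
  else
    echo "${GIT_CREDENTIAL_ID:-git}"
  fi
}
```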
You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach.
The Blue Ocean UI is available under the blue/ URL. E.g. for a Docker Machine based setup: https://192.168.99.100:8080/blue.
![]() | Warning |
---|---|
Currently there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information. |
![]() | Important |
---|---|
All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes. |
The env vars that are used in all of the jobs are as follows:
Property Name | Property Description | Default value |
---|---|---|
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build | |
DOCKER_REGISTRY_ORGANIZATION | Name of the docker organization to which Docker images should be deployed | scpipelines |
DOCKER_REGISTRY_CREDENTIAL_ID | Credential ID used to push Docker images | docker-registry |
DOCKER_SERVER_ID | Server ID in | docker-repo |
DOCKER_EMAIL | Email used to connect to the Docker registry and Maven builds | |
DOCKER_REGISTRY_ORGANIZATION | URL to Kubernetes cluster for test env | scpipelines |
DOCKER_REGISTRY_URL | URL to the docker registry | |
PAAS_TEST_API_URL | URL of the API of the Kubernetes cluster for test environment | 192.168.99.100:8443 |
PAAS_STAGE_API_URL | URL of the API of the Kubernetes cluster for stage environment | 192.168.99.100:8443 |
PAAS_PROD_API_URL | URL of the API of the Kubernetes cluster for prod environment | 192.168.99.100:8443 |
PAAS_TEST_CA_PATH | Path to the certificate authority for test environment | /usr/share/jenkins/cert/ca.crt |
PAAS_STAGE_CA_PATH | Path to the certificate authority for stage environment | /usr/share/jenkins/cert/ca.crt |
PAAS_PROD_CA_PATH | Path to the certificate authority for prod environment | /usr/share/jenkins/cert/ca.crt |
PAAS_TEST_CLIENT_CERT_PATH | Path to the client certificate for test environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_STAGE_CLIENT_CERT_PATH | Path to the client certificate for stage environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_PROD_CLIENT_CERT_PATH | Path to the client certificate for prod environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_TEST_CLIENT_KEY_PATH | Path to the client key for test environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_STAGE_CLIENT_KEY_PATH | Path to the client key for stage environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_PROD_CLIENT_KEY_PATH | Path to the client key for prod environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_TEST_CLIENT_TOKEN_PATH | Path to the file containing the token for test env | |
PAAS_STAGE_CLIENT_TOKEN_PATH | Path to the file containing the token for stage env | |
PAAS_PROD_CLIENT_TOKEN_PATH | Path to the file containing the token for prod env | |
PAAS_TEST_CLIENT_TOKEN_ID | ID of the credential containing access token for test environment | |
PAAS_STAGE_CLIENT_TOKEN_ID | ID of the credential containing access token for stage environment | |
PAAS_PROD_CLIENT_TOKEN_ID | ID of the credential containing access token for prod environment | |
PAAS_TEST_CLUSTER_NAME | Name of the cluster for test environment | minikube |
PAAS_STAGE_CLUSTER_NAME | Name of the cluster for stage environment | minikube |
PAAS_PROD_CLUSTER_NAME | Name of the cluster for prod environment | minikube |
PAAS_TEST_CLUSTER_USERNAME | Name of the user for test environment | minikube |
PAAS_STAGE_CLUSTER_USERNAME | Name of the user for stage environment | minikube |
PAAS_PROD_CLUSTER_USERNAME | Name of the user for prod environment | minikube |
PAAS_TEST_SYSTEM_NAME | Name of the system for test environment | minikube |
PAAS_STAGE_SYSTEM_NAME | Name of the system for stage environment | minikube |
PAAS_PROD_SYSTEM_NAME | Name of the system for prod environment | minikube |
PAAS_TEST_NAMESPACE | Namespace for test environment | sc-pipelines-test |
PAAS_STAGE_NAMESPACE | Namespace for stage environment | sc-pipelines-stage |
PAAS_PROD_NAMESPACE | Namespace for prod environment | sc-pipelines-prod |
KUBERNETES_MINIKUBE | Will you connect to Minikube? | true |
REPO_WITH_BINARIES | URL to repo with the deployed jars | |
REPO_WITH_BINARIES_CREDENTIAL_ID | Credential ID used for the repo with jars | repo-with-binaries |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
JDK_VERSION | The name of the JDK installation | jdk8 |
PIPELINE_VERSION | What should be the version of the pipeline (ultimately also version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
GIT_EMAIL | The email used by Git to tag repo | |
GIT_NAME | The name used by Git to tag repo | Pivo Tal |
AUTO_DEPLOY_TO_STAGE | Should deployment to stage be automatic | false |
AUTO_DEPLOY_TO_PROD | Should deployment to prod be automatic | false |
API_COMPATIBILITY_STEP_REQUIRED | Should api compatibility step be required | true |
DB_ROLLBACK_STEP_REQUIRED | Should DB rollback step be present | true |
DEPLOY_TO_STAGE_STEP_REQUIRED | Should deploy to stage step be present | true |
![]() | Important |
---|---|
Skip this step if you’re not using GCE |
In order to use GCE we need to have gcloud
running. If you already have the
CLI installed, skip this step. If not, just execute to have the CLI
One of the possibilities of tackling these problems is to… not do end to end tests.
If we stub out all the dependencies of our application, most of the problems presented above disappear. There is no need to start and set up the infrastructure required by the dependent microservices. That way the testing setup looks like this:
Such an approach to testing and deployment gives the following benefits (thanks to the usage of Spring Cloud Contract):
- No need to deploy dependant services
- The stubs used for the tests ran on a deployed microservice are the same as those used during integration tests
- Those stubs have been tested against the application that produces them (check Spring Cloud Contract for more information)
- We don't have many slow tests running on a deployed application - thus the pipeline gets executed much faster
- We don't have to queue deployments - we're testing in isolation thus pipelines don't interfere with each other
- We don't have to spawn virtual machines each time for deployment purposes
It brings, however, the following challenges:
Like every solution, it has its benefits and drawbacks. The opinionated pipeline allows you to configure whether you want to follow this flow or not.
The general view behind this deployment pipeline is to:
Obviously the pipeline could have been split into more steps, but it seems that all of the aforementioned actions fit nicely into our opinionated proposal.
Spring Cloud Pipelines uses Bash scripts extensively. Below you can find the list of software that needs to be installed on a CI server worker for the build to pass.
![]() | Tip |
---|---|
In the demo setup all of these libraries are already installed. |
apt-get -y install \
You can click one of the icons (depending on your OS) to download fly, which is the Concourse CLI. Once you’ve downloaded it (and maybe added it to your PATH) you can run:
fly --version
If fly is properly installed, it should print out the version.
The repo comes with credentials-sample-cf.yml
which is set up with sample data (most credentials are set to be applicable for PCF Dev). Copy this file to a new file credentials.yml
(the file is added to .gitignore, so don’t worry about pushing it with your passwords) and edit it as you wish. For our demo, just set up:
app-url - URL pointing to your forked github-webhook repo
github-private-key - your private key to clone / tag GitHub repos
repo-with-binaries - the IP is set to the defaults for Docker Machine. You should update it to point to your setup.
If you don’t have a Docker Machine, just execute the ./whats_my_ip.sh script to get an external IP that you can pass to your repo-with-binaries instead of the default Docker Machine IP.
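A rough shape of the resulting credentials.yml, using the keys discussed above — all values here are placeholders for illustration, not real defaults:

```yaml
# Illustrative only - key names from the docs, values are placeholders
app-url: https://github.com/your-user/github-webhook
github-private-key: |
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----
repo-with-binaries: http://192.168.99.100:8081/artifactory/libs-release-local
```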
Below you can see what environment variables are required by the scripts. To the right hand side you can see the default values for PCF Dev that we set in the credentials-sample-cf.yml
.
Property Name | Property Description | Default value |
---|---|---|
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build | |
PAAS_TEST_API_URL | The URL to the CF Api for TEST env | api.local.pcfdev.io |
PAAS_STAGE_API_URL | The URL to the CF Api for STAGE env | api.local.pcfdev.io |
PAAS_PROD_API_URL | The URL to the CF Api for PROD env | api.local.pcfdev.io |
PAAS_TEST_ORG | Name of the org for the test env | pcfdev-org |
PAAS_TEST_SPACE | Name of the space for the test env | pcfdev-space |
PAAS_STAGE_ORG | Name of the org for the stage env | pcfdev-org |
PAAS_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |
PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |
PAAS_PROD_SPACE | Name of the space for the prod env | pcfdev-space |
REPO_WITH_BINARIES | URL to repo with the deployed jars | |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |
APP_MEMORY_LIMIT | How much memory should be used by the infra apps (Eureka, Stub Runner etc.) | 256m |
JAVA_BUILDPACK_URL | The URL to the Java buildpack to be used by CF |
Log in (e.g. for Concourse running at 192.168.99.100
- if you don’t provide any value then localhost
is assumed). If you execute this script (it assumes that either fly
is on your PATH
or it’s in the same folder as the script is):
./login.sh 192.168.99.100
Next run the command to create the pipeline.
./set_pipeline.sh
Then you’ll create a github-webhook
pipeline under the docker
alias, using the provided credentials.yml
file.
You can override these values in exactly that order (e.g. ./set-pipeline.sh some-project another-target some-other-credentials.yml
)
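The override order mentioned above is plain positional-parameter defaulting. A sketch of how set-pipeline.sh might read its arguments — the default values are taken from the surrounding text, the parameter handling itself is an assumption:

```shell
#!/usr/bin/env bash
# Sketch: positional args with the defaults described above
# (project name, fly target alias, credentials file).
set_pipeline_args() {
  echo "${1:-github-webhook} ${2:-docker} ${3:-credentials.yml}"
}
```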
It’s enough to set the ARTIFACTORY_URL
environmental variable before
executing tools/deploy-infra.sh
. Example for deploying to Artifactory at IP 192.168.99.100
git clone https://github.com/spring-cloud/spring-cloud-pipelines
cd spring-cloud-pipelines/
ARTIFACTORY_URL="https://192.168.99.100:8081/artifactory/libs-release-local" ./tools/deploy-infra.sh
![]() | Tip |
---|---|
If you want to use the default connection to the Docker version of Artifactory you can skip this step. |
So that ./mvnw deploy works with Artifactory from Docker, we’re already copying the missing settings.xml file for you. It looks more or less like this:
<?xml version="1.0" encoding="UTF-8"?> <settings>
You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach. The Blue Ocean UI is available under the blue/ URL. E.g. for a Docker Machine based setup: https://192.168.99.100:8080/blue.
![]() | Warning |
---|---|
Currently there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information. |
All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes.
The env vars that are used in all of the jobs are as follows:
Property Name | Property Description | Default value |
---|---|---|
BINARY_EXTENSION | Extension of the binary uploaded to Artifactory / Nexus. Example: change this to | |
PAAS_TEST_API_URL | The URL to the CF Api for TEST env | api.local.pcfdev.io |
PAAS_STAGE_API_URL | The URL to the CF Api for STAGE env | api.local.pcfdev.io |
PAAS_PROD_API_URL | The URL to the CF Api for PROD env | api.local.pcfdev.io |
PAAS_TEST_ORG | Name of the org for the test env | pcfdev-org |
PAAS_TEST_SPACE | Name of the space for the test env | pcfdev-space |
PAAS_STAGE_ORG | Name of the org for the stage env | pcfdev-org |
PAAS_STAGE_SPACE | Name of the space for the stage env | pcfdev-space |
PAAS_PROD_ORG | Name of the org for the prod env | pcfdev-org |
PAAS_PROD_SPACE | Name of the space for the prod env | pcfdev-space |
REPO_WITH_BINARIES | URL to repo with the deployed jars | |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
JDK_VERSION | The name of the JDK installation | jdk8 |
PIPELINE_VERSION | What should be the version of the pipeline (ultimately also version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
GIT_EMAIL | The email used by Git to tag repo | |
GIT_NAME | The name used by Git to tag repo | Pivo Tal |
PAAS_HOSTNAME_UUID | Additional suffix for the route. In a shared environment the default routes can be already taken | |
AUTO_DEPLOY_TO_STAGE | Should deployment to stage be automatic | false |
AUTO_DEPLOY_TO_PROD | Should deployment to prod be automatic | false |
API_COMPATIBILITY_STEP_REQUIRED | Should api compatibility step be required | true |
DB_ROLLBACK_STEP_REQUIRED | Should DB rollback step be present | true |
DEPLOY_TO_STAGE_STEP_REQUIRED | Should deploy to stage step be present | true |
APP_MEMORY_LIMIT | How much memory should be used by the infra apps (Eureka, Stub Runner etc.) | 256m |
JAVA_BUILDPACK_URL | The URL to the Java buildpack to be used by CF | |
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build |
In your scripts we reference the credentials via IDs. These are the defaults for credentials
Property Name | Property Description | Default value |
---|---|---|
PAAS_PROD_CREDENTIAL_ID | Credential ID for CF Prod env access | cf-prod |
GIT_CREDENTIAL_ID | Credential ID used to tag a git repo | git |
GIT_SSH_CREDENTIAL_ID | SSH credential ID used to tag a git repo | gitSsh |
GIT_USE_SSH_KEY | if | false |
REPO_WITH_BINARIES_CREDENTIAL_ID | Credential ID used for the repo with jars | repo-with-binaries |
PAAS_TEST_CREDENTIAL_ID | Credential ID for CF Test env access | cf-test |
PAAS_STAGE_CREDENTIAL_ID | Credential ID for CF Stage env access | cf-stage |
If you already have a credential in your system to, for example, tag a repo,
you can use it by passing the value of the property GIT_CREDENTIAL_ID
![]() | Tip |
---|---|
Check out the |
![]() | Important |
---|---|
In this chapter we assume that you perform deployment of your application |
You can also use the declarative pipeline approach with the Blue Ocean UI. Here is a step by step guide to run a pipeline via this approach. The Blue Ocean UI is available under the blue/ URL. E.g. for a Docker Machine based setup: https://192.168.99.100:8080/blue.
![]() | Warning |
---|---|
Currently there is no way to introduce manual steps in a performant way. Jenkins blocks an executor when a manual step is required. That means that you’ll run out of executors pretty fast. You can check out this issue and this StackOverflow question for more information. |
![]() | Important |
---|---|
All the steps below are not necessary to run the demo. They are needed only when you want to do some custom changes. |
The env vars that are used in all of the jobs are as follows:
Property Name | Property Description | Default value |
---|---|---|
BUILD_OPTIONS | Additional options you would like to pass to the Maven / Gradle build | |
DOCKER_REGISTRY_ORGANIZATION | Name of the docker organization to which Docker images should be deployed | scpipelines |
DOCKER_REGISTRY_CREDENTIAL_ID | Credential ID used to push Docker images | docker-registry |
DOCKER_SERVER_ID | Server ID in | docker-repo |
DOCKER_EMAIL | Email used to connect to the Docker registry and Maven builds | |
DOCKER_REGISTRY_ORGANIZATION | URL to Kubernetes cluster for test env | scpipelines |
DOCKER_REGISTRY_URL | URL to the docker registry | |
PAAS_TEST_API_URL | URL of the API of the Kubernetes cluster for test environment | 192.168.99.100:8443 |
PAAS_STAGE_API_URL | URL of the API of the Kubernetes cluster for stage environment | 192.168.99.100:8443 |
PAAS_PROD_API_URL | URL of the API of the Kubernetes cluster for prod environment | 192.168.99.100:8443 |
PAAS_TEST_CA_PATH | Path to the certificate authority for test environment | /usr/share/jenkins/cert/ca.crt |
PAAS_STAGE_CA_PATH | Path to the certificate authority for stage environment | /usr/share/jenkins/cert/ca.crt |
PAAS_PROD_CA_PATH | Path to the certificate authority for prod environment | /usr/share/jenkins/cert/ca.crt |
PAAS_TEST_CLIENT_CERT_PATH | Path to the client certificate for test environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_STAGE_CLIENT_CERT_PATH | Path to the client certificate for stage environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_PROD_CLIENT_CERT_PATH | Path to the client certificate for prod environment | /usr/share/jenkins/cert/apiserver.crt |
PAAS_TEST_CLIENT_KEY_PATH | Path to the client key for test environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_STAGE_CLIENT_KEY_PATH | Path to the client key for stage environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_PROD_CLIENT_KEY_PATH | Path to the client key for test environment | /usr/share/jenkins/cert/apiserver.key |
PAAS_TEST_CLIENT_TOKEN_PATH | Path to the file containing the token for test env | |
PAAS_STAGE_CLIENT_TOKEN_PATH | Path to the file containing the token for stage env | |
PAAS_PROD_CLIENT_TOKEN_PATH | Path to the file containing the token for prod env | |
PAAS_TEST_CLIENT_TOKEN_ID | ID of the credential containing access token for test environment | |
PAAS_STAGE_CLIENT_TOKEN_ID | ID of the credential containing access token for stage environment | |
PAAS_PROD_CLIENT_TOKEN_ID | ID of the credential containing access token for prod environment | |
PAAS_TEST_CLUSTER_NAME | Name of the cluster for test environment | minikube |
PAAS_STAGE_CLUSTER_NAME | Name of the cluster for stage environment | minikube |
PAAS_PROD_CLUSTER_NAME | Name of the cluster for prod environment | minikube |
PAAS_TEST_CLUSTER_USERNAME | Name of the user for test environment | minikube |
PAAS_STAGE_CLUSTER_USERNAME | Name of the user for stage environment | minikube |
PAAS_PROD_CLUSTER_USERNAME | Name of the user for prod environment | minikube |
PAAS_TEST_SYSTEM_NAME | Name of the system for test environment | minikube |
PAAS_STAGE_SYSTEM_NAME | Name of the system for stage environment | minikube |
PAAS_PROD_SYSTEM_NAME | Name of the system for prod environment | minikube |
PAAS_TEST_NAMESPACE | Namespace for test environment | sc-pipelines-test |
PAAS_STAGE_NAMESPACE | Namespace for stage environment | sc-pipelines-stage |
PAAS_PROD_NAMESPACE | Namespace for prod environment | sc-pipelines-prod |
KUBERNETES_MINIKUBE | Will you connect to Minikube? | true |
REPO_WITH_BINARIES | URL to repo with the deployed jars | |
REPO_WITH_BINARIES_CREDENTIAL_ID | Credential ID used for the repo with jars | repo-with-binaries |
M2_SETTINGS_REPO_ID | The id of server from Maven settings.xml | artifactory-local |
JDK_VERSION | The name of the JDK installation | jdk8 |
PIPELINE_VERSION | What should be the version of the pipeline (ultimately also version of the jar) | 1.0.0.M1-${GROOVY,script ="new Date().format('yyMMdd_HHmmss')"}-VERSION |
GIT_EMAIL | The email used by Git to tag repo | |
GIT_NAME | The name used by Git to tag repo | Pivo Tal |
AUTO_DEPLOY_TO_STAGE | Should deployment to stage be automatic | false |
AUTO_DEPLOY_TO_PROD | Should deployment to prod be automatic | false |
API_COMPATIBILITY_STEP_REQUIRED | Should api compatibility step be required | true |
DB_ROLLBACK_STEP_REQUIRED | Should DB rollback step be present | true |
DEPLOY_TO_STAGE_STEP_REQUIRED | Should deploy to stage step be present | true |
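As a hedged illustration of how these variables are consumed, the snippet below exports a handful of them before running the pipeline scripts locally. The values are the documented defaults from the table, not real cluster endpoints, and the final `echo` is purely for demonstration:

```shell
# Sketch: export a few of the pipeline variables described above.
# The values are the defaults from the table, not real cluster endpoints.
export PAAS_TEST_API_URL="192.168.99.100:8443"
export PAAS_TEST_NAMESPACE="sc-pipelines-test"
export PAAS_TEST_CLUSTER_NAME="minikube"
export KUBERNETES_MINIKUBE="true"
export AUTO_DEPLOY_TO_PROD="false"

# Show what the pipeline scripts would see for the test environment.
echo "test env: cluster=${PAAS_TEST_CLUSTER_NAME}, api=${PAAS_TEST_API_URL}, namespace=${PAAS_TEST_NAMESPACE}"
```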
IMPORTANT: Skip this step if you're not using GCE.
In order to use GCE we need to have `gcloud` running. If you already have the CLI installed, skip this step; if not, install the CLI first.
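The exact installation command is not preserved here, so as a rough sketch (the messages are my own, not part of the project's scripts), you can first check whether `gcloud` is already on the `PATH`:

```shell
# Check whether the gcloud CLI is already installed before attempting setup.
if command -v gcloud >/dev/null 2>&1; then
  echo "gcloud already installed - skipping this step"
else
  echo "gcloud not found - install the Google Cloud SDK first"
fi
```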
@@ -783,7 +783,7 @@
## alertmanager data Persistent Volume access modes
## Must match those of existing PV or dynamic provisioner
- ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+ ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
##
accessModes:
- ReadWriteOnce
@@ -826,7 +826,7 @@
replicaCount: 1
## alertmanager resource requests and limits
- ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
@@ -868,7 +868,7 @@
pullPolicy: IfNotPresent
## configmap-reload resource requests and limits
- ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
@@ -903,7 +903,7 @@
replicaCount: 1
## kube-state-metrics resource requests and limits
- ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
@@ -1100,7 +1100,7 @@
## Prometheus server data Persistent Volume access modes
## Must match those of existing PV or dynamic provisioner
- ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+ ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes/
##
accessModes:
- ReadWriteOnce
@@ -1144,7 +1144,7 @@
replicaCount: 1
## Prometheus server resource requests and limits
- ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
@@ -1234,7 +1234,7 @@
replicaCount: 1
## pushgateway resource requests and limits
- ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
@@ -1524,14 +1524,14 @@
kubectl --namespace default port-forward $POD_NAME 3000
3. Login with the password from step 1 and the username: admin
Perform the aforementioned steps and add Grafana's datasource
-as Prometheus with URL http://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local
+as Prometheus with URL https://sc-pipelines-prometheus-prometheus-server.default.svc.cluster.local
You can pick the dashboard via the Grafana ID (2471). This is the default dashboard for the Spring Cloud Pipelines demo apps.
If you have both apps (github-webhook and github-analytics) running on production, you can now trigger the messages. Download the JSON with a sample request from the github-webhook repository.
Next, pick one of the github-webhook pods and forward its port locally to port 9876 like this:
$ kubectl port-forward --namespace=sc-pipelines-prod $( kubectl get pods --namespace=sc-pipelines-prod | grep github-webhook | head -1 | awk '{print $1}' ) 9876:8080
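The pod name in the command above is picked by a small `grep`/`head`/`awk` pipeline. Here is that selection step in isolation, run against canned `kubectl get pods` output (the pod-name suffixes below are fabricated for illustration):

```shell
# Simulated `kubectl get pods` output; the name suffixes are made-up examples.
pods='NAME                          READY  STATUS
github-analytics-7d9f4-abc12  1/1    Running
github-webhook-6c8d2-xyz99    1/1    Running
github-webhook-6c8d2-qrs42    1/1    Running'

# Same selection as in the port-forward command: take the first matching pod.
pod_name=$(echo "$pods" | grep github-webhook | head -1 | awk '{print $1}')
echo "$pod_name"   # -> github-webhook-6c8d2-xyz99
```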
Next, send a couple of requests (more than 4):
$ curl -X POST http://localhost:9876/ -d @path/to/issue-created.json --header "Content-Type: application/json"
Then if you check out Grafana you'll see that you went above the threshold.
As prerequisites you need to have shellcheck, bats, jq and ruby installed. If you're on a Linux machine then bats and shellcheck will be installed for you.
To install the required software on Linux, just type the following command:
$ sudo apt-get install -y ruby jq
If you're on a Mac, then just execute this command to install the missing software:
$ brew install jq
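To verify all of the prerequisites in one go, a small check like the following can help (the loop and its messages are my own, not part of the project's scripts):

```shell
# Report which of the required tools are present on the PATH.
for tool in shellcheck bats jq ruby; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: MISSING"
  fi
done
```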