(Development is a work in progress)
- Golang (Go) REST HTTP service; requests and responses use JSON messages
- TLS for all requests
- Integration and unit tests; run in parallel using dockertest for faster feedback
- Coverage results for key packages
- Postgres DB health check service
- User management service with Postgres for user creation
- JWT generation for authentication
- JWT authentication for interest calculations
- The 30-day interest for a deposit is called Delta
- Delta is computed for
  - each deposit
  - each bank with all its deposits
  - all banks!
- Sanity test client included, with settings for each deployment
- Docker image used for both Docker Compose and Kubernetes
- Kubernetes deployment with Ingress; Helm
- Docker Compose deployment for development
- Running directly from an Editor/IDE also covered
- Tracing enabled using Zipkin for observability
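As a sketch of the Delta idea above: treating Delta as 30 days of simple interest, the per-deposit, per-bank, and all-banks totals could look like the following Go code. The formula, type names, and field names here are assumptions for illustration, not the project's actual code.

```go
package main

import "fmt"

// Deposit is a hypothetical deposit with an amount and a yearly rate (percent).
type Deposit struct {
	Amount     float64
	APYPercent float64
}

// Delta approximates the 30-day interest for one deposit using simple
// interest: amount * (rate/100) * 30/365. The real service may differ.
func (d Deposit) Delta() float64 {
	return d.Amount * (d.APYPercent / 100) * 30 / 365
}

// BankDelta sums Delta over all deposits of one bank.
func BankDelta(deposits []Deposit) float64 {
	total := 0.0
	for _, d := range deposits {
		total += d.Delta()
	}
	return total
}

// AllBanksDelta sums BankDelta over all banks.
func AllBanksDelta(banks map[string][]Deposit) float64 {
	total := 0.0
	for _, deposits := range banks {
		total += BankDelta(deposits)
	}
	return total
}

func main() {
	banks := map[string][]Deposit{
		"bankA": {{Amount: 10000, APYPercent: 3.65}},
		"bankB": {{Amount: 7300, APYPercent: 7.3}},
	}
	fmt.Printf("all banks Delta: %.2f\n", AllBanksDelta(banks))
}
```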
export DEPOSITS_REST_SERVICE_ADDRESS=localhost
docker build -t tlscert-rest:v0.1 -f ./build/Dockerfile.openssl ./conf/tls && \
docker run --env DEPOSITS_REST_SERVICE_ADDRESS=$DEPOSITS_REST_SERVICE_ADDRESS -v $PWD/conf/tls:/tls tlscert-rest:v0.1

export COMPOSE_IGNORE_ORPHANS=True && \
docker-compose -f ./deploy/compose/docker-compose.external-db-trace-only.yml up

export COMPOSE_IGNORE_ORPHANS=True && \
docker-compose -f ./deploy/compose/docker-compose.seed.yml up --build

When running without TLS, make sure DEPOSITS_REST_SERVICE_TLS=false in docker-compose.rest.server.yml.
When running with TLS, make sure DEPOSITS_REST_SERVICE_TLS=true in docker-compose.rest.server.yml.
export COMPOSE_IGNORE_ORPHANS=True && \
docker-compose -f ./deploy/compose/docker-compose.rest.server.yml up --build

The --build option is there to pick up any code changes. COMPOSE_IGNORE_ORPHANS suppresses Docker Compose's orphan-container warnings when multiple compose files are used.
docker-compose -f ./deploy/compose/docker-compose.rest.server.yml logs -f --tail 1
The server-side DEPOSITS_REST_SERVICE_TLS value should be consistent and set for the client also.
export GODEBUG=x509ignoreCN=0
export DEPOSITS_REST_SERVICE_TLS=true
export DEPOSITS_REST_SERVICE_ADDRESS=localhost
go run ./cmd/sanitytestclient

Access the Zipkin service at http://localhost:9411/zipkin/
http://localhost:4000/debug/pprof/
http://localhost:4000/debug/vars
docker-compose -f ./deploy/compose/docker-compose.external-db-trace-only.yml down
docker-compose -f ./deploy/compose/docker-compose.rest.server.yml down

Run at the terminal:
docker build -f ./build/Dockerfile.calculate -t illumcalculate . && \
docker run illumcalculate

export DEPOSITS_REST_SERVICE_ADDRESS=localhost
docker build -t tlscert-rest:v0.1 -f ./build/Dockerfile.openssl ./conf/tls && \
docker run --env DEPOSITS_REST_SERVICE_ADDRESS=$DEPOSITS_REST_SERVICE_ADDRESS -v $PWD/conf/tls:/tls tlscert-rest:v0.1

To start only the external db and trace service for working with the local machine:
Start postgres and tracing as usual
export COMPOSE_IGNORE_ORPHANS=True && \
docker-compose -f ./deploy/compose/docker-compose.external-db-trace-only.yml up

export COMPOSE_IGNORE_ORPHANS=True && \
docker-compose -f ./deploy/compose/docker-compose.seed.yml up --build

Then set the following env variables when starting the server directly; change as needed and per your Editor/IDE:
export DEPOSITS_REST_SERVICE_TLS=true
export DEPOSITS_DB_DISABLE_TLS=true
export DEPOSITS_DB_HOST=127.0.0.1
export DEPOSITS_TRACE_URL=http://127.0.0.1:9411/api/v2/spans
go run ./cmd/server

The server-side DEPOSITS_REST_SERVICE_TLS value should be consistent and set for the client also.
export GODEBUG=x509ignoreCN=0
export DEPOSITS_REST_SERVICE_TLS=true
export DEPOSITS_REST_SERVICE_ADDRESS=localhost
go run ./cmd/sanitytestclient

(For better control; the local setup was tested with the latest Docker Desktop version with Kubernetes enabled.)
export DEPOSITS_REST_SERVICE_ADDRESS=restserversvc.127.0.0.1.nip.io
docker build -t tlscert:v0.1 -f ./build/Dockerfile.openssl ./conf/tls && \
docker run --env DEPOSITS_REST_SERVICE_ADDRESS=$DEPOSITS_REST_SERVICE_ADDRESS -v $PWD/conf/tls:/tls tlscert:v0.1

As a side note, for any troubleshooting, to see the openssl version being used in Docker:
docker build -t tlscert:v0.1 -f ./build/Dockerfile.openssl ./conf/tls && \
docker run -ti -v $PWD/conf/tls:/tls tlscert:v0.1 sh

You get a prompt at /tls. Check the version using the command:
openssl version

Using Helm to install the nginx ingress controller:
brew install helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm repo list

and then use
helm install ingress-nginx -f ./deploy/kubernetes/nginx-ingress-controller/helm-values.yaml ingress-nginx/ingress-nginx

to install the ingress controller. To see logs for the nginx ingress controller:
kubectl logs -l app.kubernetes.io/name=ingress-nginx -f

docker build -t rsachdeva/illuminatingdeposits.rest.server:v1.4.0 -f ./build/Dockerfile.rest.server .
docker build -t rsachdeva/illuminatingdeposits.seed:v1.4.0 -f ./build/Dockerfile.seed .
docker push rsachdeva/illuminatingdeposits.rest.server:v1.4.0
docker push rsachdeva/illuminatingdeposits.seed:v1.4.0

We only need to set secrets once, after the tls files have been generated:
kubectl delete secret illuminatingdeposits-rest-secret-tls
kubectl create --dry-run=client secret tls illuminatingdeposits-rest-secret-tls --key conf/tls/serverkeyto.pem --cert conf/tls/servercrtto.pem -o yaml > ./deploy/kubernetes/tls-secret-ingress.yaml

kubectl apply -f deploy/kubernetes/.

If the status from kubectl get pod -l job-name=seed | grep "Completed"
shows Completed for the seed pod, it can optionally be deleted:
kubectl delete -f deploy/kubernetes/seed.yaml

A NodePort at 30007 allows connecting a Postgres UI from outside the cluster locally to view data.
The server-side DEPOSITS_REST_SERVICE_TLS value should be consistent and set for the client also.
The client's DEPOSITS_REST_SERVICE_TLS is true when Ingress is used with tls.
export GODEBUG=x509ignoreCN=0
export DEPOSITS_REST_SERVICE_TLS=true
export DEPOSITS_REST_SERVICE_ADDRESS=restserversvc.127.0.0.1.nip.io
go run ./cmd/sanitytestclient

With this sanity test client, you will be able to:
- get the status of the Postgres DB
- add a new user
- JWT generation for Authentication
- JWT authentication for interest Delta calculations for each deposit, each bank with all deposits, and all banks

This quickly confirms a sanity check for the setup with Kubernetes/Docker. There are also separate integration and unit tests.
Access zipkin service at http://zipkin.127.0.0.1.nip.io
kubectl apply -f deploy/kubernetes/postgres-env.yaml
kubectl apply -f deploy/kubernetes/postgres.yaml

Check the logs:

kubectl logs pod/postgres-deposits-0

You should first see:

database system is ready to accept connections
And then execute migration/seed data for manual control when getting started:
kubectl apply -f deploy/kubernetes/seed.yaml

And if the status from kubectl get pod -l job-name=seed | grep "Completed"
shows Completed for the seed pod, it can optionally be deleted:
kubectl delete -f deploy/kubernetes/seed.yaml

To connect an external tool to postgres to see database internals, use a connection string similar to:

jdbc:postgresql://127.0.0.1:30007/postgres

If there is still an issue, you can try:

kubectl port-forward service/postgres 5432:postgres

Now you can easily connect using:

jdbc:postgresql://localhost:5432/postgres
kubectl apply -f deploy/kubernetes/zipkin.yaml

Access the Zipkin service at http://zipkin.127.0.0.1.nip.io. Sort Newest First and click Find Traces.
kubectl delete secret illuminatingdeposits-rest-secret-tls
kubectl create --dry-run=client secret tls illuminatingdeposits-rest-secret-tls --key conf/tls/serverkeyto.pem --cert conf/tls/servercrtto.pem -o yaml > ./deploy/kubernetes/tls-secret-ingress.yaml
kubectl apply -f deploy/kubernetes/tls-secret-ingress.yaml

kubectl apply -f deploy/kubernetes/rest-server.yaml

And see logs using:
kubectl logs -l app=restserversvc -f
kubectl delete -f ./deploy/kubernetes/.
helm uninstall ingress-nginx

Tests are designed to run in parallel, each with its own test server and docker-based postgres db using dockertest. To run all tests with coverage reports for the focused packages, first run the following only once, as the tests reuse this image (so they are faster):
docker pull postgres:13-alpine

go test -v -count=1 -covermode=count -coverpkg=./postgreshealth/... -coverprofile cover.out ./postgreshealth -run TestServiceServer_HealthOk && go tool cover -func cover.out

And then run the following with coverage for the key packages concerned:
go test -v -count=1 -covermode=count -coverpkg=./userauthn/...,./usermgmt/...,./postgreshealth/...,./interestcal/... -coverprofile cover.out ./... && go tool cover -func cover.out
go test -v -count=1 -covermode=count -coverpkg=./userauthn/...,./usermgmt/...,./postgreshealth/...,./interestcal/... -coverprofile cover.out ./... && go tool cover -html cover.out

Coverage result for key packages:
total: (statements) 96.3%
To run a single test - no coverage:
go test -v -count=1 -run=TestServiceServer_CreateUser ./usermgmt/...

To run a single test - with coverage:
go test -v -count=1 -covermode=count -coverpkg=./usermgmt -coverprofile cover.out -run=TestServiceServer_CreateUser ./usermgmt/... && go tool cover -func cover.out

The -v is for verbose output: log all tests as they are run. Search for "FAIL:" in the parallel test output to see the reason for failure in case any test fails. To run everything easily with verbose output:
go test -v ./...

The -count=1 is mainly to bypass test caching; it can be added as follows, if needed, to any go test command:
go test -v -count=1 ./...

See Editor specifics for viewing covered parts in the Editor.
Docker containers are mostly auto removed. This is done by passing true as the allowPurge argument to testserver.InitRestServer in your test. If you want to examine postgres db data for a particular test, you can temporarily set allowPurge to false, as in testserver.InitRestHttpServer(ctx, t, false), for your test. Then, after running the specific failed test, connect to the postgres db in the docker container using any db UI. As an example, if you want coverage on a specific package and to run a single test in that package with verbose output:
go test -v -count=1 -covermode=count -coverpkg=./usermgmt -coverprofile cover.out -run=TestServiceServer_CreateUser ./usermgmt/... && go tool cover -func cover.out

Any docker containers still running after tests should be manually removed:
docker ps
docker stop $(docker ps -qa)
docker rm -f $(docker ps -qa)
docker volume rm $(docker volume ls -qf dangling=true)

If for any reason no connection happens from client to server, the client hangs, or there are server start-up issues, run:
ps aux | grep "go run"
ps aux | grep "go_build"
to confirm whether something else is already running.

Make sure to follow the TLS setup above according to your deployment: Kubernetes, Docker Compose, or running from the Editor/IDE. Make sure to follow the ingress controller installation for the Kubernetes deployment.
v1.4.0

