Common Helm chart for deploying images into Kubernetes.

Prerequisites

1. Clone the repo:
git clone https://gitlab.prozorro.sale/prozorro-sale/prozorro-deployment

2. Create a k8s namespace. Compatible k8s versions are:
v1.21.14, v1.22.17, v1.23.15, v1.24.9

NS=prozorro-sale
kubectl create namespace $NS

3. Add the rg-stable repo to helm:
helm repo add rg-stable https://helm.prozorro.sale

4. Update the chart dependencies:
helm dependency update ./helm/prozorro-deployment

5. Set up private registry credentials in your namespace. The Helm chart provides the registry secret automatically. If you do not use CI/CD on gitlab.prozorro.sale, or you need to connect other registries, define these variables in values:
registry_credentials:
  - registry: registry.other.com
    username: "user.other.com"
    password: "password.other.com"
6. Set up an ingress controller if you need one. You can install the same ingress controller as used in Denovo (clause 6.1) or a suitable alternative (clause 6.2).

6.1 Ingress controller used in Denovo

Install the Denovo ingress-nginx version. Clone it into another directory:

git clone -b nginx-0.28.0 git@github.com:kubernetes/ingress-nginx.git
cd ingress-nginx/deploy/static/
kubectl apply -f mandatory.yaml
kubectl apply -f provider/baremetal/service-nodeport.yaml
Check the ingress controller version (the pod name will differ in your cluster):
sudo kubectl exec -it -n ingress-nginx pod/nginx-ingress-controller-5f4b8fc989-6xpqn -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.28.0
Build: git-1f93cb8f3
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.7
-------------------------------------------------------------------------------
namespace/ingress-nginx configured
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
kubectl -n ingress-nginx get all
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-controller-cff87d767-4mbrl 1/1 Running 0 23s
pod/nginx-ingress-controller-cff87d767-8rlwc 1/1 Running 0 3m53s
pod/nginx-ingress-controller-cff87d767-dkvnc 1/1 Running 0 23s
pod/nginx-ingress-controller-cff87d767-jd6v5 1/1 Running 0 23s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller 4/4 4 4 3m53s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-controller-cff87d767 4 4 4 3m53s
Additional info:
6.2 Alternative ingress controller

For Kubernetes 1.22-1.25 you can install ingress-nginx from its Helm chart:

helm install ingress-nginx ingress-nginx/ingress-nginx --version 4.3.0 --set controller.kind=DaemonSet -n ingress-nginx-testing

Alternatively, install the NGINX Inc. controller from its chart. Clone it into another directory:

git clone -b v1.5.0 https://github.com/nginxinc/kubernetes-ingress/
cd kubernetes-ingress/deployments/helm-chart
helm install rg-gw . --set rbac.create=true --set controller.hostNetwork=true --set controller.kind=daemonset --namespace ingress-nginx
Additional info:
7. Create bucket secrets for document-service (required for the S3 storage type). The bucket-secret holds the S3 bucket env variables:

BUCKET_ACCESS_KEY=
BUCKET_SECRET_KEY=
BUCKET_HOST=
BUCKET_NAME=
STORAGE_NAME=

kubectl -n $NS create secret generic bucket-secret --from-env-file=<path to env file>
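As a sketch (the credential values below are placeholders, not real ones), the env file and the secret command fit together like this; the kubectl line is only printed here because it needs cluster access:

```shell
NS=prozorro-sale

# Write the env file; each line is plain KEY=VALUE, no quoting or export.
cat > bucket.env <<'EOF'
BUCKET_ACCESS_KEY=AKIAEXAMPLEKEY
BUCKET_SECRET_KEY=example-secret
BUCKET_HOST=s3.example.com
BUCKET_NAME=prozorro-documents
STORAGE_NAME=s3
EOF

# Dry run: print the command instead of running it against a cluster.
echo "kubectl -n $NS create secret generic bucket-secret --from-env-file=bucket.env"
```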
Additional info: Document service. For local development use the memory storage type.
8. Create secrets with API keys for services. Generate a private/public key pair for RS256 signatures. The filenames should be api-key and api-key.pub:

openssl genpkey -out api-key -algorithm rsa
openssl rsa -in api-key -outform PEM -pubout -out api-key.pub
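Before uploading the keys you can sanity-check that the generated public key really corresponds to the private key (a quick local check, assuming a standard openssl install):

```shell
# Generate the pair exactly as above (openssl defaults to RSA 2048 here).
openssl genpkey -out api-key -algorithm rsa 2>/dev/null
openssl rsa -in api-key -outform PEM -pubout -out api-key.pub 2>/dev/null

# Re-derive the public key from the private key; diff exits 0 on a match.
openssl rsa -in api-key -pubout 2>/dev/null | diff -q - api-key.pub \
  && echo "key pair matches"
```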
Create the following secrets:
- procedure-api-keys (procedure)
- registry-api-keys (registry)
- jobber-api-keys (jobber)
- marketplace-api-keys (marketplace)
- auth-api-keys (auth)

kubectl -n $NS create secret generic <secret name> --from-file=<path to private key [api-key]> --from-file=<path to public key [api-key.pub]>
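The five secrets can be created in a loop. This dry-run sketch only prints the commands and assumes every service shares the single api-key pair generated above; drop the echo to actually run them:

```shell
NS=prozorro-sale
for secret in procedure-api-keys registry-api-keys jobber-api-keys \
              marketplace-api-keys auth-api-keys; do
  # Print instead of executing, so this can be reviewed first.
  echo kubectl -n "$NS" create secret generic "$secret" \
       --from-file=api-key --from-file=api-key.pub
done
```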
Generate another RS256 key pair for the document service. The filenames should be ds-key and ds-key.pub:

openssl genpkey -out ds-key -algorithm rsa
openssl rsa -in ds-key -outform PEM -pubout -out ds-key.pub

kubectl -n $NS create secret generic document-service-keys --from-file=<path to private key [ds-key]> --from-file=<path to public key [ds-key.pub]>
9. Create a date config with holidays and working days:
kubectl -n $NS create configmap date-config --from-file values/date-config.yml
10. Auth files. For more information about the auth file structure, check the auth README.
Example of an auth file:
brokers:
  broker_name:
    token: <hash>
    broker_info:
      legal_name:
        uk_UA: broker_legal_name
        en_US: broker_legal_name
    permissions:
      procedures:
        renewables:
          - procedure
          - bids
        timber: []
        subsoil:
          - procedure
          - bids
        railwayCargo:
          - procedure
          - bids
        dgf: []
      registry:
        object:
          - "*"
Create the following secrets:
- auth-file (procedure)
- registry-auth-file (registry)
- jobber-auth-file (jobber)
- marketplace-auth-file (marketplace)
- relocation-auth-file (relocation)
- survey-auth-file (survey)
- billing-auth-file (billing)

kubectl -n $NS create secret generic <secret name> --from-file=<path to auth.yml file>
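The same pattern works for the auth-file secrets. Another dry-run sketch, assuming a single auth.yml for every service (point each at its own file if they differ):

```shell
NS=prozorro-sale
for secret in auth-file registry-auth-file jobber-auth-file \
              marketplace-auth-file relocation-auth-file \
              survey-auth-file billing-auth-file; do
  # Print instead of executing, so this can be reviewed first.
  echo kubectl -n "$NS" create secret generic "$secret" --from-file=auth.yml
done
```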
11. Create a token for the mirror client. The token must be defined in the auth file (see point 10). To create a token, use the auth CLI.

Create the following secrets:
- service-auth-token (procedure)
- registry-service-auth-token (registry)
- announcement-service-auth-token (jobber)

kubectl -n $NS create secret generic <secret name> --from-literal=TOKEN=<token>
12. Credentials file for the registry service importer. Generate a credentials file for Google Drive and Google Spreadsheets by creating a project at https://console.cloud.google.com.

To create the creds secret:
kubectl -n $NS create secret generic gapi-creds-file --from-file=<path to creds.json file>
14. To install local or latest images:
helm package --app-version=<app_version> --version=<version> ./helm/prozorro-deployment
helm upgrade -i --namespace=<namespace> <release_name> prozorro-helm-*.tgz -f specs/base-procedure-spec.yaml -f specs/custom-procedure-spec.yaml -f values/local-values.yaml

More examples can be found in the Makefile:
GIT_STAMP ?= $(shell git describe)
SECRET ?= values/secret-dummy.yaml
HELM_CI_ENV_VAR ?= values/ci-env-dummy.yaml
HELM_ENV_FILES = -f $(SECRET) -f $(HELM_CI_ENV_VAR)
DEV_SPACES = DEV EPIC DEMO
PROD_SPACES = SANDBOX STAGE PROD SANDBOX_DGF PROD_DGF
KUBEVAL_SCHEMA_LOCATION ?= https://gitlab.prozorro.sale/prozorro-sale/kubernetes-json-schema/-/raw/master/
CHART_PATH ?= ./helm/prozorro-deployment
CHART_MUSEUM_URL ?= https://helm.prozorro.sale
CHART_MUSEUM_USER ?= ""
CHART_MUSEUM_PASS ?= ""
DEV_VALUES ?= -f values/non_prod/prozorro-dev/values.yaml -f specs/dgf-base-procedure-spec.yaml -f specs/dgf-custom-procedure-spec.yaml -f specs/base-procedure-spec.yaml -f specs/custom-procedure-spec.yaml
DEV_NAMESPACE ?= prozorro-dev
DEV_RELEASE_NAME ?= prozorro-dev
DEV_KUBERNETES_VERSION ?= 1.29.4
EPIC_VALUES ?= -f values/non_prod/prozorro-epic/values.yaml -f specs/base-procedure-spec.yaml -f specs/custom-procedure-spec.yaml -f specs/dgf-custom-procedure-spec.yaml -f specs/dgf-base-procedure-spec.yaml -f specs/epic-procedure-spec.yaml
EPIC_NAMESPACE ?= prozorro-epic
EPIC_RELEASE_NAME ?= prozorro-epic
EPIC_KUBERNETES_VERSION ?= 1.29.4
DEMO_VALUES ?= -f values/non_prod/prozorro-demo/values.yaml -f specs/base-procedure-spec.yaml -f specs/custom-procedure-spec.yaml -f specs/dgf-custom-procedure-spec.yaml -f specs/dgf-base-procedure-spec.yaml -f specs/epic-procedure-spec.yaml
DEMO_NAMESPACE ?= prozorro-demo
DEMO_RELEASE_NAME ?= prozorro-demo
DEMO_KUBERNETES_VERSION ?= 1.29.4
SANDBOX_VALUES ?= -f values/non_prod/prozorro-sandbox/values.yaml -f specs/base-procedure-spec.yaml -f specs/custom-procedure-spec.yaml -f specs/dgf-custom-procedure-spec.yaml -f specs/dgf-base-procedure-spec.yaml -f image-versions.yaml
SANDBOX_NAMESPACE ?= prozorro-sandbox
SANDBOX_RELEASE_NAME ?= prozorro-sandbox
SANDBOX_KUBERNETES_VERSION ?= 1.29.4
STAGE_VALUES ?= -f values/non_prod/prozorro-staging/values.yaml -f specs/dgf-custom-procedure-spec.yaml -f specs/dgf-base-procedure-spec.yaml -f specs/base-procedure-spec.yaml -f specs/custom-procedure-spec.yaml -f image-versions.yaml
STAGE_NAMESPACE ?= prozorro-staging
STAGE_RELEASE_NAME ?= prozorro-staging
STAGE_KUBERNETES_VERSION ?= 1.29.4
PROD_VALUES ?= -f values/prod/prozorro-prod/values.yaml -f specs/base-procedure-spec.yaml -f image-versions.yaml
PROD_NAMESPACE ?= prozorro-prod
PROD_RELEASE_NAME ?= prozorro-prod
PROD_KUBERNETES_VERSION ?= 1.29.4
SANDBOX_DGF_VALUES ?= -f values/non_prod/prozorro-sandbox-dgf/values.yaml -f specs/dgf-custom-procedure-spec.yaml -f specs/dgf-base-procedure-spec.yaml -f image-versions.yaml
SANDBOX_DGF_NAMESPACE ?= prozorro-sandbox-dgf
SANDBOX_DGF_RELEASE_NAME ?= prozorro-sandbox-dgf
SANDBOX_DGF_KUBERNETES_VERSION ?= 1.29.4
PROD_DGF_VALUES ?= -f values/prod/prozorro-sale-dgf/values.yaml -f specs/dgf-base-procedure-spec.yaml -f image-versions.yaml
PROD_DGF_NAMESPACE ?= prozorro-sale-dgf
PROD_DGF_RELEASE_NAME ?= prozorro-sale-dgf
PROD_DGF_KUBERNETES_VERSION ?= 1.29.4
LOCAL_VALUES ?= -f values/non_prod/local/values.yaml $(HELM_ENV_FILES)
AWS_DEV_VALUES ?= $(DEV_VALUES) -f values/non_prod/prozorro-dev/values-aws.yaml $(HELM_ENV_FILES)
AWS_EPIC_VALUES ?= $(EPIC_VALUES) -f values/non_prod/prozorro-epic/values-aws.yaml $(HELM_ENV_FILES)
AWS_DEMO_VALUES ?= $(DEMO_VALUES) -f values/non_prod/prozorro-demo/values-aws.yaml $(HELM_ENV_FILES)
AWS_STAGE_VALUES ?= $(STAGE_VALUES) -f values/non_prod/prozorro-staging/values-aws.yaml $(HELM_ENV_FILES)
AWS_SANDBOX_VALUES ?= $(SANDBOX_VALUES) -f values/non_prod/prozorro-sandbox/values-aws.yaml $(HELM_ENV_FILES)
AWS_SANDBOX_DGF_VALUES ?= $(SANDBOX_DGF_VALUES) -f values/non_prod/prozorro-sandbox-dgf/values-aws.yaml $(HELM_ENV_FILES)
AWS_PROD_VALUES ?= $(PROD_VALUES) -f values/prod/prozorro-prod/values-aws.yaml $(HELM_ENV_FILES)
AWS_PROD_DGF_VALUES ?= $(PROD_DGF_VALUES) -f values/prod/prozorro-sale-dgf/values-aws.yaml $(HELM_ENV_FILES)
GREEN = $(shell tput -Txterm setaf 2)
YELLOW = $(shell tput -Txterm setaf 3)
WHITE = $(shell tput -Txterm setaf 7)
RESET = $(shell tput -Txterm sgr0)
GRAY = $(shell tput -Txterm setaf 6)
TARGET_MAX_CHAR_NUM = 20
.EXPORT_ALL_VARIABLES:
all: help
version:
$(eval GIT_TAG ?= $(shell git describe --abbrev=0))
$(eval VERSION ?= $(shell read -p "Version: " VERSION; echo $$VERSION))
echo "## prozorro-deployment was updated from $(GIT_TAG) to $(VERSION)\n### Tagged release $(VERSION)\n" > Changelog-$(VERSION).txt
git log --oneline --no-decorate --no-merges $(GIT_TAG)..HEAD | sed 's/^/ /' >> Changelog-$(VERSION).txt
python create_changelogs.py -v $(VERSION)
git tag --cleanup=verbatim -a -e -F Changelog-$(VERSION).txt $(VERSION)
validate-helm-lint: helm-preparing
@$(foreach var,$(PROD_SPACES), echo "\n\n======= Check $(var) =======\n\n" \
&& helm3 lint --namespace=$($(var)_NAMESPACE) $(CHART_PATH) $($(var)_VALUES) || exit;)
validate-helm-lint-dev:
make helm-versions-update ARGS="--env=$(ENV_NAME)"
make helm-preparing
@echo "\n\n======= Check $(ENV_NAME) =======\n\n" \
&& helm3 lint --namespace=$($(ENV_NAME)_NAMESPACE) $(CHART_PATH) $($(ENV_NAME)_VALUES)
validate-helm-charts-kubeval: helm-preparing
@$(foreach var,$(PROD_SPACES), echo "\n\n======= Check $(var) on kubernetes-version $($(var)_KUBERNETES_VERSION) =======\n\n" \
&& helm3 kubeval --exit-on-error --kubernetes-version $($(var)_KUBERNETES_VERSION) \
--strict --name-template=$($(var)_RELEASE_NAME) --namespace=$($(var)_NAMESPACE) $($(var)_VALUES) \
$(CHART_PATH) || exit;)
## Check charts with kubeval for dev
validate-helm-charts-kubeval-dev:
make helm-versions-update ARGS="--env=$(ENV_NAME)"
make helm-preparing
@echo "\n\n======= Check $(ENV_NAME) on kubernetes-version $($(ENV_NAME)_KUBERNETES_VERSION) =======\n\n" \
&& helm3 kubeval --exit-on-error --kubernetes-version $($(ENV_NAME)_KUBERNETES_VERSION) \
--strict --name-template=$($(ENV_NAME)_RELEASE_NAME) --namespace=$($(ENV_NAME)_NAMESPACE) $($(ENV_NAME)_VALUES) \
$(CHART_PATH)
## Deploy DEV | Deploy
deploy-dev: helm-package
helm3 upgrade -i --namespace=$(DEV_NAMESPACE) $(DEV_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_DEV_VALUES)
## Deploy EPIC
deploy-epic: helm-package
helm3 upgrade -i --namespace=$(EPIC_NAMESPACE) $(EPIC_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_EPIC_VALUES)
## Deploy DEMO
deploy-demo: helm-package
helm3 upgrade -i --namespace=$(DEMO_NAMESPACE) $(DEMO_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_DEMO_VALUES)
## Deploy SANDBOX
deploy-sandbox: helm-package
helm3 upgrade -i --namespace=$(SANDBOX_NAMESPACE) $(SANDBOX_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_SANDBOX_VALUES)
## Deploy STAGE
deploy-stage: helm-package
helm3 upgrade -i --namespace=$(STAGE_NAMESPACE) $(STAGE_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_STAGE_VALUES)
## Deploy LOCAL MINIKUBE
deploy-local: helm-package
helm3 upgrade -i --namespace=prozorro-sale prozorro-local ./prozorro-helm-chart-$(GIT_STAMP).tgz $(DEV_VALUES) $(LOCAL_VALUES)
## Deploy PROD
deploy-prod: helm-package
helm3 upgrade -i --namespace=$(PROD_NAMESPACE) $(PROD_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_PROD_VALUES)
## Deploy SANDBOX DGF
deploy-sandbox-dgf: helm-package
helm3 upgrade -i --namespace=$(SANDBOX_DGF_NAMESPACE) $(SANDBOX_DGF_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_SANDBOX_DGF_VALUES)
## Deploy PROD DGF
deploy-prod-dgf: helm-package
helm3 upgrade -i --namespace=$(PROD_DGF_NAMESPACE) $(PROD_DGF_RELEASE_NAME) ./prozorro-helm-chart-$(GIT_STAMP).tgz $(AWS_PROD_DGF_VALUES)
## Deploy rollout restart deployment | HELPERS
deploy-rollout-restart:
@kubectl -n $(NAMESPACE) rollout restart deployment
@sleep 10
@helm3 -n $(NAMESPACE) status $(NAMESPACE)
## Update helm charts dependency
helm-preparing:
helm3 dependency update $(CHART_PATH)
helm3 dependency list $(CHART_PATH)
ls -l $(CHART_PATH)/charts
helm-package:
helm3 package --app-version=$(GIT_STAMP) --version=$(GIT_STAMP) $(CHART_PATH)
## Update helm charts version for other environment
helm-versions-update:
python3 update_helm_versions.py $(ARGS)
mongo-dump-data-and-rs:
$(eval OPLOG_DB ?= $(shell read -p "Host and port of oplog db: " OPLOG_DB; echo $$OPLOG_DB))
$(eval DATA_DB ?= $(shell read -p "Host and port of data db: " DATA_DB; echo $$DATA_DB))
mongodump --host="$(OPLOG_DB)" -d local -c oplog.rs -o oplog_dump/
mongodump --oplog --host="$(DATA_DB)" -o db_dump/
tmp-mongo-restore-and-apply-rs:
$(eval TIME ?= $(shell read -p "Time: " TIME; echo $$TIME))
docker run -p 27019:27017 --name=tmp-mongo -d mongo --replSet rs0
sleep 2
docker run --network="host" --rm mongo mongo --host 127.0.0.1:27019 --eval "rs.initiate()"
sleep 2
mongorestore -vvvv --objcheck --host="127.0.0.1:27019" --dir=db_dump/
sleep 2
mongorestore -vvvv --host="127.0.0.1:27019" --dir=oplog_dump/ --oplogReplay --oplogLimit $(TIME)
mongo-dump-zip:
mongodump --host="127.0.0.1:27019" -o final_dump/ --gzip
docker rm -f tmp-mongo
check-versions:
python3 check_services_to_update.py
## Delete previous release job
pre-deploy-job-cleaner:
kubectl --namespace=$(NAMESPACE) get jobs -o custom-columns=:.metadata.name | grep -i -E 'elastic-index|reindex-elastic|create-index|reindex-data|apply-migrations|survey-create-cache-table' | xargs --no-run-if-empty kubectl --namespace=$(NAMESPACE) delete jobs
## Validate auth files
TEST_ENV ?= dev epic sandbox staging prod sandbox-dgf sale-dgf
validate-auth-schema:
@$(foreach var,$(TEST_ENV), echo "\n======= Check $(var) =======\n" \
&& python3 validate_auth_schema.py --env $(var) || exit;)
## Validate release schedule
RELEASE_ENV ?= prod
validate-release-schedule:
python3 check_release_schedule.py $(RELEASE_TAG) --env $(RELEASE_ENV)
## CI Cleanup job | Maintenance jobs
ci-job-cleaner:
@docker info
@echo ''
@echo '${GRAY}CI docker artifacts:${RESET}'
@echo '${WHITE}CI docker disk usage artifacts:${RESET}'
@docker system df
@echo ''
@echo '${WHITE}CI docker containers artifacts:${RESET}'
@docker ps -as
@echo ''
@echo '${WHITE}CI docker network artifacts:${RESET}'
@docker network ls
@echo ''
@echo '${WHITE}CI docker image artifacts:${RESET}'
@docker image ls -a
@echo ''
@echo ''
@echo '${GRAY}Prune CI docker artifacts:${RESET}'
@docker ps -a -q | xargs --no-run-if-empty docker stop || exit 0
@docker ps -a -q | xargs --no-run-if-empty docker rm -v -f || exit 0
@docker rmi -f $$(docker images -q | uniq) || exit 0
@docker system prune -a -f --volumes
## Shows help. | Help
help:
@echo ''
@echo 'Usage:'
@echo ''
@echo ' ${YELLOW}make${RESET} ${GREEN}<target>${RESET}'
@echo ''
@echo 'Targets:'
@awk '/^[a-zA-Z\-\_]+:/ { \
helpMessage = match(lastLine, /^## (.*)/); \
if (helpMessage) { \
if (index(lastLine, "|") != 0) { \
stage = substr(lastLine, index(lastLine, "|") + 1); \
printf "\n${GRAY}%s:\n\n", stage; \
} \
helpCommand = substr($$1, 0, index($$1, ":")-1); \
helpMessage = substr(lastLine, RSTART + 3, RLENGTH); \
if (index(lastLine, "|") != 0) { \
helpMessage = substr(helpMessage, 0, index(helpMessage, "|")-1); \
} \
printf "  ${YELLOW}%-$(TARGET_MAX_CHAR_NUM)s${RESET} ${GREEN}%s${RESET}\n", helpCommand, helpMessage; \
} \
} \
{ lastLine = $$0 }' $(MAKEFILE_LIST)
@echo ''
Swift storage

If you need to deploy the document service with Swift file storage, you need to create the Kubernetes services swift-auth and swift-storage with the auth and storage IPs defined.
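One common way to point in-cluster names at external IPs is a selector-less Service plus a manual Endpoints object. This is a sketch, not the project's required manifest; the namespace, port (8080 is the usual Swift proxy port), and IP below are assumptions to adapt:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: swift-storage
  namespace: prozorro-sale
spec:
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: swift-storage    # must match the Service name exactly
  namespace: prozorro-sale
subsets:
  - addresses:
      - ip: 10.0.0.15    # hypothetical Swift storage IP
    ports:
      - port: 8080
```

A second pair of the same shape, named swift-auth, would cover the auth endpoint.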