Use Kubernetes to deploy Docker containers for load balancing
Step 1: Prepare the machine for Kubernetes
$ brew cask install minikube
$ brew install kubectl
$ minikube start
$ minikube status
Utilize the Docker daemon within Minikube
Normally you build a Docker image and push it to a registry that Kubernetes pulls from. When using a single-VM Kubernetes cluster, however, it is really handy to reuse Minikube's built-in Docker daemon: you don't have to run a Docker registry on your host machine and push the image into it. You can just build inside the same Docker daemon that Minikube uses, which speeds up local experiments. Just make sure you tag your Docker image with something other than 'latest', and use that tag when you pull the image.
$ eval $(minikube docker-env)
$ kubectl config current-context
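To confirm the shell now points at Minikube's Docker daemon rather than your host's, list the running containers; you should see Minikube's own system containers:
$ docker ps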
Step 2: Prepare the Docker containers
We need to define four containers:
1) Nginx: serves static files
2) Django/Gunicorn: Gunicorn serves the Python application
3) DB: MySQL
4) Solr: search engine
2.1) Select a Docker base image
The python images come in many flavors, each designed for a specific use case.
2.1.a) python:<version>: the default image; it includes a large number of very common Debian packages, which reduces the number of packages that images derived from it need to install.
2.1.b) python:<version>-slim: contains only the minimal packages needed to run Python.
2.1.c) python:<version>-alpine: based on Alpine Linux (the base image is ~5 MB). Alpine uses musl libc instead of glibc, so certain software might run into issues depending on the depth of its libc requirements; however, most software has no problem with this.
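To see the size difference for yourself, pull two flavors and compare (the 3.6.7 tags are just examples):
$ docker pull python:3.6.7
$ docker pull python:3.6.7-alpine
$ docker images python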
2.2) Gunicorn configuration
Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second. Generally we recommend
(2 x $num_cores) + 1
as the number of workers to start off with.
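A quick way to compute that starting value on the target machine (a Python one-liner):
$ python -c "import multiprocessing as m; print(2 * m.cpu_count() + 1)"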
2.3) NGINX configuration
2.3.a) worker_processes – The number of NGINX worker processes (the default is 1). In most cases, running one worker process per CPU core works well, and we recommend setting this directive to auto to achieve that. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O.
Get the number of available cores:
grep processor /proc/cpuinfo | wc -l
2.3.b) worker_connections – The maximum number of connections that each worker process can handle simultaneously. The default is 512, but most systems have enough resources to support a larger number. The appropriate setting depends on the size of the server and the nature of the traffic, and can be discovered through testing.
Get the number of connections that can be served simultaneously:
ulimit -n
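Putting the two directives together, a minimal nginx.conf fragment could look like this (1024 is illustrative; use the numbers measured above):
worker_processes auto;
events {
    worker_connections 1024;
}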
=======================
Prepare the Docker containers:
a) Prepare Django Docker
Create a folder named "django-docker" (mkdir django-docker) that contains a "Dockerfile" (touch Dockerfile) and a requirements.txt.
Dockerfile contents:
FROM python:3.6.7-alpine
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD ./ /code/
CMD ["python", "mysite/manage.py", "runserver", "0.0.0.0:8001"]
requirements.txt contents:
beautifulsoup4==4.7.1
bs4==0.0.1
certifi==2019.3.9
chardet==3.0.4
Django==2.1.7
gunicorn==19.9.0
djangorestframework==3.9.2
html5lib==1.0.1
idna==2.8
lxml==4.3.2
numpy==1.16.2
pandas==0.24.2
Pillow==5.4.1
pymarc==3.1.12
PyMySQL==0.9.3
pysolr==3.8.1
python-dateutil==2.8.0
pytz==2018.9
requests==2.21.0
six==1.12.0
soupsieve==1.8
urllib3==1.24.1
webencodings==0.5.1
Build the Docker image:
$ docker build -t django-docker:0.0.1 .
Run the container and publish port 8001:
$ docker run -p 8001:8001 django-docker:0.0.1
Now you can view the Django project by visiting http://localhost:8001.
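As a quick sanity check from another terminal (any HTTP client works):
$ curl -I http://localhost:8001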
================================
Automate the build process using Docker Compose
1) Create a YAML file (touch docker-compose.yml)
File contents:
version: '3'
services:
  web:
    build: ./django-docker/
    command: python mysite/manage.py runserver 0.0.0.0:8001
    ports:
      - "8001:8001"
2) Remove the CMD line from ./django-docker/Dockerfile, because the YAML file's command now does that.
3) Build, then run the containers
$ docker-compose build
$ docker-compose up -d
3.1) When you have more than one YAML file, you can choose which one to use for build and up:
$ docker-compose -f production.yml up -d
3.2) List the currently active containers:
$ docker-compose ps
3.3) To see which environment variables are available to the web service, run:
$ docker-compose run web env
3.4) To view the logs:
$ docker-compose logs
3.5) To stop the containers:
$ docker-compose down
4) To run shell commands inside a container (note: the Alpine-based Python image ships with sh, not bash):
$ docker-compose exec web sh
Common Django tasks:
$ docker-compose exec web python mysite/manage.py migrate
$ docker-compose exec web python mysite/manage.py createsuperuser
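Since Nginx will later serve the static files from a shared volume, a collectstatic run is usually needed as well (this assumes STATIC_ROOT is configured in settings.py):
$ docker-compose exec web python mysite/manage.py collectstatic --noinput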
Note:
Container content should be treated as read-only; if we have dynamic data, we can map a host directory to a container directory so changes persist.
I.e., in the YAML file, map the host /TargetDir directory to the container /ContainerDir directory:
volumes:
  - /TargetDir:/ContainerDir
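For example, a hypothetical mapping that persists user-uploaded media outside the container (both paths are illustrative):
volumes:
  - ./media:/code/media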
b) Create the DB container
Update the YAML file: add a db service (under the existing services: key) and connect it to the web container's network:
db:
  image: mysql:5.7
  restart: always
  environment:
    MYSQL_DATABASE: 'db'
    MYSQL_USER: 'user'
    MYSQL_PASSWORD: 'password'
    MYSQL_ROOT_PASSWORD: 'password'
  ports:
    - '3306:3306'
  expose:
    - '3306'
  volumes:
    - my-db:/var/lib/mysql
  networks:
    - backend
networks:
  backend:
    driver: bridge
volumes:
  my-db:
The YAML file contains passwords, and it is better to keep them outside of it; a .env file is the usual place.
Create a .env file in the same directory as the YAML file:
MYSQL_DATABASE=ddl
MYSQL_USER=user1
MYSQL_PASSWORD=root
MYSQL_ROOT_PASSWORD=root
DB_SERVICE=db
Use ${VARIABLE_NAME} to read any value from the .env file inside the YAML file.
So the YAML file becomes:
db:
  image: mysql:5.7
  restart: always
  environment:
    MYSQL_DATABASE: ${MYSQL_DATABASE}
    MYSQL_USER: ${MYSQL_USER}
    MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
  ports:
    - '3306:3306'
  expose:
    - '3306'
  volumes:
    - my-db:/var/lib/mysql
  networks:
    - backend
networks:
  backend:
    driver: bridge
volumes:
  my-db:
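On the Django side, a minimal sketch of the matching database settings, assuming the Compose service name db is used as the host and PyMySQL (already in requirements.txt) as the driver; the variable names are the ones defined in .env above:
# mysite/settings.py (sketch)
import os
import pymysql

pymysql.install_as_MySQLdb()  # let Django's MySQL backend use PyMySQL

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ.get('MYSQL_DATABASE'),
        'USER': os.environ.get('MYSQL_USER'),
        'PASSWORD': os.environ.get('MYSQL_PASSWORD'),
        'HOST': os.environ.get('DB_SERVICE', 'db'),  # service name resolves on the backend network
        'PORT': '3306',
    }
}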
c) Create the Nginx load-balancer container
Create a folder named "nginx" (mkdir nginx) that contains a "Dockerfile" (touch Dockerfile).
Dockerfile contents
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
COPY default /etc/nginx/conf.d/default.conf
default file contents
server {
    listen 80;
    server_name localhost;
    charset utf-8;

    location /static {
        alias /www/static;
    }

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
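Because proxy_pass targets the service name web, this setup can also fan out across several Django containers: Docker's embedded DNS returns every replica's address, and nginx round-robins across the addresses it resolved at startup. To try it, first drop web's host port mapping from the YAML below (only one container can bind host port 8000), then scale:
$ docker-compose up -d --scale web=3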
The final YAML file
The YAML file that configures Django, the DB, and Nginx will look like this:
version: "3"
services:
web:
restart: always
build: ./django-docker/
expose:
- "8000"
ports:
- "8000:8000"
depends_on:
- db
networks:
- backend
volumes:
- web-django:/usr/src/app
- web-static:/usr/src/app/static
env_file: .env
environment:
DEBUG: 'true'
command: /usr/local/bin/gunicorn ddlDjango.wsgi:application -w 2 -b :8000
db:
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: 'db'
MYSQL_USER: 'user'
MYSQL_PASSWORD: 'password'
MYSQL_ROOT_PASSWORD: 'password'
ports:
- '3306:3306'
expose:
- '3306'
volumes:
- my-db:/var/lib/mysql
networks:
- backend
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
volumes:
- web-static:/www/static
networks:
- backend
networks:
backend:
driver: bridge
Notes:
1) networks: when containers are placed on the same network, they can reach each other using the service name (or another alias) as the hostname.
2) depends_on: expresses start order (and implicitly build/pull order).
3) ports: maps a container port to the host.
4) expose: opens the port inside the container, making it reachable by other containers, but not by the host (it does not actually publish the port).
5) env_file: .env reads the contents of the .env file and passes them to the container as environment variables.
6) environment: passes the given key/value pairs to the container as environment variables.
settings.py dynamic value example:
import os
SECRET_KEY = os.environ.get('SECRET_KEY','sdfgtw54sdgsdfgsdfgsdfgsfd')
==========
Kubernetes on top of the Docker containers
To deploy the containers on Kubernetes, we need to convert the existing Docker Compose YAML files into the necessary Kubernetes configuration, which means creating a Kubernetes Deployment and a Kubernetes Service for every container that has an exposed port.
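As a starting point, here is a minimal sketch of what the web service's Deployment and Service could look like (replicas, names, and labels are illustrative; the image tag django-docker:0.0.1 is the one built earlier inside Minikube's Docker daemon). The kompose tool can also generate such files automatically from docker-compose.yml ($ kompose convert).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: django-docker:0.0.1
          imagePullPolicy: IfNotPresent   # use the image already present in minikube's daemon
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 8000
      targetPort: 8000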