Monday, July 22, 2019

Deploy Nginx and HAProxy load balancer with SSL certificate



Summary :

Nginx is a high-performance web server. It is a more flexible and lightweight program compared to the Apache web server. This article shows how to install and configure Nginx with an external HAProxy load balancer.

Environment :

Server                IP            Role
nginx-node1           10.152.0.25   nginx
nginx-node2           10.152.0.26   nginx
nginx-loadbalancer    10.152.0.27   load balancer



Install and Configure Nginx on CentOS 7

To add the CentOS 7 EPEL repository, open terminal and use the following command:

# yum install epel-release














Install Nginx using the following yum command:

# yum install nginx














To get Nginx running, type:

# systemctl start nginx
# systemctl enable nginx







To allow HTTP and HTTPS traffic:

# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=public --add-service=https
# firewall-cmd --reload












Verify Nginx is running over HTTP using the public IP:
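A quick check from the command line (a hedged example; assumes curl is installed and <PUBLIC_IP> is the node's external address):

# curl -I http://<PUBLIC_IP>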

 










Create an SSL Certificate on Nginx for CentOS 7

Note: This step is not required if HAProxy is configured with the SSL certificate (SSL termination at the load balancer).

TLS, and its predecessor SSL, are web protocols used to wrap normal traffic in a protected, encrypted wrapper. With this enabled, a server can send traffic safely to clients without the concern that messages will be intercepted and read by an outside party.

Create a new directory to store the private key:

# mkdir /etc/nginx/ssl

These files must be kept strictly private, so we will modify the permissions to make sure only the root user has access:

# chmod 700 /etc/nginx/ssl


Create the SSL key and certificate files with openssl:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx-selfsigned.key -out /etc/nginx/ssl/nginx-selfsigned.crt

  • openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.
  • req -x509: This specifies that we want to use X.509 certificate signing request (CSR) management. The "X.509" is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management.
  • -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Nginx to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening, since we would have to enter it after every restart.
  • -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
  • -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
  • -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
  • -out: This tells OpenSSL where to place the certificate that we are creating.
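To sanity-check the generated certificate, openssl can read back its subject and validity dates:

# openssl x509 -in /etc/nginx/ssl/nginx-selfsigned.crt -noout -subject -dates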















While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

# openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

# cat /etc/ssl/certs/dhparam.pem | sudo tee -a /etc/nginx/ssl/nginx-selfsigned.crt
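As an alternative to appending the parameters to the certificate, nginx can also be pointed at them explicitly with the ssl_dhparam directive inside the server block (a hedged note; adjust the path if the file is stored elsewhere):

ssl_dhparam /etc/ssl/certs/dhparam.pem;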



Configure nginx.conf to use certificates:
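On CentOS 7 the main configuration file is /etc/nginx/nginx.conf:

# vim /etc/nginx/nginx.conf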

 
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        listen       443 ssl http2 default_server;
        listen       [::]:443 ssl http2 default_server;

        server_name  _;
        root         /usr/share/nginx/html;

        ssl_certificate "/etc/nginx/ssl/nginx-selfsigned.crt";
        ssl_certificate_key "/etc/nginx/ssl/nginx-selfsigned.key";

        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;


        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }

}



Check nginx.conf syntax:

# nginx -t

Reload nginx service:

# systemctl reload nginx













Verify Nginx is running over HTTPS using the public IP:
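A quick check from the command line; -k is needed because the certificate is self-signed (<PUBLIC_IP> is the node's external address):

# curl -kI https://<PUBLIC_IP>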











Install HAProxy :

# yum install haproxy -y











Enable and start the haproxy service :

# systemctl enable haproxy
# systemctl start haproxy






Allow firewall rules for haproxy :

# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --permanent --zone=public --add-service=https
# firewall-cmd --permanent --zone=public --add-port=8181/tcp
# firewall-cmd --reload

 









Add host entries on /etc/hosts :

# vim /etc/hosts
 
10.152.0.25 nginx-node1
10.152.0.26 nginx-node2


Create a new directory to store our private key :

# mkdir /etc/haproxy/ssl

These files must be kept strictly private, so we will modify the permissions to make sure only the root user has access:

# chmod 700 /etc/haproxy/ssl


Create the SSL key and certificate files with openssl:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/haproxy/ssl/haproxy-selfsigned.key -out /etc/haproxy/ssl/haproxy-selfsigned.crt

 
While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

# openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

# cat /etc/ssl/certs/dhparam.pem /etc/haproxy/ssl/haproxy-selfsigned.crt /etc/haproxy/ssl/haproxy-selfsigned.key > /etc/haproxy/ssl/haproxy-selfsigned-crt-key.pem


 
Configure haproxy.cfg :

# vim /etc/haproxy/haproxy.cfg

global
  log 127.0.0.1 local0
  maxconn 4000
  daemon
  uid 99
  gid 99

defaults
  log     global
  mode    http
  option  httplog
  option  dontlognull
  timeout server 5s
  timeout connect 5s
  timeout client 5s
  stats enable
  stats refresh 10s
  stats uri /haproxy?stats

frontend https_frontend
  bind *:80
  bind *:443 ssl crt /etc/haproxy/ssl/haproxy-selfsigned-crt-key.pem
  mode http
  option httpclose
  option forwardfor
  reqadd X-Forwarded-Proto:\ https
  default_backend web_server
 
backend web_server
  mode http
  balance roundrobin
  cookie SERVERID insert indirect nocache
  server nginx-node1 10.152.0.25:80 check cookie nginx-node1
  server nginx-node2 10.152.0.26:80 check cookie nginx-node2
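Before relying on the new configuration, it can be validated and the service restarted (a hedged sketch; -c only checks the syntax of the given file):

# haproxy -c -f /etc/haproxy/haproxy.cfg
# systemctl restart haproxy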

 

Verify HAProxy statistics :
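With stats enable set in the defaults section, the statistics page should be reachable through the frontend at the configured stats uri, for example http://<LOAD_BALANCER_PUBLIC_IP>/haproxy?stats (the placeholder stands for the load balancer's external address).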

 





 






Verify Nginx is reachable through the HAProxy load balancer over HTTPS using the public IP:
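A quick check from the command line against the load balancer; since curl does not keep the SERVERID cookie between invocations, repeated requests should alternate between nginx-node1 and nginx-node2 (a hedged example; -k accepts the self-signed certificate):

# curl -k https://<LOAD_BALANCER_PUBLIC_IP>
# curl -k https://<LOAD_BALANCER_PUBLIC_IP>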





Tuesday, July 16, 2019

Deploy Kubernetes cluster 1.11.3 on CentOS 7 in Google Cloud Platform

Summary:


Kubernetes is a cluster and orchestration engine for Docker containers. In other words, Kubernetes is an open-source tool used to orchestrate and manage Docker containers in a cluster environment.


Kubernetes can be installed and deployed using the following methods:

  • Minikube (a single-node Kubernetes cluster)
  • Kops (multi-node Kubernetes setup in AWS)
  • Kubeadm (multi-node cluster on our own premises)



Master Node components:


  • API Server – It provides the Kubernetes API using JSON/YAML over HTTP; the state of API objects is stored in etcd.
  • Scheduler – It is a program on the master node which performs scheduling tasks, such as launching containers on worker nodes based on resource availability.
  • Controller Manager – The main job of the Controller Manager is to monitor replication controllers and create pods to maintain the desired state.
  • etcd – It is a key-value database. It stores the configuration data and state of the cluster.
  • Kubectl utility – It is a command line utility which connects to the API Server on port 6443. It is used by administrators to create pods, services, etc.


Worker Node components:


  • Kubelet – It is an agent which runs on every worker node; it connects to Docker and takes care of creating, starting, and deleting containers.
  • Kube-Proxy – It routes traffic to the appropriate containers based on the IP address and port number of the incoming request. In other words, it is used for port translation.
  • Pod – A pod can be defined as a group of one or more containers deployed together on a single worker node or Docker host.

Installation steps of Kubernetes on CentOS 7


Environment:
Google Cloud Platform
Google Compute Engine ( not GKE )

On Master Node

Disable swap

# swapoff -a

Edit: /etc/fstab

# vi /etc/fstab

Comment out swap

#/root/swap swap swap sw 0 0
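To confirm swap is now off, the Swap line should show 0 used and 0 total:

# free -m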

Add the Kubernetes repo

# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF


!!! Edit /etc/yum.repos.d/google-cloud.repo and add exclude=kube* to prevent the Google Cloud repos from updating the kube* packages !!!
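One way to do this non-interactively (a hedged sketch assuming GNU sed; it appends exclude=kube* after every [section] header in that file):

# sed -i '/^\[/a exclude=kube*' /etc/yum.repos.d/google-cloud.repo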

Disable SELinux

# setenforce 0

Permanently disable SELinux:

# vi /etc/selinux/config

Change enforcing to disabled

SELINUX=disabled
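The same change can be made non-interactively (a hedged one-liner, assuming the file currently contains SELINUX=enforcing):

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config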

Install Kubernetes 1.11.3 and docker


# yum install -y docker kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3 kubernetes-cni-0.6.0 --disableexcludes=kubernetes

Start and enable the Kubernetes and Docker services

# systemctl start docker && systemctl enable docker
# systemctl start kubelet && systemctl enable kubelet

Create the k8s.conf file:

# cat << EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


# sysctl --system

# echo '1' > /proc/sys/net/ipv4/ip_forward



Disable firewall

# systemctl stop firewalld && systemctl disable firewalld

Create kube-config.yml:

# vi kube-config.yml

Add the following to kube-config.yml:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: "v1.11.3"
networking:
  podSubnet: 10.244.0.0/16
apiServerExtraArgs:
  service-node-port-range: 8000-31274

Initialize Kubernetes

# kubeadm init --config kube-config.yml

Copy admin.conf to your home directory

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

!! Use the command below to add worker nodes !!
# kubeadm join 10.138.0.8:6443 --token 96iv27.yb7jsavab8rwqill --discovery-token-ca-cert-hash sha256:33a196539d423d30c416d46d71127537764c58f671ca08e2326386359ba614cb
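If this join command is lost, a new token and the full join command can be printed again on the master node (a hedged note; --print-join-command should be available in kubeadm 1.11):

# kubeadm token create --print-join-command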

Install flannel

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Patch the controller manager for flannel

# vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Add the following flags to the kube-controller-manager command in kube-controller-manager.yaml:

--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
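In the static pod manifest these flags belong in the container's command list; a minimal sketch of the relevant part (other fields and existing flags omitted):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16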

Then restart kubelet:

# systemctl restart kubelet




Verify status of cluster and pods:



# kubectl get nodes

# kubectl get pods --all-namespaces


 

On Worker Node


Disable swap

# swapoff -a

Edit: /etc/fstab

# vi /etc/fstab

Comment out swap

#/root/swap swap swap sw 0 0

Add the Kubernetes repo

# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Disable SELinux

# setenforce 0

Permanently disable SELinux:

# vi /etc/selinux/config

Change enforcing to disabled

SELINUX=disabled

Install Kubernetes 1.11.3 and docker


# yum install -y docker kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3 kubernetes-cni-0.6.0 --disableexcludes=kubernetes

Start and enable the Kubernetes and Docker services

# systemctl start docker && systemctl enable docker
# systemctl start kubelet && systemctl enable kubelet

Create the k8s.conf file:

# cat << EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


# sysctl --system

# echo '1' > /proc/sys/net/ipv4/ip_forward



Disable firewall

# systemctl stop firewalld && systemctl disable firewalld



Join worker nodes to the master node:

# kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>



Verify node status from the master node:


# kubectl get nodes




Conclusion:



Kubernetes 1.11.3 has been installed successfully and two worker nodes have joined the cluster. Now we can create pods and services.