Multi-node Kubernetes on CentOS 7.x with Flannel

Mode: multi-node setup with Flannel, using the Kismatic repo.

— DEPRECATED (may work; not tested lately) —

This is the common script (kube-base.txt) that we run on every machine. Tweak it for your environment (mainly the IPs) and host it at a web location that all of your machines can reach.

# file: kube-base.txt
# add docker repo and install docker
cat > /etc/yum.repos.d/docker.repo << '__EOF__'
[docker]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
__EOF__
 
yum install docker-engine -y
 
mkdir -p /etc/systemd/system/docker.service.d 
 
cat > /etc/systemd/system/docker.service.d/override.conf << '__EOF__'
[Service] 
ExecStart= 
ExecStart=/usr/bin/docker daemon --storage-driver=overlay $DOCKER_NETWORK_OPTIONS
__EOF__
 
systemctl daemon-reload
systemctl enable docker
 
# we'll start docker only after flannel is up and running
# systemctl start docker
 
tee -a /etc/hosts << '__EOF__'
 
192.168.1.150 kube-master
192.168.1.151 kube-node-01
192.168.1.152 kube-node-02
192.168.1.153 kube-node-03
__EOF__
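The override above deliberately keeps `$DOCKER_NETWORK_OPTIONS` in Docker's ExecStart: on CentOS 7 the flannel RPM ships a companion systemd drop-in that fills that variable in from the environment file flanneld writes once it has acquired a subnet (which is why Docker is started only after flanneld). A minimal sketch of that hand-off; the file path and values below are illustrative stand-ins, not what flanneld will actually write on your hosts:

```shell
# Simulate the environment file flanneld writes at startup
# (real path is /run/flannel/subnet.env; these values are examples):
cat > /tmp/subnet.env << '__EOF__'
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.5.1/24
FLANNEL_MTU=1450
__EOF__

# Derive the bridge options Docker ends up with:
. /tmp/subnet.env
DOCKER_NETWORK_OPTIONS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
echo "${DOCKER_NETWORK_OPTIONS}"
```

Once the real flanneld is running you can `cat /run/flannel/subnet.env` to see the values Docker will pick up.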

=== MASTER PREP ===

# replace the URL with wherever you hosted your own tweaked kube-base.txt
curl -s http://ks.sudhaker.com/scripts/kube-base.txt | bash
 
yum install etcd flannel -y
 
cat > /etc/etcd/etcd.conf << '__EOF__'
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
__EOF__
 
systemctl enable etcd
systemctl start etcd
 
sleep 5
 
echo '{"Network": "10.10.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan", "VNI": 1}}' | etcdctl set /atomic.io/network/config
 
etcdctl get /atomic.io/network/config
 
systemctl enable flanneld
systemctl start flanneld
 
# verify
curl -s http://kube-master:2379/v2/keys/atomic.io/network/subnets | python -m json.tool
curl -s http://kube-master:2379/v2/keys/atomic.io/network/config | python -m json.tool
 
systemctl start docker
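One failure mode worth guarding against: if the JSON pushed to `/atomic.io/network/config` is malformed, flanneld will not be able to configure the overlay. A quick sketch that validates the blob locally (same config value as above) before running the `etcdctl set` step:

```shell
# Validate the flannel network config JSON locally before handing it to etcdctl.
FLANNEL_CONFIG='{"Network": "10.10.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan", "VNI": 1}}'
if echo "$FLANNEL_CONFIG" | python -m json.tool > /dev/null 2>&1; then
    echo "config OK"
    # echo "$FLANNEL_CONFIG" | etcdctl set /atomic.io/network/config
else
    echo "config is not valid JSON" >&2
fi
```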

=== NODE PREP ===

# replace the URL with wherever you hosted your own tweaked kube-base.txt
curl -s http://ks.sudhaker.com/scripts/kube-base.txt | bash
 
# verify
curl -s http://kube-master:2379/v2/keys/atomic.io/network/subnets | python -m json.tool
curl -s http://kube-master:2379/v2/keys/atomic.io/network/config | python -m json.tool
 
yum install flannel -y
 
cat > /etc/sysconfig/flanneld << '__EOF__'
FLANNEL_ETCD="http://kube-master:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
__EOF__
 
systemctl enable flanneld
systemctl start flanneld
 
systemctl start docker
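If flanneld on a node starts before it can reach etcd on the master (boot ordering, firewall), it will sit in "activating", logging "etcd cluster is unavailable or misconfigured". A small retry helper can gate the start on etcd answering; this is a sketch, and the endpoint URL assumes the /etc/hosts entries from kube-base.txt:

```shell
# wait_for TRIES CMD...  -- retry CMD up to TRIES times, one second apart.
wait_for() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Usage on a node (/version is etcd2's version endpoint):
# wait_for 30 curl -fs http://kube-master:2379/version > /dev/null && systemctl start flanneld
```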

=== KUBE SETUP ===

Kismatic has done a great job of packaging Kubernetes.

@ MASTER – the default RPM works as-is

yum install https://repos.kismatic.com/el/7/x86_64/kismatic-repo-el-7-1.x86_64.rpm -y
yum install kubernetes-master -y
 
for SERVICE in kube-apiserver kube-scheduler kube-controller-manager kubelet
do 
    systemctl restart $SERVICE
    systemctl enable $SERVICE
done

@ NODES – a minor config tweak is needed

yum install https://repos.kismatic.com/el/7/x86_64/kismatic-repo-el-7-1.x86_64.rpm -y
yum install kubernetes-node -y
 
cat > /etc/kubernetes/node/kube-proxy.conf << '__EOF__'
###
# kubernetes proxy config
 
# default config should be adequate
KUBE_ETCD_SERVERS=""
KUBE_LOGTOSTDERR=""
KUBE_LOG_LEVEL=""
 
#Master api server http (--insecure-port) port=8080
#Master api server https (--secure-port) port=6443
KUBE_MASTER_SERVER="--master=http://kube-master:8080"
 
KUBE_PROXY_ARGS=""
__EOF__
 
cat > /etc/kubernetes/node/kubelet.conf << '__EOF__'
###
# kubernetes kubelet (node) config
 
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
 
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
 
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME=""
 
# location of the api-server
KUBELET_API_SERVERS="--api_servers=http://kube-master:8080"
 
# Add your own!
KUBELET_ARGS="--container_runtime=docker --config=/etc/kubernetes/manifests"
__EOF__
 
for SERVICE in kube-proxy kubelet
do 
    systemctl restart $SERVICE
    systemctl enable $SERVICE
done

=== VERIFY ===

[root@kube-master ~]# kubectl get nodes
NAME           STATUS    AGE
kube-node-01   Ready     1m
kube-node-02   Ready     38s
kube-node-03   Ready     12s

=== ADD DASHBOARD ===

wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Because the API server here runs insecurely, the dashboard's auto-discovery won't work; specify http://MASTER_IP:8080 explicitly. A hostname (or private DNS name) won't work either; use the IP address.

          args:
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
            - --apiserver-host=http://192.168.1.150:8080
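Instead of hand-editing the manifest, the commented line can be swapped in with sed. A sketch against a stand-in fragment; run the same sed against the downloaded kubernetes-dashboard.yaml, substituting your own master IP:

```shell
MASTER_IP=192.168.1.150   # this walkthrough's master; change for your setup

# Stand-in for the relevant fragment of kubernetes-dashboard.yaml:
cat > /tmp/dashboard-args.yaml << '__EOF__'
          args:
            # - --apiserver-host=http://my-address:port
__EOF__

# Uncomment the flag and point it at the master's insecure port:
sed -i "s|# - --apiserver-host=http://my-address:port|- --apiserver-host=http://${MASTER_IP}:8080|" /tmp/dashboard-args.yaml
cat /tmp/dashboard-args.yaml
```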

kubectl create namespace kube-system
kubectl create -f kubernetes-dashboard.yaml

The dashboard is then available at:

http://{MASTER_IP}:8080/ui/
http://192.168.1.150:8080/ui/


=== RPM DETAILS (for reference) ===

[root@kube-master ~]# rpm -ql kubernetes-master
/etc/kubernetes/manifests/.gitkeep
/etc/kubernetes/master/apiserver.conf
/etc/kubernetes/master/config.conf
/etc/kubernetes/master/controller-manager.conf
/etc/kubernetes/master/kubelet.conf
/etc/kubernetes/master/scheduler.conf
/lib/systemd/system/kube-apiserver.service
/lib/systemd/system/kube-controller-manager.service
/lib/systemd/system/kube-scheduler.service
/lib/systemd/system/kubelet.service
/usr/bin/hyperkube
/usr/bin/kube-apiserver
/usr/bin/kube-controller-manager
/usr/bin/kube-scheduler
/usr/bin/kubectl
/usr/bin/kubelet
 
[root@kube-node-03 ~]# rpm -ql kubernetes-node
/etc/kubernetes/manifests/.gitkeep
/etc/kubernetes/node/config.conf
/etc/kubernetes/node/kube-proxy.conf
/etc/kubernetes/node/kubelet.conf
/lib/systemd/system/kube-proxy.service
/lib/systemd/system/kubelet.service
/usr/bin/kube-proxy
/usr/bin/kubelet

15 thoughts on “Multi-node Kubernetes on CentOS 7.x with Flannel”

  1. Hi! could you prepare an article about using Flocker in Kubernetes or better integration with OpenShift? Thanks!

  2. Dear Sudhakar,
    I get this error on node-01 when I try to start the flanneld service. Kindly suggest.

    [root@kube-node-01 ~]# cat /etc/redhat-release
    CentOS Linux release 7.1.1503 (Core)
    [root@kube-node-01 ~]#

    [root@kube-node-01 ~]# systemctl status flanneld.service -l
    ● flanneld.service - Flanneld overlay address etcd agent
    Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
    Active: activating (start) since Sun 2016-05-08 00:50:58 IST; 1min 23s ago
    Main PID: 1503 (flanneld)
    Memory: 8.4M
    CGroup: /system.slice/flanneld.service
    └─1503 /usr/bin/flanneld -etcd-endpoints=http://kube-master:2379 -etcd-prefix=/atomic.io/network

    May 08 00:52:12 kube-node-01 flanneld[1503]: E0508 00:52:12.241148 01503 network.go:53] Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured

  3. For the ‘curl’ command under ==Master Prep== I get a timeout for the script download. Are they still at this location?

    1. That script is hosted internally in my LAN.

      The first code block has the entire content of the file kube-base.txt; tweak it and copy it somewhere local. Your IPs etc. will be different, so you can't use my file anyway.

  4. Dear Sudhaker,
    My original comment was corrected when I returned an hour later… the timeout issue went away and all worked fine. THANKS for these very helpful posts, your time and effort in this problem space.

    Perhaps I am missing a very simple thing, which is likely as I am somewhat new to OpenShift. I am unable to access port 8080. All is up and running per the OpenShift management display. In the topographical view I see the Deployment Config, Replication Controller, 2 pods, the Service, and the route. All appear to be fine.
    My local host is my internal IP address 10.xx.xx.62, the pods are on 172.17.0.4 & 172.17.0.5, docker appears to be 172.xx.xx.238:5000, and my-node-app is:

    name my-node-app
    Hostname: my-node-app.apps.sudhaker.com – exposed on router ‘router’ 3 hours ago
    Path: none (IS THIS THE ISSUE ??? If so, how does one set this path value or more to the point “where”.)
    Service: my-node-app
    Target Port: 8080-tcp
    This target port will route to Service Port 8080 → Container Port 8080 (TCP).
    TLS Settings TLS is not enabled for this route

    1. Dewittm,

      It’s great that you took K8S instruction and used it to run a multi-node Openshift V3. I haven’t played with this possible setup yet and would be very interested in learning from your experience. I can be reached at

      Openshift recommends using Ansible for their multi-node deployment and uses OvS (Open vSwitch) under the hood for their SDN. I was planning to use a similar stack for my manual multi-node setup (it has been sitting on my TODO list for a bit).

      I love manual setup from scratch, which gives me full exposure to each and every component. Great to meet and work with like-minded people.

      Cheers,
      Sudhaker

  5. My apologies for my last two posts! I worked through the obvious first step: I “assumed” that the host (i.e. 10.32.xxx.xxx) had been mapped from the application: WRONG ON MY PART. I found the Cluster Address and found the “Colorful” app on port 8080. I will continue working through your site and be more considerate of your time by working longer before posting. THANKS AGAIN for all. – Dewittm

  6. Hi Sudhaker,

    This is the best Kubernetes install guide I’ve read. The latest docker release seems to have broken the master install.

    The kube-base.txt script installs docker 1.11, but the kubernetes-master RPM depends on docker 1.10. So I had to ‘rpm -e docker-engine’ and docker-engine-selinux before installing the master package.

    But then you get:
    [root@kube-master jkong]# systemctl status docker.service
    ● docker.service - Docker Application Container Engine
    Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/docker.service.d
    └─flannel.conf
    /etc/systemd/system/docker.service.d
    └─override.conf
    Active: failed (Result: exit-code) since Sun 2016-07-10 19:36:48 PDT; 4s ago
    Docs: http://docs.docker.com
    Process: 2635 ExecStart=/usr/bin/docker daemon --storage-driver=overlay $DOCKER_NETWORK_OPTIONS -H fd:// (code=exited, status=1/FAILURE)
    Main PID: 2635 (code=exited, status=1/FAILURE)

    Jul 10 19:36:48 kube-master systemd[1]: Starting Docker Application Container Engine…
    Jul 10 19:36:48 kube-master systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
    Jul 10 19:36:48 kube-master systemd[1]: Failed to start Docker Application Container Engine.
    Jul 10 19:36:48 kube-master systemd[1]: Unit docker.service entered failed state.
    Jul 10 19:36:48 kube-master systemd[1]: docker.service failed.

    Any suggestion how to fix?

  7. Hey Sudhaker,

    Found a fix for the above. I removed the ‘-H fd://’ option from the override.conf file; that removes the socket-activation option for systemd systems.

    1. Ouch! Kismatic was recently acquired by Apprenda. Let me ask those guys if they still maintain the repo.

      My focus is mostly on Mesos these days, so I have not been following K8S closely.
