Caching YUM Proxy

I often need VMs with minimal CentOS + Docker for my learning experiments in my basement lab. For example, an experimental Mesosphere DC/OS cluster requires 10+ nodes (one boot, three masters, five+ agents, one public agent). I’ve automated the build process using an Ansible playbook & kickstart to make my life easier (just execute a shell script, and the entire cluster farm is ready in about 20 minutes).

So far so good – but a single iteration of such a build makes over 550 URL requests and transfers about 300 MB of files from various YUM repositories. That’s why I wrote this caching YUM proxy, which considerably speeds up my build process. It’s also a respectful gesture to the mirror providers who donate their valuable resources to the community.

Here is the list of repo mappings that I needed.

http://centos.mirror.constant.com/7/os/x86_64/
mapped to => http://local.sudhaker.com/centos-7-os/
 
http://centos.mirror.constant.com/7/updates/x86_64/
mapped to => http://local.sudhaker.com/centos-7-updates/
 
http://centos.mirror.constant.com/7/extras/x86_64/
mapped to => http://local.sudhaker.com/centos-7-extras/
 
http://dl.fedoraproject.org/pub/epel/7/x86_64/
mapped to => http://local.sudhaker.com/epel-7/
 
http://yum.dockerproject.org/repo/main/centos/7/
mapped to => http://local.sudhaker.com/dockerproject/

And an nginx configuration along the following lines did the magic!
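Here is a minimal sketch of what such a configuration might look like, assuming nginx’s stock proxy_cache module; the cache path, sizing, and expiry values below are illustrative placeholders.

# hypothetical /etc/nginx/conf.d/yum-proxy.conf
proxy_cache_path /var/cache/nginx/yum levels=1:2 keys_zone=yum:64m
                 max_size=4g inactive=30d;

server {
    listen 80;
    server_name local.sudhaker.com;

    # serve from cache; anything missing is fetched from the mirror once
    proxy_cache yum;
    proxy_cache_valid 200 302 7d;

    location /centos-7-os/ {
        proxy_pass http://centos.mirror.constant.com/7/os/x86_64/;
    }
    location /centos-7-updates/ {
        proxy_pass http://centos.mirror.constant.com/7/updates/x86_64/;
    }
    location /centos-7-extras/ {
        proxy_pass http://centos.mirror.constant.com/7/extras/x86_64/;
    }
    location /epel-7/ {
        proxy_pass http://dl.fedoraproject.org/pub/epel/7/x86_64/;
    }
    location /dockerproject/ {
        proxy_pass http://yum.dockerproject.org/repo/main/centos/7/;
    }
}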

Wireless AP on Raspberry Pi 2 & Alpine Linux

Alpine Linux for the Raspberry Pi is my favorite, mainly because of its “diskless mode”, which ensures that my SD card isn’t written to except at boot and on “lbu commit”, minimizing wear and tear on the media.

This article is phase 1 of building an AP with a MITM proxy + ssl_bump. Stay tuned for the squid3 + ssl_bump and other configuration.

Setup: Raspberry Pi 2 Model B + alpine-rpi-3.4.2-armhf.rpi.tar.gz

The Alpine install on the Pi was pretty straightforward, except for the following issues:

#1 A DHCP timeout issue that randomly prevented the LAN interface from getting a valid IP address. The fix is adding “udhcpc_opts -t 12” to the “eth0” section of “/etc/network/interfaces” (as shown below).

auto eth0
iface eth0 inet dhcp
        hostname pi-router
        udhcpc_opts -t 12

#2 Remote login for “root” was denied by default. The fix is changing the “PermitRootLogin” flag to “yes” in “/etc/ssh/sshd_config”.

sed -i -e 's|^#PermitRootLogin .*$|PermitRootLogin yes|' /etc/ssh/sshd_config

Now you just need to run a script like the one sketched below to turn your Pi into a cool Wi-Fi access point.

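A minimal sketch of what such a script might cover, assuming hostapd and dnsmasq from the Alpine repos; the SSID, passphrase, and addressing below are placeholders.

# sketch only: hostapd + dnsmasq AP, NAT out through eth0
apk add hostapd dnsmasq iptables

cat > /etc/hostapd/hostapd.conf << '__EOF__'
interface=wlan0
ssid=pi-router
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=ChangeMe123
rsn_pairwise=CCMP
__EOF__

cat >> /etc/network/interfaces << '__EOF__'

auto wlan0
iface wlan0 inet static
        address 192.168.42.1
        netmask 255.255.255.0
__EOF__

# hand out addresses on the wireless side
echo 'dhcp-range=192.168.42.50,192.168.42.150,12h' >> /etc/dnsmasq.conf

# forward and masquerade towards the wired uplink
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/etc/init.d/iptables save

rc-update add hostapd && rc-service hostapd start
rc-update add dnsmasq && rc-service dnsmasq start

# diskless mode: persist the changes
lbu commit -d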

Simple ‘Hello World’ App on DC/OS

Prerequisites: You have a catch-all DNS configured for your given FQN (example: *.mesos.sudhaker.com) that resolves to your public node(s).

Install ‘external’ load balancer.

tee marathon-lb-external.json << '__EOF__'
{ "marathon-lb":{ "name": "marathon-lb-external", "instances": 1, "haproxy-group": "external", "role": "slave_public", "mem": 512, "cpus": 1} }
__EOF__
 
dcos package install --options=marathon-lb-external.json --yes marathon-lb

Install ‘internal’ load balancer (optional; not needed here).

tee marathon-lb-internal.json << '__EOF__'
{ "marathon-lb":{ "name": "marathon-lb-internal", "instances": 1, "haproxy-group": "internal", "role": "", "mem": 512, "cpus": 1, "bind-http-https": false} }
__EOF__
 
dcos package install --options=marathon-lb-internal.json --yes marathon-lb

Install application ‘hello-world’.

export FQN=mesos.sudhaker.com
tee dockercloud-hello-world.json << __EOF__
{
  "id": "dockercloud-hello-world",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 80 }
      ],
      "forcePullImage":true
    }
  },
  "instances": 2,
  "cpus": 0.1,
  "mem": 128,
  "healthChecks": [{
      "protocol": "HTTP",
      "path": "/",
      "portIndex": 0,
      "timeoutSeconds": 10,
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "maxConsecutiveFailures": 10
  }],
  "labels":{
    "HAPROXY_GROUP":"external",
    "HAPROXY_0_VHOST":"dockercloud-hello-world.${FQN}"
  }
}
__EOF__
 
dcos marathon app add dockercloud-hello-world.json

Browse to http://dockercloud-hello-world.${FQN}
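If the wildcard DNS hasn’t propagated yet, you can exercise the HAProxy vhost routing directly against a public agent (PUBLIC_NODE_IP below is a placeholder):

curl -H "Host: dockercloud-hello-world.${FQN}" http://PUBLIC_NODE_IP/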

[Screenshot: dockercloud-hello-world]

Multi-node Mesosphere DC/OS 1.7 on CentOS 7.x

Setup: CentOS 7.2 Minimal + SELinux Disabled + Firewall Disabled + IPv6 Disabled

mesos-boot          2 CPU, 4GB RAM, 60GB HDD
mesos-master-01     2 CPU, 4GB RAM, 60GB HDD + 200GB HDD
mesos-master-02     2 CPU, 4GB RAM, 60GB HDD + 200GB HDD
mesos-master-03     2 CPU, 4GB RAM, 60GB HDD + 200GB HDD
mesos-node-01       4 CPU, 16GB RAM, 200GB HDD
mesos-node-02       4 CPU, 16GB RAM, 200GB HDD
mesos-node-03       4 CPU, 16GB RAM, 200GB HDD
mesos-node-04       4 CPU, 16GB RAM, 200GB HDD
mesos-node-05       4 CPU, 16GB RAM, 200GB HDD
mesos-node-06       4 CPU, 16GB RAM, 200GB HDD
mesos-node-07       2 CPU, 4GB RAM, 60GB HDD (public)

Common script – To be run on every machine

# add docker repo and install docker
cat > /etc/yum.repos.d/docker.repo << '__EOF__'
[docker]
name=Docker Repository - Centos $releasever
baseurl=http://yum.dockerproject.org/repo/main/centos/$releasever
enabled=1
gpgcheck=1
gpgkey=http://yum.dockerproject.org/gpg
__EOF__
 
yum install docker-engine -y
yum clean all
 
mkdir -p /etc/systemd/system/docker.service.d 
 
cat > /etc/systemd/system/docker.service.d/override.conf << '__EOF__'
[Service] 
ExecStart= 
ExecStart=/usr/bin/docker daemon --storage-driver=overlay $DOCKER_NETWORK_OPTIONS -H fd:// 
__EOF__
 
systemctl daemon-reload
systemctl enable docker
 
#systemctl start docker
 
tee -a /etc/hosts << '__EOF__'
 
192.168.1.160 mesos-boot
 
192.168.1.161 mesos-master-01
192.168.1.162 mesos-master-02
192.168.1.163 mesos-master-03
 
192.168.1.164 mesos-agent-01
192.168.1.165 mesos-agent-02
192.168.1.166 mesos-agent-03
192.168.1.167 mesos-agent-04
192.168.1.168 mesos-agent-05
192.168.1.169 mesos-agent-06
 
192.168.1.170 mesos-agent-07
 
__EOF__
 
yum install -y tar xz unzip curl ipset nfs-utils
yum clean all
 
groupadd nogroup

### To be run on bootstrap host | mesos-boot

#=== mesos-bootstrap-generate_config ===

 
mkdir /opt/dcos-setup && cd /opt/dcos-setup && curl -O https://downloads.dcos.io/dcos/testing/master/dcos_generate_config.sh
 
mkdir -p genconf
 
cat > genconf/ip-detect << '__EOF__'
#!/usr/bin/env bash
set -o nounset -o errexit
export PATH=/usr/sbin:/usr/bin:$PATH
echo $(ip addr show ens192 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
__EOF__
 
chmod 755 genconf/ip-detect
 
# your custom config file
cat > genconf/config.yaml << '__EOF__'
---
bootstrap_url: http://mesos-boot:8888       
cluster_name: dcos
exhibitor_storage_backend: static
ip_detect_filename: genconf/ip-detect
master_discovery: static
master_list:
- 192.168.1.161
- 192.168.1.162
- 192.168.1.163
resolvers:
- 8.8.4.4
- 8.8.8.8
__EOF__
 
bash dcos_generate_config.sh

#=== mesos-bootstrap-install-nginx ===

docker run -d --restart=unless-stopped -p 8888:80 -v /opt/dcos-setup/genconf/serve:/usr/share/nginx/html:ro --name=dcos-bootstrap-nginx nginx:alpine
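
A quick sanity check that the bootstrap artifacts are being served:

curl -I http://mesos-boot:8888/dcos_install.sh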

#=== mesos-bootstrap-install-dcos-cli (it can be skipped or done later) ===

yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm -y
 
yum install python python-pip python-virtualenv python34 -y
 
pip install --upgrade pip
pip install --upgrade virtualenv
 
mkdir -p /opt/dcos
cd /opt/dcos
virtualenv --python=python3.4 --quiet .
 
source bin/activate && pip install --upgrade pip dcoscli httpie
 
dcos config set core.dcos_url http://192.168.1.161
dcos auth login
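
Once the cluster is up, a couple of quick sanity checks from the CLI (agents will only appear after the node installs below):

dcos node
dcos service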

#=== mesos-bootstrap-nfs-server (it can be skipped or done later) ===

#yum install nfs-utils
mkdir /var/nfs_share
chmod -R 777 /var/nfs_share
 
tee -a /etc/exports << '__EOF__'
/var/nfs_share    192.168.1.160/28(rw,sync,no_root_squash,no_all_squash)
__EOF__
 
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
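
Verify that the export is active:

exportfs -v
showmount -e localhost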

#=== mesos-master-install ===

mkdir /tmp/dcos && cd /tmp/dcos && curl -O http://mesos-boot:8888/dcos_install.sh && bash dcos_install.sh master

or

for n in 01 02 03; do
    ssh mesos-master-${n} "mkdir /tmp/dcos && cd /tmp/dcos && curl -O http://mesos-boot:8888/dcos_install.sh && bash dcos_install.sh master"
done

URLs
http://192.168.1.161:8181/exhibitor/v1/ui/index.html (check installation status here while masters are baking)

[Screenshot: dcos-exhibitor]

http://192.168.1.161/ (check installation status here while nodes are baking)

[Screenshot: dcos-v1.7]

FYI: You won’t see nodes yet.

#=== mesos-slave-node-install ===

mkdir -p /tmp/dcos && cd /tmp/dcos && curl -O http://mesos-boot:8888/dcos_install.sh && bash dcos_install.sh slave

or

for n in 01 02 03 04 05 06; do
    ssh mesos-agent-${n} "mkdir -p /tmp/dcos && cd /tmp/dcos && curl -O http://mesos-boot:8888/dcos_install.sh && bash dcos_install.sh slave"
done

#=== mesos-slave-public-node-install ===

mkdir -p /tmp/dcos && cd /tmp/dcos && curl -O http://mesos-boot:8888/dcos_install.sh && bash dcos_install.sh slave_public && exit

or

for n in 07; do
    ssh mesos-agent-${n} "mkdir -p /tmp/dcos && cd /tmp/dcos && curl -O http://mesos-boot:8888/dcos_install.sh && bash dcos_install.sh slave_public"
done

#=== mesos-all-nodes-nfs-install ===

And mount the share on the agents, if we’ve shared NFS from the boot node:

for n in 01 02 03 04 05 06 07; do
    ssh mesos-agent-${n} "yum install nfs-utils -y && yum clean all && mkdir -p /mnt/nfs && echo "mesos-boot:/var/nfs_share /mnt/nfs nfs hard,bg,tcp,nointr,noac 0 0" >> /etc/fstab && mount /mnt/nfs"
done
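
And confirm the mount on each agent:

for n in 01 02 03 04 05 06 07; do
    ssh mesos-agent-${n} "df -h /mnt/nfs"
done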

#=== node-uninstall ===

/opt/mesosphere/bin/pkgpanda uninstall && rm -fr /opt/mesosphere

Multi-node Kubernetes on CentOS 7.x with Flannel

Mode: Multi-node setup, with Flannel, using the Kismatic repo.

This is the common script (kube-base.txt) that we run on every machine. Please tweak it for your environment (mainly IPs) and drop it on some web location.

# file: kube-base.txt
# add docker repo and install docker
cat > /etc/yum.repos.d/docker.repo << '__EOF__'
[docker]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
__EOF__
 
yum install docker-engine -y
 
mkdir -p /etc/systemd/system/docker.service.d 
 
cat > /etc/systemd/system/docker.service.d/override.conf << '__EOF__'
[Service] 
ExecStart= 
ExecStart=/usr/bin/docker daemon --storage-driver=overlay $DOCKER_NETWORK_OPTIONS
__EOF__
 
systemctl daemon-reload
systemctl enable docker
 
# we'll start docker only after flannel is up and running
# systemctl start docker
 
tee -a /etc/hosts << '__EOF__'
 
192.168.1.150 kube-master
192.168.1.151 kube-node-01
192.168.1.152 kube-node-02
192.168.1.153 kube-node-03
__EOF__

=== MASTER PREP ===

# tweak it as per your setup
curl -s http://ks.sudhaker.com/scripts/kube-base.txt | bash
 
yum install etcd flannel -y
 
cat > /etc/etcd/etcd.conf << '__EOF__'
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
__EOF__
 
systemctl enable etcd
systemctl start etcd
 
sleep 5
 
echo '{"Network": "10.10.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan", "VNI": 1}}' | etcdctl set /atomic.io/network/config
 
etcdctl get /atomic.io/network/config
 
systemctl enable flanneld
systemctl start flanneld
 
# verify
curl -s http://kube-master:2379/v2/keys/atomic.io/network/subnets | python -m json.tool
curl -s http://kube-master:2379/v2/keys/atomic.io/network/config | python -m json.tool
 
systemctl start docker
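
At this point docker0 should land inside the flannel network; a quick check, assuming the stock flanneld unit that writes its lease to /run/flannel/subnet.env:

cat /run/flannel/subnet.env
ip -4 addr show flannel.1
ip -4 addr show docker0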

=== NODE PREP ===

# tweak it as per your setup
curl -s http://ks.sudhaker.com/scripts/kube-base.txt | bash
 
# verify
curl -s http://kube-master:2379/v2/keys/atomic.io/network/subnets | python -m json.tool
curl -s http://kube-master:2379/v2/keys/atomic.io/network/config | python -m json.tool
 
yum install flannel -y
 
cat > /etc/sysconfig/flanneld << '__EOF__'
FLANNEL_ETCD="http://kube-master:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
__EOF__
 
systemctl enable flanneld
systemctl start flanneld
 
systemctl start docker

=== KUBE SETUP ===

Kismatic has done a great job in packaging Kubernetes.

@ MASTER – the default RPM works as-is

yum install https://repos.kismatic.com/el/7/x86_64/kismatic-repo-el-7-1.x86_64.rpm -y
yum install kubernetes-master -y
 
for SERVICE in kube-apiserver kube-scheduler kube-controller-manager kubelet
do 
    systemctl restart $SERVICE
    systemctl enable $SERVICE
done

@ NODES – minor config tweaking is needed

yum install https://repos.kismatic.com/el/7/x86_64/kismatic-repo-el-7-1.x86_64.rpm -y
yum install kubernetes-node -y
 
cat > /etc/kubernetes/node/kube-proxy.conf << '__EOF__'
###
# kubernetes proxy config
 
# default config should be adequate
KUBE_ETCD_SERVERS=""
KUBE_LOGTOSTDERR=""
KUBE_LOG_LEVEL=""
 
#Master api server http (--insecure-port) port=8080
#Master api server https (--secure-port) port=6443
KUBE_MASTER_SERVER="--master=http://kube-master:8080"
 
KUBE_PROXY_ARGS=""
__EOF__
 
cat > /etc/kubernetes/node/kubelet.conf << '__EOF__'
###
# kubernetes kubelet (node) config
 
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
 
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
 
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME=""
 
# location of the api-server
KUBELET_API_SERVERS="--api_servers=http://kube-master:8080"
 
# Add your own!
KUBELET_ARGS="--container_runtime=docker --config=/etc/kubernetes/manifests"
__EOF__
 
for SERVICE in kube-proxy kubelet
do 
    systemctl restart $SERVICE
    systemctl enable $SERVICE
done

=== VERIFY ===

[root@kube-master ~]# kubectl get nodes
NAME           STATUS    AGE
kube-node-01   Ready     1m
kube-node-02   Ready     38s
kube-node-03   Ready     12s

=== ADD DASHBOARD ===

wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

The API server is insecure (plain HTTP), so the dashboard’s default auto-discovery won’t work; specify http://master_ip:8080 explicitly. A hostname (or private DNS name) won’t work either.

          args:
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
            - --apiserver-host=http://192.168.1.150:8080

kubectl create namespace kube-system
kubectl create -f kubernetes-dashboard.yaml
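
Verify that the dashboard pod comes up:

kubectl get pods --namespace=kube-system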

http://{MASTER_IP}:8080/ui/
http://192.168.1.150:8080/ui/

[Screenshot: kube-ui-1.2.0]

=== RPM DETAILS (for reference) ===

[root@kube-master ~]# rpm -ql kubernetes-master
/etc/kubernetes/manifests/.gitkeep
/etc/kubernetes/master/apiserver.conf
/etc/kubernetes/master/config.conf
/etc/kubernetes/master/controller-manager.conf
/etc/kubernetes/master/kubelet.conf
/etc/kubernetes/master/scheduler.conf
/lib/systemd/system/kube-apiserver.service
/lib/systemd/system/kube-controller-manager.service
/lib/systemd/system/kube-scheduler.service
/lib/systemd/system/kubelet.service
/usr/bin/hyperkube
/usr/bin/kube-apiserver
/usr/bin/kube-controller-manager
/usr/bin/kube-scheduler
/usr/bin/kubectl
/usr/bin/kubelet
 
[root@kube-node-03 ~]# rpm -ql kubernetes-node
/etc/kubernetes/manifests/.gitkeep
/etc/kubernetes/node/config.conf
/etc/kubernetes/node/kube-proxy.conf
/etc/kubernetes/node/kubelet.conf
/lib/systemd/system/kube-proxy.service
/lib/systemd/system/kubelet.service
/usr/bin/kube-proxy
/usr/bin/kubelet

My kickstart package selection for CentOS 7.2

I figured that kickstart was ignoring my “--nodefaults” request, so I ended up peeking into “repodata/*-comps.xml.gz” and excluding the “default” & “optional” packages individually (many will still show up because of dependencies).
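
If you want to regenerate the exclusion list for your own media, a rough one-liner like this works (the sed/grep patterns are a quick hack, not a proper XML parse):

# list the default/optional members of @core as kickstart exclusions
zcat repodata/*-comps.xml.gz \
  | sed -n '/<id>core<\/id>/,/<\/group>/p' \
  | grep -E 'type="(default|optional)"' \
  | sed -e 's/.*>\(.*\)<.*/-\1/'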

Here is my hack for getting a more minimal CentOS.

%packages --excludedocs --nobase
@core --nodefaults
chrony
lvm2
-aic94xx-firmware
-alsa-firmware
-bfa-firmware
-dracut-config-rescue
-ivtv-firmware
-iwl100-firmware
-iwl1000-firmware
-iwl105-firmware
-iwl135-firmware
-iwl2000-firmware
-iwl2030-firmware
-iwl3160-firmware
-iwl3945-firmware
-iwl4965-firmware
-iwl5000-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6000g2a-firmware
-iwl6000g2b-firmware
-iwl6050-firmware
-iwl7260-firmware
-iwl7265-firmware
-kernel-tools
-libertas-sd8686-firmware
-libertas-sd8787-firmware
-libertas-usb8388-firmware
-libsysfs
-linux-firmware
-microcode_ctl
-NetworkManager
-NetworkManager-team
-NetworkManager-tui
-postfix
-ql2100-firmware
-ql2200-firmware
-ql23xx-firmware
-rdma
-dracut-config-generic
-dracut-fips
-dracut-fips-aesni
-dracut-network
-openssh-keycat
-selinux-policy-mls
-tboot
-rubygem-abrt
-abrt-addon-ccpp
-abrt-addon-python
-abrt-cli
-abrt-console-notification
-bash-completion
-blktrace
-bridge-utils
-bzip2
#-chrony
-cryptsetup
-dmraid
-dosfstools
-ethtool
-fprintd-pam
-gnupg2
-hunspell
-hunspell-en
-kpatch
-ledmon
-libaio
-libreport-plugin-mailx
-libstoragemgmt
#-lvm2
-man-pages
-man-pages-overrides
-mdadm
-mlocate
-mtr
-nano
-ntpdate
-pinfo
-plymouth
-pm-utils
-rdate
-rfkill
-rng-tools
-rsync
-scl-utils
-setuptool
-smartmontools
-sos
-sssd-client
-strace
-sysstat
-systemtap-runtime
-tcpdump
-tcsh
-teamd
-time
-unzip
-usbutils
-vim-enhanced
-virt-what
-wget
-which
-words
-xfsdump
-xz
-yum-langpacks
-yum-plugin-security
-yum-utils
-zip
-acpid
-audispd-plugins
-augeas
-brltty
-ceph-common
-cryptsetup-reencrypt
-device-mapper-persistent-data
-dos2unix
-dumpet
-genisoimage
-gpm
-i2c-tools
-kabi-yum-plugins
-libatomic
-libcgroup
-libcgroup-tools
-libitm
-libstoragemgmt-netapp-plugin
-libstoragemgmt-nstor-plugin
-libstoragemgmt-smis-plugin
-libstoragemgmt-targetd-plugin
-libstoragemgmt-udev
-linuxptp
-logwatch
-mkbootdisk
-mtools
-ncurses-term
-ntp
-oddjob
-pax
-prelink
-PyPAM
-python-volume_key
-redhat-lsb-core
-redhat-upgrade-dracut
-redhat-upgrade-tool
-rsyslog-gnutls
-rsyslog-gssapi
-rsyslog-relp
-sgpio
-sox
-squashfs-tools
-star
-tmpwatch
-udftools
-uuidd
-volume_key
-wodim
-x86info
-yum-plugin-aliases
-yum-plugin-changelog
-yum-plugin-tmprepo
-yum-plugin-verify
-yum-plugin-versionlock
-zsh

%end

And here is what I get: 232 packages and slightly less than 750 MB installed.

[root@dell-cs24-n1 ~]# rpm -qa | wc -l
232
[root@dell-cs24-n1 ~]# df -k
Filesystem                 1K-blocks   Used Available Use% Mounted on
/dev/mapper/vg_all-lv_root 206292968 760604 195030220   1% /

I’m struggling to understand the meaning of MINIMAL.

In case you are curious, the following 9 default/optional packages got included in this minimal build because of dependencies.

device-mapper-persistent-data
dracut-network
ethtool
gnupg2
libaio
linux-firmware
virt-what
which
xz

Install the latest Mesos on CentOS 7.x

Prerequisites: CentOS 7.x minimal install (tested on 7.2) + docker

Setup: single node (all-in-one), Mesosphere repo

yum install http://repos.mesosphere.io/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm -y

yum install docker mesosphere-zookeeper mesos marathon chronos -y
 
echo 'docker,mesos' > /etc/mesos-slave/containerizers
 
for SERVICES in docker zookeeper mesos-master mesos-slave marathon chronos; do
    systemctl enable $SERVICES
    systemctl restart $SERVICES
done
 
firewall-cmd --permanent --zone=public --add-port=5050/tcp # mesos-master
firewall-cmd --permanent --zone=public --add-port=5051/tcp # mesos-slave
firewall-cmd --permanent --zone=public --add-port=8080/tcp # marathon
firewall-cmd --permanent --zone=public --add-port=4400/tcp # chronos
firewall-cmd --reload

And then browse to http://IP_ADDRESS:8080/
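
Optionally, a quick smoke test through Marathon’s REST API (a hypothetical throw-away app; substitute your IP_ADDRESS):

curl -X POST http://IP_ADDRESS:8080/v2/apps \
     -H 'Content-Type: application/json' \
     -d '{ "id": "/sleepy", "cmd": "sleep 3600", "cpus": 0.1, "mem": 32, "instances": 1 }'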

[Screenshot: mesos_marathon]

Build a Docker image using OpenShift S2I

Source-To-Image (S2I), as the name implies, is responsible for transforming your application source into an executable Docker image that we can later run inside OpenShift v3 or directly via `docker run`.

We can find the main project on GitHub, along with a bunch of STI templates for language-specific builds:

https://github.com/openshift/sti-php
https://github.com/openshift/sti-ruby
https://github.com/openshift/sti-wildfly
https://github.com/openshift/sti-perl
https://github.com/openshift/sti-python
https://github.com/openshift/sti-nodejs

We need to get the s2i tool from the GitHub releases page and include it in PATH.

[sudhaker@dell-cs24-n2 ~]$ wget https://github.com/openshift/source-to-image/releases/download/v1.0.5/source-to-image-v1.0.5-b731f95-linux-amd64.tar.gz
...
[sudhaker@dell-cs24-n2 ~]$ ll
total 5860
drwxrwxr-x. 2 sudhaker sudhaker    4096 Feb 26 21:04 bin
-rw-rw-r--. 1 sudhaker sudhaker 5992810 Feb 18 13:23 source-to-image-v1.0.5-b731f95-linux-amd64.tar.gz
[sudhaker@dell-cs24-n2 ~]$ cd bin; tar zxf ../source-to-image-*.tar.gz; rm ../source-to-image-*.tar.gz; cd -

And then proceed with cooking a Docker image from a source repository.

[sudhaker@dell-cs24-n2 ~]$ sudo docker images | grep -v openshift
REPOSITORY                                   TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
[sudhaker@dell-cs24-n2 ~]$ sudo ~/bin/s2i build git://github.com/sudhaker/my-node-app openshift/nodejs-010-centos7 my-node-app
I0228 19:19:01.037320 01056 clone.go:32] Downloading "git://github.com/sudhaker/my-node-app" ...
I0228 19:19:01.356601 01056 install.go:236] Using "assemble" installed from "image:///usr/libexec/s2i/assemble"
I0228 19:19:01.356684 01056 install.go:236] Using "run" installed from "image:///usr/libexec/s2i/run"
I0228 19:19:01.356740 01056 install.go:236] Using "save-artifacts" installed from "image:///usr/libexec/s2i/save-artifacts"
---> Installing application source
---> Building your Node application from source
E0228 19:19:05.417666 01056 util.go:91] npm info it worked if it ends with ok
E0228 19:19:05.417827 01056 util.go:91] npm info using npm@1.4.28
E0228 19:19:05.417915 01056 util.go:91] npm info using node@v0.10.40
E0228 19:19:05.608556 01056 util.go:91] npm info preinstall my-node-app@0.0.1
E0228 19:19:05.618591 01056 util.go:91] npm info build /opt/app-root/src
E0228 19:19:05.619139 01056 util.go:91] npm info linkStuff my-node-app@0.0.1
E0228 19:19:05.620359 01056 util.go:91] npm info install my-node-app@0.0.1
E0228 19:19:05.622702 01056 util.go:91] npm info postinstall my-node-app@0.0.1
E0228 19:19:05.623821 01056 util.go:91] npm info prepublish my-node-app@0.0.1
E0228 19:19:05.628719 01056 util.go:91] npm info ok
[sudhaker@dell-cs24-n2 bin]$ sudo docker images | grep -v openshift
REPOSITORY                                   TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
my-node-app                                  latest              85901d40c60b        55 seconds ago      438.7 MB

Let’s test this Docker image:

[sudhaker@dell-cs24-n2 ~]$ sudo docker run --detach --publish 8080:8080 my-node-app
92c707e8bedd4d08e5e9f2edc432b1febb700102b62dfb98349fd2217e5d342e
[sudhaker@dell-cs24-n2 ~]$ curl http://localhost:8080/
My Node App v-1.0 !! Server : 92c707e8bedd

My basement data-center

[Photo: IMAG0972]

Hardware Setup

  • Biostar NM70I-847 Intel Celeron 847 | my 24×7 file-server, web-server
  • Dell CS24-SC Server | 6 servers | each with a Xeon L5420 (8 cores @ 2.5GHz); two servers with 40GB RAM, the others with 24GB
  • Dell PowerEdge C1100 (CS24-TY) | 1 Server | Xeon X5650 12 cores 2.66GHz, 96GB RAM (added recently)

Networking Setup

  • Verizon Fios @ 50/50 Mbps
  • Servers are sitting in outer DMZ (192.168.1.0/24)
  • NIC and BMC ports are assigned static IP & hostname
  • Everything is connected through a Gigabit switch

Software Setup

  • Celeron server runs: samba, docker/nginx, docker/gitlab
  • Dell servers are rebuilt frequently using the technique mentioned here. The CentOS distribution is cached locally using the technique mentioned TODO.

And I can control them from my Android phone.

[Screenshot: IPMI1]