Nov 13, 2016

First experiments with Apache ZooKeeper - a distributed key-value store.

Apache ZooKeeper is a distributed key/value store and more - a coordination service - which can be
used as a kind of service registry for the currently popular microservices architectures.

My first attempt was to run three instances as Docker containers.

As the base for my start, I used the available Docker image jplock/zookeeper from Docker Hub.

Before starting the ZooKeeper containers, I built my own image by adding my own configuration for ZooKeeper:

FROM jplock/zookeeper
MAINTAINER ......

ADD conf/zoo.cfg /opt/zookeeper/conf/zoo.cfg
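
The image can then be built and tagged; the tag ewer/zookeeper matches the image name used in the
docker-compose.yml below:

docker build -t ewer/zookeeper .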

The content of zoo.cfg is shown below. In the server.N entries, 2888 is the quorum/peer port and
3888 the leader-election port; the hostnames zoo1-zoo3 are the container names from the
docker-compose.yml:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888

The containers are started up with the help of docker-compose, using the following
docker-compose.yml configuration:

version: '2'

services:
  zoomaster:
    image: ewer/zookeeper
    ports:
      - "2181:2181"
    expose:
      - "2181"
      - "2888"
      - "3888"
    labels:
      zookeeper: master
    volumes:
      - /c/Users/name/docker_volumes/zoo1:/tmp/zookeeper
    container_name: zoo1
  zooclient2:
    image: ewer/zookeeper
    ports:
      - "2182:2181"
    expose:
        - "2181"
        - "2888"
        - "3888"
    labels:
      zookeeper: client
    volumes:
      - /c/Users/name/docker_volumes/zoo2:/tmp/zookeeper
    container_name: zoo2
  zooclient3:
    image: ewer/zookeeper
    ports:
      - "2183:2181"
    expose:
      - "2181"
      - "2888"
      - "3888"
    labels:
      zookeeper: client
    volumes:
      - /c/Users/name/docker_volumes/zoo3:/tmp/zookeeper
    container_name: zoo3

As you can see in the volumes, by default only the home directory is mounted into the
boot2docker image of the Docker engine, which is why the Cygwin-style paths under /c/Users are used.
Compose also puts the containers on a default network, on which the container names zoo1, zoo2 and
zoo3 resolve - matching the hostnames in the server.N entries of zoo.cfg.

One precondition ZooKeeper has is a file called myid in the dataDir (/tmp/zookeeper) of each
server, containing just that server's number from its server.N entry.
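
A minimal sketch for preparing those files from the shell, assuming the volume paths from the
docker-compose.yml above:

for i in 1 2 3; do
  mkdir -p /c/Users/name/docker_volumes/zoo$i
  echo $i > /c/Users/name/docker_volumes/zoo$i/myid
done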


After creating a docker machine on Windows with

docker-machine.exe create java

and setting the environment with the help of

docker-machine.exe env java

the containers can be started with:

docker-compose up -d
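
A quick sanity check, assuming netcat (nc) is available on the host: every healthy server answers
the four-letter command ruok with imok. The IP is the docker-machine's address, the ports are the
ones mapped in docker-compose.yml:

echo ruok | nc 192.168.99.100 2181
echo ruok | nc 192.168.99.100 2182
echo ruok | nc 192.168.99.100 2183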

After installing a local version of ZooKeeper to get the command-line client, the first test could
be done.

The IP address is the one of the docker-machine (retrieved by docker-machine.exe env java),
and the port is the one mapped for each container in the docker-compose.yml (2181, 2182, 2183).

Connecting to the container zoo1 and creating some data: 
PS C:\dev\Docker\zookeeper\zookeeper-3.4.9\bin> ./zkCli.cmd -server 192.168.99.100:2181
[zk: 192.168.99.100:2181(CONNECTED) 19] create /vsop 1
Created /vsop
[zk: 192.168.99.100:2181(CONNECTED) 20] create /vsop/eaiserver 1
Created /vsop/eaiserver
[zk: 192.168.99.100:2181(CONNECTED) 21] create /vsop/eaiserver/port  8181
Created /vsop/eaiserver/port
[zk: 192.168.99.100:2181(CONNECTED) 22] ls /vsop
[eaiserver]
[zk: 192.168.99.100:2181(CONNECTED) 23] ls /vsop/eaiserver/port
[]
[zk: 192.168.99.100:2181(CONNECTED) 24] get /vsop/eaiserver/port
8181

Now connect to the container zoo2 and check the data there: 

PS C:\dev\Docker\zookeeper\zookeeper-3.4.9\bin> ./zkCli.cmd -server 192.168.99.100:2182
[zk: 192.168.99.100:2182(CONNECTED) 24] get /vsop/eaiserver/port
8181
cZxid = 0x60000000d
ctime = Sat Nov 12 18:42:15 MST 2016
mZxid = 0x60000000d
mtime = Sat Nov 12 18:42:15 MST 2016
pZxid = 0x60000000d
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
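
To see the replication at work, a watch can be set in the session connected to zoo2 and triggered
from the session connected to zoo1; a sketch using the zkCli syntax of ZooKeeper 3.4.x, with a
made-up new value 8282:

[zk: 192.168.99.100:2182(CONNECTED) 25] get /vsop/eaiserver/port watch
[zk: 192.168.99.100:2181(CONNECTED) 25] set /vsop/eaiserver/port 8282

The watching session on zoo2 then prints a WatchedEvent of type NodeDataChanged for
/vsop/eaiserver/port.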


Nov 6, 2016

Docker swarm experiments

Installing the latest Docker Toolbox on Windows.

On newer Windows 10 builds a native Docker installation is also possible, but I'm on
Windows 8.1.

Create the first docker machines:

docker-machine.exe create -d virtualbox master
docker-machine.exe create -d virtualbox client1
docker-machine.exe create -d virtualbox client2

After the docker hosts are created, you can check that they are up and running:

PS C:\Users\eer> docker-machine.exe ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
client    -        virtualbox   Stopped                                       Unknown
client1   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.12.3
client2   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.3
default   -        virtualbox   Stopped                                       Unknown
java      -        virtualbox   Stopped                                       Unknown
master    -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.3


You can also create the swarm cluster directly by creating the docker machines with the corresponding options. For this see Arun Gupta's blog post: http://blog.arungupta.me/docker-machine-swarm-compose-couchbase-wildfly/


To create the swarm manager on the master machine, connect the docker client to the master
machine. Here for PowerShell:

PS C:\Users\eer> docker-machine.exe env master
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://192.168.99.100:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\eer\.docker\machine\machines\master"
$Env:DOCKER_MACHINE_NAME = "master"
# Run this command to configure your shell:
# & "C:\Program Files\Docker Toolbox\docker-machine.exe" env master | Invoke-Expression

Execute the suggested command to prepare the environment in the shell for a direct connection to
the docker host master with the docker command:

PS C:\Users\eer>  & "C:\Program Files\Docker Toolbox\docker-machine.exe" env master | Invoke-Expression
PS C:\Users\eer> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
PS C:\Users\eer> docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.12.3

Now we create the swarm manager on the master docker host:

PS C:\Users\eer> docker swarm init --advertise-addr eth1
Swarm initialized: current node (3d9a1swucqjkv88zq0y3ji4ip) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-49j9tmbt1djyqo3kztrku8ix8b4zv7ao5o1aaeft6v016rnesw-7ykp2ma39r58xherx3fhvbdl1 \
    192.168.99.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Now switch over to client1 and client2 by setting the environment with the help of
docker-machine.exe env client[1|2]:

PS C:\Users\eer> docker-machine.exe env client1
$Env:DOCKER_TLS_VERIFY = "1"                                           
$Env:DOCKER_HOST = "tcp://192.168.99.101:2376"                         
$Env:DOCKER_CERT_PATH = "C:\Users\eer\.docker\machine\machines\client1"
$Env:DOCKER_MACHINE_NAME = "client1"                                   
# Run this command to configure your shell:                           
# & "C:\Program Files\Docker Toolbox\docker-machine.exe" env client1 | Invoke-Expression                                 
                                 
PS C:\Users\eer> & "C:\Program Files\Docker Toolbox\docker-machine.exe" env client1 | Invoke-Expression                                                   

PS C:\Users\eer> docker swarm join --token SWMTKN-1-49j9tmbt1djyqo3kztrku8ix8b4zv7ao5o1aaeft6v016rnesw-7ykp2ma39r58xherx3fhvbdl1 192.168.99.100:2377      
This node joined a swarm as a worker.                                                                                                                     
PS C:\Users\eer> docker-machine.exe env client2                       
$Env:DOCKER_TLS_VERIFY = "1"                                           
$Env:DOCKER_HOST = "tcp://192.168.99.102:2376"                         
$Env:DOCKER_CERT_PATH = "C:\Users\eer\.docker\machine\machines\client2"
$Env:DOCKER_MACHINE_NAME = "client2"                                   
# Run this command to configure your shell:                           
# & "C:\Program Files\Docker Toolbox\docker-machine.exe" env client2 | Invoke-Expression                                 

PS C:\Users\eer> & "C:\Program Files\Docker Toolbox\docker-machine.exe" env client2 | Invoke-Expression                                                   

PS C:\Users\eer> docker swarm join --token SWMTKN-1-49j9tmbt1djyqo3kztrku8ix8b4zv7ao5o1aaeft6v016rnesw-7ykp2ma39r58xherx3fhvbdl1 192.168.99.100:2377      
This node joined a swarm as a worker.                                                                                                                    

Now you can check the status of the cluster:

PS C:\Users\eer> docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
3d9a1swucqjkv88zq0y3ji4ip *  master    Ready   Active        Leader
8xn6lpv4wikz6bek2so3n0hxy    client1   Ready   Active
aw2u3zo6h86dlmuj2bj901wp5    client2   Ready   Active


Now create a service:
PS C:\Users\eer> docker service create alpine sleep 50000
777p6xwqyscohd6gfg8hr6gn4

The service was not given a name here, so Docker generated one; check for it:
PS C:\Users\eer> docker service ls
ID            NAME       REPLICAS  IMAGE   COMMAND
777p6xwqysco  evil_shaw  1/1       alpine  sleep 50000
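
A name and the desired number of replicas can also be given directly at creation time; a sketch,
with the service name zoo-sleep being made up here:

docker service create --name zoo-sleep --replicas 3 alpine sleep 50000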

Now we can scale up the service:
PS C:\Users\eer> docker service scale evil_shaw=5
evil_shaw scaled to 5

And now check the instances:
PS C:\Users\eer> docker service ps evil_shaw
ID                         NAME         IMAGE   NODE     DESIRED STATE  CURRENT STATE               ERROR
e879pkhutr20jsiyro2q0hyf8  evil_shaw.1  alpine  master   Running        Running about a minute ago
55rxpvnjcucu4dvf4u8q3h086  evil_shaw.2  alpine  client1  Running        Running 43 seconds ago
2gz1fc8jatnrcev6dj5pcak8d  evil_shaw.3  alpine  client1  Running        Running 43 seconds ago
8gnyvfiinjcvqj8cakffca54r  evil_shaw.4  alpine  master   Running        Running 50 seconds ago
76ys2l08mv277hvmtco38ygia  evil_shaw.5  alpine  client2  Running        Running 42 seconds ago
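
When the experiment is done, the service and all of its tasks on the nodes can be removed again:

docker service rm evil_shaw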



Nov 2, 2016

What to do when svn cleanup fails

I'm using TortoiseSVN on Windows. Today an "svn update" crashed somehow, and afterwards
"svn cleanup" could not finish successfully.

I found a hint here: http://stackoverflow.com/questions/158664/what-to-do-when-svn-cleanup-fails

I started DBeaver and created a DB connection to the .svn/wc.db (an SQLite database).
After opening the database, I removed everything from the WORK_QUEUE table, and after that
"svn cleanup" worked again. This saved me from having to do a full checkout again.