Tuesday, June 28, 2016

Elastic Beats (Docker)



This post completes the setup shown in the images below. Follow the full walkthrough on DigitalOcean.com.

Image from https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04



Beats (clients) send log data to Logstash (server).

1) Download Filebeat from Elastic.co and install it

$ sudo dpkg -i filebeat_1.2.3_amd64.deb
$ sudo apt-get install -f

-- or --

$ sudo echo "deb https://packages.elastic.co/beats/apt stable main" |  sudo tee -a /etc/apt/sources.list.d/beats.list
$ sudo wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install filebeat
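
If you installed from the apt repository, it is worth confirming the package and making Filebeat start on boot. A small sketch (the update-rc.d line follows the DigitalOcean tutorial and assumes Ubuntu 14.04 with sysvinit):

$ dpkg -s filebeat | grep -E 'Status|Version'
$ sudo update-rc.d filebeat defaults 95 10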

2) Configure Filebeat

Copy the public certificate from the Logstash container over SSH:


nutt@nutt-pc:~/pki/tls/certs$ scp ubuntu@10.0.2.41:docker/logstash/config/pki/tls/certs/logstash-forwarder.crt .                                                  
logstash-forwarder.crt                                                                                                          100% 1229     1.2KB/s   00:00    
nutt@nutt-pc:~/pki/tls/certs$ ls
logstash-forwarder.crt

sudo vi /etc/filebeat/filebeat.yml

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        #- /var/log/*.log
      document_type: syslog
output:
  #elasticsearch:
  logstash:
    hosts: ["10.0.2.41:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/home/nutt/pki/tls/certs/logstash-forwarder.crt"]


** Be aware that tab characters in a YAML file may cause Filebeat to fail to start; indent with spaces only.
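
After saving filebeat.yml, restart Filebeat and confirm that events reach Elasticsearch through Logstash. A quick sketch using the hosts from this post (the filebeat-* index name comes from the Logstash output configuration shown below):

$ sudo service filebeat restart

$ curl -XGET 'http://10.0.2.41:9200/filebeat-*/_search?pretty'

A non-zero hits.total means log lines are flowing end to end.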









Logstash basic configuration (Docker)


Configuration files

logstash.conf has 3 sections:
1) input - standard input, log files, Filebeat, etc.
2) filter - filter and transform the stream contents
3) output - send events to Elasticsearch

input { stdin { } }
output { stdout { } }
-- or --
input { stdin { } }
output {
  elasticsearch {
    hosts => ["esearch:9200"]
  }
}
--or--
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["esearch:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Run the Logstash container

$ docker run --restart=always --expose=5044 --name logstash-es -d \
  -p 10.0.2.41:5044:5044 --net="my-net" --add-host="esearch:172.28.5.1" \
  --mac-address="c2:00:c6:bb:c8:e2" --ip="172.28.5.3" \
  -v /home/ubuntu/docker/logstash/config:/config \
  logstash:2.3.3-1 logstash -f /config/logstash.conf
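
To sanity-check the pipeline, the same image can validate the configuration before (or after) the container is started, and the running container's logs and published ports can be inspected. A sketch assuming the same mounted /config directory (--configtest is the Logstash 2.x option for validating a config file):

$ docker run --rm -v /home/ubuntu/docker/logstash/config:/config logstash:2.3.3-1 logstash -f /config/logstash.conf --configtest
$ docker logs -f logstash-es
$ docker port logstash-es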

Generate SSL Certificates
In this case we generate the key pair on the Docker host; the Logstash container reads it through the mounted /config directory.

** Important: you must add the Logstash server's IP as a subjectAltName in openssl.cnf:

...
[ v3_ca ]
subjectAltName = IP: 10.0.2.41
...


ubuntu@node1:~/docker/logstash/config/pki/tls$ sudo vi /etc/ssl/openssl.cnf 
ubuntu@node1:~/docker/logstash/config/pki/tls$ 
ubuntu@node1:~/docker/logstash/config/pki/tls$ mkdir certs private
ubuntu@node1:~/docker/logstash/config/pki/tls$ sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
.................................+++
.....................+++
writing new private key to 'private/logstash-forwarder.key'
-----
Remark: *.key is the private key (keep it on the server); *.crt is the public certificate and can be distributed to Beats agents (clients).
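
To confirm that the IP subjectAltName really made it into the certificate, check it with openssl from the same tls directory:

$ openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A 1 'Subject Alternative Name'

The output should include IP Address:10.0.2.41; if it does not, the openssl.cnf change above did not take effect.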

Change the configuration file to use the mounted /config paths

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/config/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/config/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["esearch:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Restart container

$ docker stop logstash-es
$ docker start logstash-es





Filter customization
Alternatively, build the pattern with Grok Constructor.


filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "\A%{SYSLOGTIMESTAMP:syslog_timestamp}%{SPACE}%{SYSLOGHOST:syslog_hostname}%{SPACE}%{SYSLOGPROG}: %{GREEDYDATA:syslog_message}" }
      add_field => {
        "syslog_program" => "%{program}"
        "syslog_pid" => "%{pid}"
        "received_at" => "%{@timestamp}"
        "received_from" => "%{host}"
      }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
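
To try the pattern without touching the live pipeline, a throwaway config that reads from stdin can be used. A sketch, where grok-test.conf is a hypothetical file dropped into the mounted config directory and the grok match is the same one shown above:

input { stdin { } }
filter {
  grok {
    match => { "message" => "\A%{SYSLOGTIMESTAMP:syslog_timestamp}%{SPACE}%{SYSLOGHOST:syslog_hostname}%{SPACE}%{SYSLOGPROG}: %{GREEDYDATA:syslog_message}" }
  }
}
output { stdout { codec => rubydebug } }

$ echo "Jun 28 10:15:01 node1 CRON[1234]: test message" | docker run --rm -i -v /home/ubuntu/docker/logstash/config:/config logstash:2.3.3-1 logstash -f /config/grok-test.conf

The event printed by the rubydebug codec should show syslog_timestamp, syslog_hostname, program, pid and syslog_message extracted from the test line.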




Docker Troubleshooting



Docker for Windows (beta release)


1. MobyLinuxVM on Microsoft Hyper-V cannot start after the machine restarts


Workaround:
Open a command prompt with Administrator privileges and run:

C:\> bcdedit /set hypervisorlaunchtype Auto

Then restart Windows.



Create Dockerfile


Format:
INSTRUCTION arguments

Comment:
# This line is a comment

Parser directive (a special type of comment):
# directive=value (not shown as a build step)
# escape=\ (backslash is the default escape character, which lets an instruction span multiple lines)

FROM <image>
FROM must be the first instruction; it sets the base image. (Required)

Environment replacement: the following instructions support environment variable substitution ($var or ${var}) using variables declared with ENV:

ADD

COPY

ENV
 ENV abc=hello
 ENV abc=bye def=$abc     # def gets the value "hello": $abc resolves to the value set by the previous instruction
 ENV ghi=$abc             # ghi gets the value "bye"

EXPOSE

LABEL

USER

WORKDIR

VOLUME

STOPSIGNAL

Instructions

# Initialize from a base image (a tag or digest is optional)
FROM <image>[:<tag>]
FROM <image>@<digest>

# Set an author name
MAINTAINER <name>

# 1) Shell form: the command runs in a shell, which is /bin/sh -c on Linux or cmd /S /C on Windows
RUN <command>

Ex.
RUN /bin/bash -c 'source $HOME/.bashrc ;\
echo $HOME'

RUN /bin/bash -c 'source $HOME/.bashrc ; echo $HOME'

# 2) Exec form: the command runs directly, without a shell
RUN ["executable", "param1", "param2"]

Ex.
RUN ["/bin/bash", "-c", "echo hello"]
RUN [ "echo", "$HOME" ]            # exec form does not invoke a shell, so $HOME is not expanded
RUN [ "sh", "-c", "echo $HOME" ]   # invoke a shell explicitly when variable expansion is needed

CMD
Provides the default command (or default parameters) for an executing container;
if CMD is omitted, the container can still be executed through its ENTRYPOINT instead.

1) Shell form
CMD command param1 param2

2) Execution form
CMD ["executable","param1","param2"]

3) Default parameters to ENTRYPOINT
CMD ["param1","param2"]

# Add metadata to the image; labels can be viewed with the docker inspect command
LABEL <key>=<value> <key>=<value> <key>=<value> ...
Ex.
LABEL "com.example.vendor"="ACME Incorporated"
LABEL com.example.label-with-value="foo"
LABEL version="1.0"
LABEL description="This text illustrates \
that label-values can span multiple lines."

# Inform Docker that the container listens on the specified ports.
# The ports must still be published with -p or -P in the docker run command to accept requests from outside the container.
EXPOSE
EXPOSE <port> [<port>...]

ENV <key> <value>
ENV <key>=<value> ...

#  Copies new files, directories or remote file URLs
# All new files and directories are created with a UID and GID of 0
ADD <src>... <dest>
ADD ["<src>",... "<dest>"]

COPY <src>... <dest>
COPY ["<src>",... "<dest>"]

ENTRYPOINT ["executable", "param1", "param2"]
ENTRYPOINT command param1 param2
Ex.
ENTRYPOINT ["top", "-b"]
CMD ["-c"]

ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Combination of CMD and ENTRYPOINT

                           | No ENTRYPOINT              | ENTRYPOINT exec_entry p1_entry                            | ENTRYPOINT ["exec_entry", "p1_entry"]
No CMD                     | Not allowed                | /bin/sh -c exec_entry p1_entry                            | exec_entry p1_entry
CMD ["exec_cmd", "p1_cmd"] | exec_cmd p1_cmd            | /bin/sh -c exec_entry p1_entry exec_cmd p1_cmd            | exec_entry p1_entry exec_cmd p1_cmd
CMD ["p1_cmd", "p2_cmd"]   | p1_cmd p2_cmd              | /bin/sh -c exec_entry p1_entry p1_cmd p2_cmd              | exec_entry p1_entry p1_cmd p2_cmd
CMD exec_cmd p1_cmd        | /bin/sh -c exec_cmd p1_cmd | /bin/sh -c exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd | exec_entry p1_entry /bin/sh -c exec_cmd p1_cmd

# Create mount point
VOLUME ["/data"]

# Set the user used when running the image and for any following RUN, CMD and ENTRYPOINT instructions
USER daemon

WORKDIR /path/to/workdir

WORKDIR /a
WORKDIR b
WORKDIR c
The working directory after running the three commands above is /a/b/c.

# Define a parameter can pass at build time with --build-arg <varname>=<value>
ARG <name>[=<default value>]

# Register a trigger instruction to be executed later, when this image is used as the base of another build

ONBUILD [INSTRUCTION]

# Set the system call signal that will be sent to the container to exit
STOPSIGNAL signal

# Define a command that Docker runs to check whether the container is still working
HEALTHCHECK [OPTIONS] CMD command
HEALTHCHECK NONE

# Override the default shell used for the shell form of commands
SHELL ["executable", "parameters"]




Monday, June 27, 2016

Elasticsearch first drive (Docker)



Download sample dataset from Elastic.co

Load data


nutt@nutt-pc:~/elastic/dataset$ head -10 accounts.json 
{"index":{"_id":"1"}}
{"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"gender":"M","address":"880 Holmes Lane","employer":"Pyrami","email":"amberduke@pyrami.com","city":"Brogan","state":"IL"}
{"index":{"_id":"6"}}
{"account_number":6,"balance":5686,"firstname":"Hattie","lastname":"Bond","age":36,"gender":"M","address":"671 Bristol Street","employer":"Netagy","email":"hattiebond@netagy.com","city":"Dante","state":"TN"}
{"index":{"_id":"13"}}
{"account_number":13,"balance":32838,"firstname":"Nanette","lastname":"Bates","age":28,"gender":"F","address":"789 Madison Street","employer":"Quility","email":"nanettebates@quility.com","city":"Nogal","state":"VA"}
{"index":{"_id":"18"}}
{"account_number":18,"balance":4180,"firstname":"Dale","lastname":"Adams","age":33,"gender":"M","address":"467 Hutchinson Court","employer":"Boink","email":"daleadams@boink.com","city":"Orick","state":"MD"}
{"index":{"_id":"20"}}
{"account_number":20,"balance":16418,"firstname":"Elinor","lastname":"Ratliff","age":36,"gender":"M","address":"282 Kings Place","employer":"Scentric","email":"elinorratliff@scentric.com","city":"Ribera","state":"WA"}

nutt@nutt-pc:~/elastic/dataset$ curl -XPOST 'node1.maas:9200/bank/account/_bulk?pretty' --data-binary "@accounts.json"
nutt@nutt-pc:~/elastic/dataset$ curl 'node1.maas:9200/_cat/indices?v'                  
health status index   pri rep docs.count docs.deleted store.size pri.store.size 
yellow open   .kibana   1   1          1            0      3.1kb          3.1kb 
yellow open   bank      5   1       1000            0    442.2kb        442.2kb 


Access data

curl -X<REST Verb> <Node>:<Port>/<Index>/<Type>/<ID>
-or-
Chrome browser + Sense plugins
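
As a concrete example of the curl form above, fetch a document from the bank index loaded earlier (adjust the node name to your environment):

$ curl -XGET 'node1.maas:9200/bank/account/1?pretty'

The queries below are written in the Sense plugin's format.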

GET _search
{
   "query": {
      "match_all": {}
   }
}
GET _aliases
GET /_cat/health?v
GET /_cat/nodes?v
PUT /customer?pretty
GET /_cat/indices?v
PUT /customer/external/1?pretty
{
  "name": "John Doe"
}
GET /customer/external/1?pretty
DELETE /customer?pretty
GET /_cat/indices?v
DELETE /customer
PUT /customer/external/1?pretty
{
    "name": "John Doe"
}
GET /customer/external/1?pretty
PUT /customer/external/1?pretty
{
    "name": "Jane Doe"
}
PUT /customer/external/2?pretty
{
    "name": "Jane Doe"
}
POST /customer/external?pretty
{
    "name": "Jane Doe"
}
GET /customer/_search
POST /customer/external/1/_update?pretty
{
    "doc": { "name": "Jane Doe" }
}
POST /customer/external/1/_update?pretty
{
    "doc": { "name": "Jane Doe", "age": 20 }
}
POST /customer/external/1/_update?pretty
{
    "script" : "ctx._source.age += 5"
}
** WARNING: The last command uses inline scripting (Groovy), which is disabled by default.


{
   "error": {
      "root_cause": [
         {
            "type": "remote_transport_exception",
            "reason": "[Ozymandias][172.28.5.1:9300][indices:data/write/update[s]]"
         }
      ],
      "type": "illegal_argument_exception",
      "reason": "failed to execute script",
      "caused_by": {
         "type": "script_exception",
         "reason": "scripts of type [inline], operation [update] and lang [groovy] are disabled"
      }
   },
   "status": 400
}

Change "config/elasticsearch.yml" to allow scripting but attended to use scripting in non-security environment.

script.inline: true*
script.indexed: true*

* Both have a possible values true|false|sandbox


ubuntu@node1:~/docker/elasticsearch/config$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                NAMES
11b8fd312d32        elasticsearch:2.3.3   "/docker-entrypoint.s"   29 minutes ago      Up 29 minutes       10.0.2.41:9200->9200/tcp, 10.0.2.41:9300->9300/tcp   esearch
d353dbbe21d3        kibana:4.5.1          "/docker-entrypoint.s"   8 hours ago         Up About an hour    10.0.2.41:5601->5601/tcp                             kibana-es
ubuntu@node1:~/docker/elasticsearch/config$ docker stop esearch
esearch
ubuntu@node1:~/docker/elasticsearch/config$ nano elasticsearch.yml 
ubuntu@node1:~/docker/elasticsearch/config$ cat elasticsearch.yml 
network.host: 0.0.0.0
script.inline: true
script.indexed: true
ubuntu@node1:~/docker/elasticsearch/config$ docker start esearch
esearch
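
With scripting enabled and the container restarted, the inline-script update that failed earlier should now work. A quick re-check with curl against the customer document created above:

$ curl -XPOST 'node1.maas:9200/customer/external/1/_update?pretty' -d '
{
    "script" : "ctx._source.age += 5"
}'
$ curl -XGET 'node1.maas:9200/customer/external/1?pretty'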







Friday, June 24, 2016

Elasticsearch migrates data and configuration to docker host directory


Method 1

$docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH

Connect to a running Elasticsearch container

ubuntu@node1:~$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                NAMES
d353dbbe21d3        kibana:4.5.1          "/docker-entrypoint.s"   2 hours ago         Up 2 hours          10.0.2.41:5601->5601/tcp                             kibana-es
f3dbc0e04fc7        elasticsearch:2.3.3   "/docker-entrypoint.s"   2 hours ago         Up 37 seconds       10.0.2.41:9200->9200/tcp, 10.0.2.41:9300->9300/tcp   esearch
ubuntu@node1:~$ 
ubuntu@node1:~$ docker exec -it --user elasticsearch esearch bash
elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$ id                                                                                       
uid=105(elasticsearch) gid=108(elasticsearch) groups=108(elasticsearch)
elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$

The default config, data and logs directories are under "/usr/share/elasticsearch"

elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$ tar cf /tmp/data.tar data
elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$ tar cf /tmp/config.tar config
elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$ tar cf /tmp/logs.tar logs    
elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$ ls -l /tmp/*.tar
-rw-r--r-- 1 elasticsearch elasticsearch  10240 Jun 24 12:47 /tmp/config.tar
-rw-r--r-- 1 elasticsearch elasticsearch 614400 Jun 24 12:46 /tmp/data.tar
-rw-r--r-- 1 elasticsearch elasticsearch  10240 Jun 24 12:47 /tmp/logs.tar
elasticsearch@f3dbc0e04fc7:/usr/share/elasticsearch$ exit


ubuntu@node1:~/docker/elasticsearch/backup$ docker exec esearch bash -c "ls -l /tmp/*.tar"
-rw-r--r-- 1 elasticsearch elasticsearch  10240 Jun 24 12:47 /tmp/config.tar
-rw-r--r-- 1 elasticsearch elasticsearch 614400 Jun 24 12:46 /tmp/data.tar
-rw-r--r-- 1 elasticsearch elasticsearch  10240 Jun 24 12:47 /tmp/logs.tar
ubuntu@node1:~/docker/elasticsearch/backup$ 
ubuntu@node1:~/docker/elasticsearch/backup$ docker cp esearch:/tmp/config.tar .
ubuntu@node1:~/docker/elasticsearch/backup$ docker cp esearch:/tmp/data.tar .
ubuntu@node1:~/docker/elasticsearch/backup$ docker cp esearch:/tmp/logs.tar .
ubuntu@node1:~/docker/elasticsearch/backup$ ls -l *.tar
-rw-r--r-- 1 ubuntu ubuntu  10240 Jun 24 19:47 config.tar
-rw-r--r-- 1 ubuntu ubuntu 614400 Jun 24 19:46 data.tar
-rw-r--r-- 1 ubuntu ubuntu  10240 Jun 24 19:47 logs.tar


Method 2: copy with a tar pipeline:

$docker exec foo tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | tar Cxf DEST_PATH -

The tar options used here mean:
  • C -> change to the given directory
  • c -> create a new archive
  • f -> specify the archive file or device
  • x -> extract from an archive

ubuntu@node1:~/docker/elasticsearch/backup$ docker exec esearch tar Ccf /usr/share/elasticsearch - config | tar Cxf . -
ubuntu@node1:~/docker/elasticsearch/backup$ 
ubuntu@node1:~/docker/elasticsearch/backup$ docker exec esearch tar Ccf /usr/share/elasticsearch - data | tar Cxf . -
ubuntu@node1:~/docker/elasticsearch/backup$ 
ubuntu@node1:~/docker/elasticsearch/backup$ docker exec esearch tar Ccf /usr/share/elasticsearch - logs | tar Cxf . -
ubuntu@node1:~/docker/elasticsearch/backup$ 
ubuntu@node1:~/docker/elasticsearch/backup$ ls -l
total 12
drwxr-xr-x 3 ubuntu ubuntu 4096 Jun  9 21:58 config
drwxr-xr-x 3 ubuntu ubuntu 4096 Jun 24 12:08 data
drwxr-xr-x 2 ubuntu ubuntu 4096 Jun  9 21:58 logs


Restore

Restore to host directory

ubuntu@node1:~/docker/elasticsearch/backup$ cd ..
ubuntu@node1:~/docker/elasticsearch$ 
ubuntu@node1:~/docker/elasticsearch$ tar xvf backup/config.tar 
config/
config/logging.yml
config/elasticsearch.yml
config/scripts/
ubuntu@node1:~/docker/elasticsearch$ tar xvf backup/data.tar 
data/
data/elasticsearch/
data/elasticsearch/nodes/
data/elasticsearch/nodes/0/
data/elasticsearch/nodes/0/indices/
data/elasticsearch/nodes/0/indices/.kibana/
data/elasticsearch/nodes/0/indices/.kibana/0/
data/elasticsearch/nodes/0/indices/.kibana/0/index/
data/elasticsearch/nodes/0/indices/.kibana/0/index/segments_5
data/elasticsearch/nodes/0/indices/.kibana/0/index/write.lock
data/elasticsearch/nodes/0/indices/.kibana/0/index/_1.cfs
data/elasticsearch/nodes/0/indices/.kibana/0/index/_1.si
data/elasticsearch/nodes/0/indices/.kibana/0/index/_1.cfe
data/elasticsearch/nodes/0/indices/.kibana/0/_state/
data/elasticsearch/nodes/0/indices/.kibana/0/_state/state-3.st
data/elasticsearch/nodes/0/indices/.kibana/0/translog/
data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-4.ckp
data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog.ckp
data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-5.tlog
data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/.kibana/0/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/.kibana/_state/
data/elasticsearch/nodes/0/indices/.kibana/_state/state-4.st
data/elasticsearch/nodes/0/indices/bank/
data/elasticsearch/nodes/0/indices/bank/3/
data/elasticsearch/nodes/0/indices/bank/3/index/
data/elasticsearch/nodes/0/indices/bank/3/index/_0.cfs
data/elasticsearch/nodes/0/indices/bank/3/index/_0.cfe
data/elasticsearch/nodes/0/indices/bank/3/index/segments_5
data/elasticsearch/nodes/0/indices/bank/3/index/_0.si
data/elasticsearch/nodes/0/indices/bank/3/index/write.lock
data/elasticsearch/nodes/0/indices/bank/3/_state/
data/elasticsearch/nodes/0/indices/bank/3/_state/state-2.st
data/elasticsearch/nodes/0/indices/bank/3/translog/
data/elasticsearch/nodes/0/indices/bank/3/translog/translog.ckp
data/elasticsearch/nodes/0/indices/bank/3/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/bank/3/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/bank/3/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/bank/3/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/bank/3/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/bank/4/
data/elasticsearch/nodes/0/indices/bank/4/index/
data/elasticsearch/nodes/0/indices/bank/4/index/_0.cfs
data/elasticsearch/nodes/0/indices/bank/4/index/_0.cfe
data/elasticsearch/nodes/0/indices/bank/4/index/segments_5
data/elasticsearch/nodes/0/indices/bank/4/index/_0.si
data/elasticsearch/nodes/0/indices/bank/4/index/write.lock
data/elasticsearch/nodes/0/indices/bank/4/_state/
data/elasticsearch/nodes/0/indices/bank/4/_state/state-2.st
data/elasticsearch/nodes/0/indices/bank/4/translog/
data/elasticsearch/nodes/0/indices/bank/4/translog/translog.ckp
data/elasticsearch/nodes/0/indices/bank/4/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/bank/4/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/bank/4/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/bank/4/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/bank/4/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/bank/1/
data/elasticsearch/nodes/0/indices/bank/1/index/
data/elasticsearch/nodes/0/indices/bank/1/index/_0.cfs
data/elasticsearch/nodes/0/indices/bank/1/index/_0.cfe
data/elasticsearch/nodes/0/indices/bank/1/index/segments_5
data/elasticsearch/nodes/0/indices/bank/1/index/_0.si
data/elasticsearch/nodes/0/indices/bank/1/index/write.lock
data/elasticsearch/nodes/0/indices/bank/1/_state/
data/elasticsearch/nodes/0/indices/bank/1/_state/state-2.st
data/elasticsearch/nodes/0/indices/bank/1/translog/
data/elasticsearch/nodes/0/indices/bank/1/translog/translog.ckp
data/elasticsearch/nodes/0/indices/bank/1/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/bank/1/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/bank/1/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/bank/1/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/bank/1/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/bank/0/
data/elasticsearch/nodes/0/indices/bank/0/index/
data/elasticsearch/nodes/0/indices/bank/0/index/_0.cfs
data/elasticsearch/nodes/0/indices/bank/0/index/_0.cfe
data/elasticsearch/nodes/0/indices/bank/0/index/segments_5
data/elasticsearch/nodes/0/indices/bank/0/index/_0.si
data/elasticsearch/nodes/0/indices/bank/0/index/write.lock
data/elasticsearch/nodes/0/indices/bank/0/_state/
data/elasticsearch/nodes/0/indices/bank/0/_state/state-2.st
data/elasticsearch/nodes/0/indices/bank/0/translog/
data/elasticsearch/nodes/0/indices/bank/0/translog/translog.ckp
data/elasticsearch/nodes/0/indices/bank/0/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/bank/0/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/bank/0/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/bank/0/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/bank/0/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/bank/_state/
data/elasticsearch/nodes/0/indices/bank/_state/state-3.st
data/elasticsearch/nodes/0/indices/bank/2/
data/elasticsearch/nodes/0/indices/bank/2/index/
data/elasticsearch/nodes/0/indices/bank/2/index/_0.cfs
data/elasticsearch/nodes/0/indices/bank/2/index/_0.cfe
data/elasticsearch/nodes/0/indices/bank/2/index/segments_5
data/elasticsearch/nodes/0/indices/bank/2/index/_0.si
data/elasticsearch/nodes/0/indices/bank/2/index/write.lock
data/elasticsearch/nodes/0/indices/bank/2/_state/
data/elasticsearch/nodes/0/indices/bank/2/_state/state-2.st
data/elasticsearch/nodes/0/indices/bank/2/translog/
data/elasticsearch/nodes/0/indices/bank/2/translog/translog.ckp
data/elasticsearch/nodes/0/indices/bank/2/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/bank/2/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/bank/2/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/bank/2/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/bank/2/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/customer/
data/elasticsearch/nodes/0/indices/customer/3/
data/elasticsearch/nodes/0/indices/customer/3/index/
data/elasticsearch/nodes/0/indices/customer/3/index/_4.cfe
data/elasticsearch/nodes/0/indices/customer/3/index/_4.cfs
data/elasticsearch/nodes/0/indices/customer/3/index/_4.si
data/elasticsearch/nodes/0/indices/customer/3/index/segments_7
data/elasticsearch/nodes/0/indices/customer/3/index/write.lock
data/elasticsearch/nodes/0/indices/customer/3/_state/
data/elasticsearch/nodes/0/indices/customer/3/_state/state-2.st
data/elasticsearch/nodes/0/indices/customer/3/translog/
data/elasticsearch/nodes/0/indices/customer/3/translog/translog-4.ckp
data/elasticsearch/nodes/0/indices/customer/3/translog/translog.ckp
data/elasticsearch/nodes/0/indices/customer/3/translog/translog-5.tlog
data/elasticsearch/nodes/0/indices/customer/3/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/customer/3/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/customer/3/translog/translog-3.ckp
data/elasticsearch/nodes/0/indices/customer/4/
data/elasticsearch/nodes/0/indices/customer/4/index/
data/elasticsearch/nodes/0/indices/customer/4/index/segments_4
data/elasticsearch/nodes/0/indices/customer/4/index/write.lock
data/elasticsearch/nodes/0/indices/customer/4/_state/
data/elasticsearch/nodes/0/indices/customer/4/_state/state-2.st
data/elasticsearch/nodes/0/indices/customer/4/translog/
data/elasticsearch/nodes/0/indices/customer/4/translog/translog.ckp
data/elasticsearch/nodes/0/indices/customer/4/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/customer/4/translog/translog-1.ckp
data/elasticsearch/nodes/0/indices/customer/4/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/customer/4/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/customer/4/translog/translog-1.tlog
data/elasticsearch/nodes/0/indices/customer/1/
data/elasticsearch/nodes/0/indices/customer/1/index/
data/elasticsearch/nodes/0/indices/customer/1/index/segments_4
data/elasticsearch/nodes/0/indices/customer/1/index/write.lock
data/elasticsearch/nodes/0/indices/customer/1/_state/
data/elasticsearch/nodes/0/indices/customer/1/_state/state-2.st
data/elasticsearch/nodes/0/indices/customer/1/translog/
data/elasticsearch/nodes/0/indices/customer/1/translog/translog.ckp
data/elasticsearch/nodes/0/indices/customer/1/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/customer/1/translog/translog-1.ckp
data/elasticsearch/nodes/0/indices/customer/1/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/customer/1/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/customer/1/translog/translog-1.tlog
data/elasticsearch/nodes/0/indices/customer/0/
data/elasticsearch/nodes/0/indices/customer/0/index/
data/elasticsearch/nodes/0/indices/customer/0/index/segments_4
data/elasticsearch/nodes/0/indices/customer/0/index/write.lock
data/elasticsearch/nodes/0/indices/customer/0/_state/
data/elasticsearch/nodes/0/indices/customer/0/_state/state-2.st
data/elasticsearch/nodes/0/indices/customer/0/translog/
data/elasticsearch/nodes/0/indices/customer/0/translog/translog.ckp
data/elasticsearch/nodes/0/indices/customer/0/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/customer/0/translog/translog-1.ckp
data/elasticsearch/nodes/0/indices/customer/0/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/customer/0/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/customer/0/translog/translog-1.tlog
data/elasticsearch/nodes/0/indices/customer/_state/
data/elasticsearch/nodes/0/indices/customer/_state/state-4.st
data/elasticsearch/nodes/0/indices/customer/2/
data/elasticsearch/nodes/0/indices/customer/2/index/
data/elasticsearch/nodes/0/indices/customer/2/index/_0.cfs
data/elasticsearch/nodes/0/indices/customer/2/index/_0.cfe
data/elasticsearch/nodes/0/indices/customer/2/index/segments_5
data/elasticsearch/nodes/0/indices/customer/2/index/_0.si
data/elasticsearch/nodes/0/indices/customer/2/index/write.lock
data/elasticsearch/nodes/0/indices/customer/2/index/_1.cfs
data/elasticsearch/nodes/0/indices/customer/2/index/_1.si
data/elasticsearch/nodes/0/indices/customer/2/index/_1.cfe
data/elasticsearch/nodes/0/indices/customer/2/_state/
data/elasticsearch/nodes/0/indices/customer/2/_state/state-2.st
data/elasticsearch/nodes/0/indices/customer/2/translog/
data/elasticsearch/nodes/0/indices/customer/2/translog/translog.ckp
data/elasticsearch/nodes/0/indices/customer/2/translog/translog-2.ckp
data/elasticsearch/nodes/0/indices/customer/2/translog/translog-4.tlog
data/elasticsearch/nodes/0/indices/customer/2/translog/translog-2.tlog
data/elasticsearch/nodes/0/indices/customer/2/translog/translog-3.tlog
data/elasticsearch/nodes/0/indices/customer/2/translog/translog-3.ckp
data/elasticsearch/nodes/0/_state/
data/elasticsearch/nodes/0/_state/global-3.st
data/elasticsearch/nodes/0/node.lock
ubuntu@node1:~/docker/elasticsearch$ tar xvf backup/logs.tar 
logs/
ubuntu@node1:~/docker/elasticsearch$ ls -lt
total 20
drwxrwxr-x 5 ubuntu ubuntu 4096 Jun 24 19:52 backup
drwxr-xr-x 3 ubuntu ubuntu 4096 Jun 24 12:08 data
-rw-rw-r-- 1 ubuntu ubuntu 1760 Jun 22 10:25 Dockerfile
drwxr-xr-x 3 ubuntu ubuntu 4096 Jun  9 21:58 config
drwxr-xr-x 2 ubuntu ubuntu 4096 Jun  9 21:58 logs

Run an Elasticsearch container with the restored host directories mounted as volumes


ubuntu@node1:~/docker/elasticsearch/config$ docker run \
--volume=/home/ubuntu/docker/elasticsearch/logs:/usr/share/elasticsearch/logs:rw \
--volume=/home/ubuntu/docker/elasticsearch/data:/usr/share/elasticsearch/data:rw \
--volume=/home/ubuntu/docker/elasticsearch/config:/usr/share/elasticsearch/config:rw \
--restart=always --name esearch -d -p 10.0.2.41:9200:9200 -p 10.0.2.41:9300:9300 \
--net="my-net" --mac-address="7a:07:51:53:31:21" --add-host="esearch:172.28.5.1" --ip="172.28.5.1" elasticsearch:2.3.3
11b8fd312d324364c1169f0c4891e84180ccc7d06361c4764b4bfbd75cb72e23
ubuntu@node1:~/docker/elasticsearch/config$ 
ubuntu@node1:~/docker/elasticsearch/config$ docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                NAMES
11b8fd312d32        elasticsearch:2.3.3   "/docker-entrypoint.s"   10 seconds ago      Up 8 seconds        10.0.2.41:9200->9200/tcp, 10.0.2.41:9300->9300/tcp   esearch
d353dbbe21d3        kibana:4.5.1          "/docker-entrypoint.s"   8 hours ago         Up About an hour    10.0.2.41:5601->5601/tcp                             kibana-es
ubuntu@node1:~/docker/elasticsearch/config$ docker exec -it esearch bash
root@11b8fd312d32:/usr/share/elasticsearch# 
root@11b8fd312d32:/usr/share/elasticsearch# ls -lt
total 44
drwxr-xr-x 3 elasticsearch elasticsearch 4096 Jun 24 05:08 data
drwxr-xr-x 3          1000          1000 4096 Jun  9 14:58 config
drwxr-xr-x 2          1000          1000 4096 Jun  9 14:58 logs
drwxr-xr-x 2 root          root          4096 Jun  9 14:58 bin
drwxr-xr-x 2 root          root          4096 Jun  9 14:58 lib
drwxr-xr-x 5 root          root          4096 Jun  9 14:58 modules
-rw-r--r-- 1 root          root           150 May 17 15:48 NOTICE.txt
-rw-r--r-- 1 root          root          8700 May 17 15:48 README.textile
drwxr-xr-x 2 elasticsearch elasticsearch 4096 May 17 15:48 plugins

Enjoy Elasticsearch with your data preserved across container stops and starts. The data and configuration now live in local directories on the Docker host.
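
A quick way to convince yourself that the data now lives on the host (a sketch using the paths and ports from this post):

$ docker restart esearch
$ curl '10.0.2.41:9200/_cat/indices?v'
$ ls /home/ubuntu/docker/elasticsearch/data/elasticsearch/nodes/0/indices

The indices loaded earlier should still be listed after the restart, and the same index data is visible in the host directory.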



Good luck ;-)

Wednesday, June 22, 2016

Docker test drive (v1.11)



Detached mode =>

docker run \
 --name esearch \
 -d \
 -p 10.0.2.41:9200:9200 \
 -p 10.0.2.41:9300:9300 \
 elasticsearch:2.3.3

docker logs -f esearch


Foreground mode =>

docker run \
 --name esearch-fg \
 -i -t \
 -p 10.0.2.41:9200:9200 \
 -p 10.0.2.41:9300:9300 \
 elasticsearch:2.3.3 \
/bin/bash


Create user defined network

$docker network create \
--driver=bridge \
--subnet=172.28.0.0/16 \
--ip-range=172.28.5.0/24 \
--gateway=172.28.5.254 \
my-net

$docker network ls
$docker network inspect my-net
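
To check that my-net hands out addresses from the expected range, attach a throwaway container and look at its /etc/hosts (a sketch; 172.28.5.100 is just an unused address in the range):

$ docker run --rm --net="my-net" --ip="172.28.5.100" ubuntu cat /etc/hosts

The last line should map 172.28.5.100 to the container's hostname.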


Setting a static IP address (supported only on a user-defined network)
...
--net="my-net"
--add-host="esearch:172.28.5.1"
--mac-address="7a:07:51:53:31:21"       # MAC Address Generator
--ip="172.28.5.1"
...

Restart policies (--restart)

--restart=no                      # default, never restart automatically
--restart=always                  # always restart, including when the Docker engine starts
--restart=unless-stopped          # do not restart if the container was stopped manually
--restart=on-failure:<max retry>  # restart only on non-zero exit, up to <max retry> times
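
The policy of a running container can be checked with docker inspect; for the esearch container started with --restart=always above, this should print always:

$ docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' esearch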


Cleanup after exit

--rm



Override Dockerfile instructions at run time
*** FROM, MAINTAINER, RUN, and ADD cannot be overridden at run time

CMD (default command or options)
Ex.

ubuntu@node1:~$ docker run --rm ubuntu /bin/bash -c "pwd;ls -l"
/
total 64
drwxr-xr-x   2 root root 4096 May 25 23:11 bin
drwxr-xr-x   2 root root 4096 Apr 12 20:14 boot
drwxr-xr-x   5 root root  360 Jun 22 08:18 dev
drwxr-xr-x  45 root root 4096 Jun 22 08:18 etc
drwxr-xr-x   2 root root 4096 Apr 12 20:14 home
drwxr-xr-x   8 root root 4096 Sep 13  2015 lib
drwxr-xr-x   2 root root 4096 May 25 23:11 lib64
drwxr-xr-x   2 root root 4096 May 25 23:11 media
drwxr-xr-x   2 root root 4096 May 25 23:11 mnt
drwxr-xr-x   2 root root 4096 May 25 23:11 opt
dr-xr-xr-x 118 root root    0 Jun 22 08:18 proc
drwx------   2 root root 4096 May 25 23:11 root
drwxr-xr-x   5 root root 4096 May 25 23:11 run
drwxr-xr-x   2 root root 4096 May 27 14:14 sbin
drwxr-xr-x   2 root root 4096 May 25 23:11 srv
dr-xr-xr-x  13 root root    0 Jun 22 08:18 sys
drwxrwxrwt   2 root root 4096 May 25 23:11 tmp
drwxr-xr-x  11 root root 4096 May 27 14:14 usr
drwxr-xr-x  13 root root 4096 May 27 14:14 var


ENTRYPOINT (default command to execute at runtime)
Ex.

ubuntu@node1:~$ docker run --rm --entrypoint /bin/bash ubuntu -c "pwd; ls -l"
/
total 64
drwxr-xr-x   2 root root 4096 May 25 23:11 bin
drwxr-xr-x   2 root root 4096 Apr 12 20:14 boot
drwxr-xr-x   5 root root  380 Jun 22 08:23 dev
drwxr-xr-x  45 root root 4096 Jun 22 08:23 etc
drwxr-xr-x   2 root root 4096 Apr 12 20:14 home
drwxr-xr-x   8 root root 4096 Sep 13  2015 lib
drwxr-xr-x   2 root root 4096 May 25 23:11 lib64
drwxr-xr-x   2 root root 4096 May 25 23:11 media
drwxr-xr-x   2 root root 4096 May 25 23:11 mnt
drwxr-xr-x   2 root root 4096 May 25 23:11 opt
dr-xr-xr-x 119 root root    0 Jun 22 08:23 proc
drwx------   2 root root 4096 May 25 23:11 root
drwxr-xr-x   5 root root 4096 May 25 23:11 run
drwxr-xr-x   2 root root 4096 May 27 14:14 sbin
drwxr-xr-x   2 root root 4096 May 25 23:11 srv
dr-xr-xr-x  13 root root    0 Jun 22 08:23 sys
drwxrwxrwt   2 root root 4096 May 25 23:11 tmp
drwxr-xr-x  11 root root 4096 May 27 14:14 usr
drwxr-xr-x  13 root root 4096 May 27 14:14 var

EXPOSE (incoming ports/links)
Ex.

ubuntu@node1:~$ docker run --restart=always --name esearch -d -p 10.0.2.41:9200:9200 -p 10.0.2.41:9300:9300 --net="my-net" --mac-address="7a:07:51:53:31:21" --add-host="esearch:172.28.5.1" --ip="172.28.5.1" elasticsearch:2.3.3
f49f048b1fd6f8060c54e8307dcc921ded1d9cf2d1bd7f0e9cc5d4a6f08ddf26

*** Warning: Before running the Kibana container, the Elasticsearch container from the step above should already be started,
*** and Kibana should be placed in the same user-defined network as Elasticsearch, like this:
...
--net="my-net"
--add-host="esearch:172.28.5.1"
--mac-address="02:42:ac:1c:05:00"
--ip="172.28.5.2"
...

ubuntu@node1:~$ docker run --restart=always --name kibana-es --net="my-net" --add-host="esearch:172.28.5.1" --mac-address="02:42:ac:1c:05:00" --ip="172.28.5.2" --link esearch:elasticsearch -p 10.0.2.41:5601:5601 -d kibana:4.5.1
b9203b89184f463625b2ade973a23812436b7242a01f831ca2fb46b084b3dc37



ENV (environment variables)
Ex.

ubuntu@node1:/$ docker run -e "ORACLE_SID=orcl" -e "ORACLE_HOME=/home/oracle" --rm ubuntu /bin/bash -c 'echo $ORACLE_SID at $ORACLE_HOME'
orcl at /home/oracle
ubuntu@node1:/$ docker run -e "ORACLE_SID=orcl" -e "ORACLE_HOME=/home/oracle" --rm ubuntu /bin/bash -c export
declare -x HOME="/root"
declare -x HOSTNAME="d53267da603e"
declare -x OLDPWD
declare -x ORACLE_HOME="/home/oracle"
declare -x ORACLE_SID="orcl"
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/"
declare -x SHLVL="1"

TMPFS (mount tmpfs filesystems)
Ex.

ubuntu@node1:/$ docker run -it --tmpfs /run:rw,noexec,nosuid,size=65536k ubuntu /bin/bash
root@6f916416c2fb:/# df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
none            20510312 4999012  14446396  26% /
tmpfs             508852       0    508852   0% /dev
tmpfs             508852       0    508852   0% /sys/fs/cgroup
/dev/vda1       20510312 4999012  14446396  26% /etc/hosts
shm                65536       0     65536   0% /dev/shm
tmpfs              65536       0     65536   0% /run

VOLUME (shared filesystems)
Ex.


ubuntu@node1:/$ docker run -d  -P --name web -v /webapp training/webapp python app.py
b6797f0526869d23fdd510ced79bfb5987a2f2488b4cd41b295e6f0d02391a0f

For more detail, see the Docker documentation on managing data in containers.

USER (the default user in a container is root)
Ex. The default Ubuntu image does not have a user named "nutt":


ubuntu@node1:/$ docker run --rm -it --user="nutt:nutt"  ubuntu /bin/bash
docker: Error response from daemon: linux spec user: unable to find user nutt: no matching entries in passwd file.
ubuntu@node1:/$
ubuntu@node1:/$ docker run -it --name test ubuntu /bin/bash
root@2e6c236c261a:/# groupadd nutt
root@2e6c236c261a:/# useradd -g nutt nutt
root@2e6c236c261a:/# passwd nutt
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
root@2e6c236c261a:/# su - nutt
No directory, logging in with HOME=/
$ pwd
/
$ id
uid=1000(nutt) gid=1000(nutt) groups=1000(nutt)
$ exit
logout
root@2e6c236c261a:/# exit
exit
ubuntu@node1:/$ docker ps -a
CONTAINER ID        IMAGE                 COMMAND                  CREATED              STATUS                         PORTS                                                NAMES
2e6c236c261a        ubuntu                "/bin/bash"              About a minute ago   Exited (0) 47 seconds ago                                                           test
6f916416c2fb        ubuntu                "/bin/bash"              25 minutes ago       Exited (0) 24 minutes ago                                                           tender_heisenberg
5ac0ccac94d6        ubuntu                "/bin/bash"              25 minutes ago       Exited (0) 25 minutes ago                                                           distracted_galileo
b55279b6855d        ubuntu                "/bin/bash"              26 minutes ago       Exited (0) 26 minutes ago                                                           adoring_colden
b9203b89184f        kibana:4.5.1          "/docker-entrypoint.s"   51 minutes ago       Up 51 minutes                  10.0.2.41:5601->5601/tcp                             kibana-es
7b11297d9624        kibana                "/docker-entrypoint.s"   About an hour ago    Created                                                                             elated_kare
a5ee1e2b9cb3        kibana                "/docker-entrypoint.s"   About an hour ago    Created                                                                             angry_hopper
f49f048b1fd6        elasticsearch:2.3.3   "/docker-entrypoint.s"   About an hour ago    Up About an hour               10.0.2.41:9200->9200/tcp, 10.0.2.41:9300->9300/tcp   esearch
8b6a6f326d46        ubuntu                "/bin/bash -c 'pwd; l"   About an hour ago    Exited (0) About an hour ago                                                        sleepy_panini
2054b3f0bb24        redis                 "/bin/bash -c 'pwd; l"   About an hour ago    Exited (0) About an hour ago                                                        sleepy_jepsen
d99d05c6d6f5        redis                 "/bin/bash -c ls -l"     About an hour ago    Exited (0) About an hour ago                                                        sad_blackwell
ubuntu@node1:/$ 
ubuntu@node1:/$ docker commit test ubuntu:nutt
sha256:363a03f7553ea80d4caa06ad5815d53658577ac0b1ff840c0902552afc85d8d6
ubuntu@node1:/$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              nutt                363a03f7553e        6 seconds ago       122.3 MB
myhtop              1.0                 681356613e26        4 hours ago         7.418 MB
redis               latest              4465e4bcad80        4 days ago          185.7 MB
kibana              4.5.1               298836bc4170        12 days ago         306.6 MB
kibana              latest              298836bc4170        12 days ago         306.6 MB
java                8-jre               76fd51ceaa2e        12 days ago         312.2 MB
elasticsearch       2.3.3               15930a3e11bf        12 days ago         346.6 MB
elasticsearch       latest              15930a3e11bf        12 days ago         346.6 MB
hello-world         latest              693bce725149        2 weeks ago         967 B
alpine              latest              f70c828098f5        2 weeks ago         4.799 MB
centos              7                   904d6c400333        2 weeks ago         196.8 MB
ubuntu              16.04               2fa927b5cdd3        3 weeks ago         122 MB
ubuntu              latest              2fa927b5cdd3        3 weeks ago         122 MB
oraclelinux         7.2                 df602a268e64        6 weeks ago         276.2 MB
busybox             latest              47bcc53f74dc        3 months ago        1.113 MB
training/webapp     latest              6fae60ef3446        13 months ago       348.8 MB
ubuntu@node1:/$ docker run --rm -it --user="nutt:nutt"  ubuntu:nutt /bin/bash
nutt@eaae30b33086:/$ id
uid=1000(nutt) gid=1000(nutt) groups=1000(nutt)
nutt@eaae30b33086:/$ exit
exit

WORKDIR
Ex.

ubuntu@node1:/$ docker run --rm -it --user="nutt:nutt" -w="/var/log"  ubuntu:nutt /bin/bash
nutt@9980685d91ef:/var/log$ exit
exit






Good luck ;-)