Commit fdae47c

Merge pull request #5 from solsson/kafka-011-debian

Kafka 0.11 on debian:stretch

2 parents: 0e48c4b + 0314080

File tree

13 files changed: +466 −16 lines


README.md

Lines changed: 56 additions & 5 deletions

````diff
@@ -1,11 +1,62 @@
-# dockerfiles
+# Kafka docker builds
+
+Automated [Kafka](http://kafka.apache.org/) builds for [solsson/kafka](https://hub.docker.com/r/solsson/kafka/)
+and related `kafka-` images under https://hub.docker.com/u/solsson/.
 
-Nowadays we're using separate repositories for dockerization projects.
+---
+
+This repo used to contain misc dockerfiles, but they've moved to separate repositories for dockerization projects.
+We've kept the repository name to avoid breaking the automated build of solsson/kafka in Docker Hub.
 
 For legacy Dockerfiles from this repo (if you navigated here from a Docker Hub [solsson](https://hub.docker.com/u/solsson/) image),
 see https://github.com/solsson/dockerfiles/tree/misc-dockerfiles.
 
-# Kafka docker builds
+---
+
+Our kafka images are tested in production with https://github.com/Yolean/kubernetes-kafka/.
+
+You most likely need to mount your own config files, or for `./bin/kafka-server-start.sh` use overrides like:
+```
+--override zookeeper.connect=zookeeper:2181
+--override log.dirs=/var/lib/kafka/data/topics
+--override log.retention.hours=-1
+--override broker.id=0
+--override advertised.listeners=PLAINTEXT://kafka-0:9092
+```
+
+## One image to rule them all
+
+Official [Kafka distributions](http://kafka.apache.org/downloads) contain startup scripts and config for various services and clients. Thus `./kafka` produces a multi-purpose image, for direct use and for specialized docker builds.
+
+We could build specialized images like `kafka-server`, but we have two reasons not to:
+* It won't be as transparent in Docker Hub, because you can't use Automated Build without scripting.
+* In reality you'll need to control your own config anyway.
+
+### Example downstream image: Kafka Connect
+
+See ./connect-jmx
+
+### Example downstream image: Kafka Streams
+
+TODO
+
+## Building
+
+Rudimentary compliance with kubernetes-kafka is tested using a [build-contract](https://github.com/Yolean/build-contract/).
+
+Build and test using: `docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/:/source solsson/build-contract test`. However, while timing issues remain, you need some manual intervention:
 
-This repository maintains automated [Kafka](http://kafka.apache.org/) builds for https://hub.docker.com/r/solsson/kafka/
-and related `kafka-` images under https://hub.docker.com/u/solsson/, used with https://github.com/Yolean/kubernetes-kafka/.
+```bash
+compose='docker-compose -f build-contracts/docker-compose.yml'
+$compose up -d zookeeper kafka-0
+$compose logs zookeeper kafka-0
+# can we create topics using the image's provided script?
+$compose up test-topic-create
+# can a producer send messages using snappy? (had issues before with a class missing in the image)
+$compose up test-snappy-compression
+$compose up test-consume-all
+# demo the log/file aggregation image
+docker-compose -f build-contracts/docker-compose.files-aggregation.yml up
+# demo the JMX->kafka image
+docker-compose -f build-contracts/docker-compose.monitoring.yml up
+```
````
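The `--override` list above is plain repeated flags, so it can also be assembled programmatically before invoking the start script. A minimal sketch (the keys and values are the README's own examples; the final command line is only echoed here):

```shell
#!/bin/sh
# Build the --override argument string for ./bin/kafka-server-start.sh
# from key=value pairs.
overrides=""
for kv in \
  zookeeper.connect=zookeeper:2181 \
  log.dirs=/var/lib/kafka/data/topics \
  log.retention.hours=-1 \
  broker.id=0 \
  advertised.listeners=PLAINTEXT://kafka-0:9092
do
  overrides="$overrides --override $kv"
done

# Full invocation would be:
#   ./bin/kafka-server-start.sh config/server.properties $overrides
echo "$overrides"
```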
build-contracts/docker-compose.files-aggregation.yml (filename inferred from the README's files-aggregation demo)

Lines changed: 49 additions & 0 deletions

```yaml
version: '2.0'
services:

  zookeeper:
    build: ../kafka
    entrypoint: ./bin/zookeeper-server-start.sh
    command:
      - config/zookeeper.properties

  kafka-0:
    build: ../kafka
    links:
      - zookeeper
    entrypoint: ./bin/kafka-server-start.sh
    command:
      - config/server.properties
      - --override
      - zookeeper.connect=zookeeper:2181
      - --override
      - broker.id=0
      - --override
      - advertised.listeners=PLAINTEXT://kafka-0:9092

  connect-files:
    build: ../connect-files
    labels:
      com.yolean.build-target: ""
    links:
      - kafka-0

  test-connect-files-real-logs:
    build: ../connect-files
    links:
      - kafka-0
    volumes:
      - /var/log:/logs

  test-consume-files:
    image: solsson/kafkacat@sha256:1266d140c52cb39bf314b6f22b6d7a01c4c9084781bc779fdfade51214a713a8
    labels:
      com.yolean.build-contract: ""
    command:
      - -b
      - kafka-0:9092
      - -t
      - files-000
      - -C
      - -o
      - beginning
```
build-contracts/docker-compose.monitoring.yml (filename inferred from the README's monitoring demo)

Lines changed: 77 additions & 0 deletions

```yaml
version: '2.0'
services:

  zookeeper:
    build: ../kafka
    entrypoint: ./bin/zookeeper-server-start.sh
    command:
      - config/zookeeper.properties

  kafka-0:
    build: ../kafka
    links:
      - zookeeper
    environment:
      - JMX_PORT=5555
    expose:
      - '5555'
    entrypoint: ./bin/kafka-server-start.sh
    command:
      - config/server.properties
      - --override
      - zookeeper.connect=zookeeper:2181
      - --override
      - broker.id=0
      - --override
      - advertised.listeners=PLAINTEXT://kafka-0:9092

  prometheus-jmx-exporter:
    build: ../prometheus-jmx-exporter
    labels:
      com.yolean.build-target: ""
    links:
      - kafka-0
    # patch the config before start, as the image is designed for use with local JMX (same k8s pod)
    entrypoint: /bin/bash
    command:
      - -c
      - >
        sed -i 's|127.0.0.1|kafka-0|' example_configs/kafka-prometheus-monitoring.yml;
        cat example_configs/kafka-prometheus-monitoring.yml;
        java -jar jmx_prometheus_httpserver.jar
        5556 example_configs/kafka-prometheus-monitoring.yml

  test-metrics-export:
    image: solsson/curl@sha256:8b0927b81d10043e70f3e05e33e36fb9b3b0cbfcbccdb9f04fd53f67a270b874
    labels:
      com.yolean.build-contract: ""
    command:
      - --fail-early
      - --retry
      - '10'
      - --retry-delay
      - '3'
      - --retry-connrefused
      - http://prometheus-jmx-exporter:5556/metrics

  connect-jmx:
    build: ../connect-jmx
    labels:
      com.yolean.build-target: ""
    links:
      - kafka-0

  # TODO starts too fast, gets % KC_ERROR: Failed to query metadata for topic jmx-test: Local: Broker transport failure
  # needs to retry until kafka+topic exists
  test-jmx:
    image: solsson/kafkacat@sha256:1266d140c52cb39bf314b6f22b6d7a01c4c9084781bc779fdfade51214a713a8
    labels:
      com.yolean.build-contract: ""
    command:
      - -b
      - kafka-0:9092
      - -t
      - jmx-test
      - -C
      - -o
      - beginning
```
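The `prometheus-jmx-exporter` service patches its example config at startup because the image assumes JMX on localhost; the patch itself is a one-line `sed` hostname rewrite. A sketch of that step in isolation (the `hostPort` line stands in for the real example config):

```shell
#!/bin/sh
# The exporter image ships a config targeting local JMX (127.0.0.1);
# pointing it at the kafka-0 container is a plain hostname rewrite.
set -e
f=$(mktemp)
printf 'hostPort: 127.0.0.1:5555\n' > "$f"
sed -i 's|127.0.0.1|kafka-0|' "$f"
cat "$f"   # prints: hostPort: kafka-0:5555
```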

build-contracts/docker-compose.yml

Lines changed: 69 additions & 0 deletions

```yaml
version: '2.0'
services:

  zookeeper:
    build: ../kafka
    entrypoint: ./bin/zookeeper-server-start.sh
    command:
      - config/zookeeper.properties

  kafka-0:
    build: ../kafka
    image: solsson/kafka
    labels:
      com.yolean.build-target: ""
    links:
      - zookeeper
    entrypoint: ./bin/kafka-server-start.sh
    command:
      - config/server.properties
      - --override
      - zookeeper.connect=zookeeper:2181
      - --override
      - broker.id=0
      # unlike Kubernetes StatefulSet, compose gives containers a random hostname (leading to redirects to a hex name)
      - --override
      - advertised.listeners=PLAINTEXT://kafka-0:9092

  test-topic-create:
    build: ../kafka
    labels:
      com.yolean.build-contract: ""
    links:
      - kafka-0
    entrypoint: ./bin/kafka-topics.sh
    command:
      - --zookeeper
      - zookeeper:2181
      - --create
      - --topic
      - test-topic-create
      - --partitions
      - '1'
      - --replication-factor
      - '1'

  test-snappy-compression:
    image: solsson/kafkacat@sha256:1266d140c52cb39bf314b6f22b6d7a01c4c9084781bc779fdfade51214a713a8
    labels:
      com.yolean.build-contract: ""
    entrypoint: /bin/sh
    command:
      - -exc
      - sleep 5; echo "Message from $${HOSTNAME} at $$(date)" | kafkacat -z snappy -b kafka-0:9092 -t test1 -P

  # TODO starts too fast, gets % KC_ERROR: Failed to query metadata for topic test1: Local: Broker transport failure
  # needs to retry until kafka+topic exists
  test-consume-all:
    image: solsson/kafkacat@sha256:1266d140c52cb39bf314b6f22b6d7a01c4c9084781bc779fdfade51214a713a8
    labels:
      com.yolean.build-contract: ""
    command:
      - -b
      - kafka-0:9092
      - -t
      - test1
      - -C
      - -o
      - beginning
      - -e
```
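The TODO comments note that the kafkacat test services race the broker and topic creation. One common workaround (a sketch, not part of this commit) is a bounded retry wrapper around the client command:

```shell
#!/bin/sh
# Bounded retry: re-run a command until it succeeds, up to a limit,
# so consumers don't fail just because the broker isn't up yet.
retry() {
  attempts=$1; shift
  i=1
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    echo "attempt $i failed; retrying in 3s" >&2
    sleep 3
    i=$((i + 1))
  done
}

# e.g.: retry 10 kafkacat -b kafka-0:9092 -t test1 -C -o beginning -e
retry 10 true
```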

connect-files/Dockerfile

Lines changed: 12 additions & 0 deletions

```dockerfile
FROM solsson/kafka:0.11.0.0

COPY worker.properties ./config/
COPY connect-files.sh ./bin/

ENV FILES_LIST_CMD="find /logs/ -name *.log"

# Set up some sample logs
RUN mkdir /logs/; \
  echo "Mount /logs and/or change FILES_LIST_CMD (currently '$FILES_LIST_CMD') to read real content instead" > /logs/samplefile1.log;

ENTRYPOINT ["./bin/connect-files.sh"]
```
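`FILES_LIST_CMD` is expanded and executed verbatim by the entrypoint script, so overriding it just means supplying a different command string. A sketch of how the default behaves (a temp dir stands in for `/logs`):

```shell
#!/bin/sh
# FILES_LIST_CMD is run via $($FILES_LIST_CMD) in connect-files.sh.
set -e
dir=$(mktemp -d)
touch "$dir/app.log" "$dir/other.txt"
cd "$dir"

# Mirrors the image default; note the unquoted *.log is glob-expanded by the
# shell against the current directory before find ever sees it (here it
# expands to app.log, which find then matches by name).
FILES_LIST_CMD="find $dir -name *.log"
FILES=$($FILES_LIST_CMD)

echo "$FILES"   # only the .log file is listed
```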

connect-files/connect-files.sh

Lines changed: 22 additions & 0 deletions

```bash
#!/bin/bash
set -e

FILES=$($FILES_LIST_CMD)

id=0
connectors=""
for FILE in $FILES; do
  ((++id))
  echo "$id: $FILE"
  cat <<HERE > ./config/connect-file-source-$id.properties
name=local-file-source-${id}
connector.class=FileStreamSource
tasks.max=1
file=${FILE}
topic=files-000
HERE

  connectors="$connectors ./config/connect-file-source-$id.properties"
done

./bin/connect-standalone.sh ./config/worker.properties $connectors
```
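The heredoc above writes one connector config per discovered path. Generating a single config in isolation shows what the standalone worker is handed (sketch; a temp dir and the image's sample path stand in for `./config` and a real log file):

```shell
#!/bin/sh
# Reproduce the per-file config generation from connect-files.sh
# for one file, writing into a temp dir instead of ./config.
set -e
conf=$(mktemp -d)
id=1
FILE=/logs/samplefile1.log

cat <<HERE > "$conf/connect-file-source-$id.properties"
name=local-file-source-${id}
connector.class=FileStreamSource
tasks.max=1
file=${FILE}
topic=files-000
HERE

cat "$conf/connect-file-source-$id.properties"
```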

connect-files/worker.properties

Lines changed: 48 additions & 0 deletions

```properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=kafka-0:9092

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# The internal converter used for offsets and config data is configurable and must be specified, but most users will
# always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=
```
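This worker enables `schemas.enable` for both converters, so the JsonConverter wraps each record in a schema/payload envelope rather than storing the bare value. A sketch of the resulting wire format for one FileStreamSource line (the payload text is illustrative):

```shell
#!/bin/sh
# JsonConverter with schemas.enable=true produces a schema/payload envelope;
# FileStreamSource values are plain (non-optional) strings, one per file line.
payload='some line from a mounted log file'
printf '{"schema":{"type":"string","optional":false},"payload":"%s"}\n' "$payload"
# With schemas.enable=false the topic would carry just the bare JSON string.
```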

connect-jmx/Dockerfile

Lines changed: 31 additions & 0 deletions

```dockerfile
FROM solsson/kafka:0.11.0.0

# note: hyphens are not valid in shell variable names; use underscores so the
# reference in the RUN step below actually expands
ENV SRIJITHS_KAFKA_CONNECTORS_VERSION=dc0a7122650e697d3ae97c970a4785bbed949479

RUN set -ex; \
  buildDeps='curl ca-certificates'; \
  apt-get update && apt-get install -y $buildDeps --no-install-recommends; \
  \
  MAVEN_VERSION=3.5.0 PATH=$PATH:$(pwd)/maven/bin; \
  mkdir ./maven; \
  curl -SLs https://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar -xzf - --strip-components=1 -C ./maven; \
  mvn --version; \
  \
  mkdir ./kafka-connectors; \
  cd ./kafka-connectors; \
  curl -SLs https://github.com/srijiths/kafka-connectors/archive/$SRIJITHS_KAFKA_CONNECTORS_VERSION.tar.gz \
    | tar -xzf - --strip-components=1 -C ./; \
  mvn clean install; \
  cd ..; \
  mv ~/.m2/repository/com/sree/kafka/kafka-connect-jmx/0.0.1/kafka-connect-jmx-0.0.1-jar-with-dependencies.jar ./libs/; \
  rm -rf ./kafka-connectors; \
  rm -rf ./maven ~/.m2; \
  \
  apt-get purge -y --auto-remove $buildDeps; \
  rm -rf /var/lib/apt/lists/*; \
  rm /var/log/dpkg.log /var/log/apt/*.log

COPY *.properties ./config/

ENTRYPOINT ["./bin/connect-standalone.sh"]
CMD ["./config/worker.properties", "./config/connect-jmx.properties"]
```
