
Commit 28f16aa

Author: Luis Sanchez
[FAB-931] Add multi-broker Kafka orderer environments
Subtask of FAB-890

- Created the bddtests/environments/ directory structure. This makes it easier to compose environments that depend on more than just a single docker-compose.yml file.
- Added docker-compose environments for the Kafka orderer with multiple brokers:
  - kafka: source for building the Kafka docker image used by the other environments.
  - orderer-1-kafka-1: 1 Kafka orderer node, 1 Kafka broker.
  - orderer-1-kafka-3: 1 Kafka orderer node, 3 Kafka brokers.
  - orderer-n-kafka-n: experimental environment where the orderer and kafka services can be scaled using:
      $ docker-compose scale kafka=n
      $ docker-compose scale orderer=m
- Changed orderer.feature to use orderer-1-kafka-1 instead of docker-compose-orderer-kafka.yml.
- Added scenario outline examples to orderer.feature that run against orderer-1-kafka-3; they are disabled in .behaverc, as they don't all succeed yet.
- Modified the "we compose" step definition to accept a directory in addition to .yml files.
- Modified `make behave-deps` to build the environments (orderer-n-kafka-n is not included since it is not used at the moment).
- Converted to the Docker Compose v2 file format.
- Fixed Composition.rebuildContainerData() to work with Docker Compose v2 environments.

Change-Id: I84d2ec959cd1d02e36293c86f05b804254e530ff
Signed-off-by: Luis Sanchez <[email protected]>
Parent: 1a2bdb4
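With these changes, the behave flow is roughly the following (a sketch; the make targets are the ones touched in the Makefile diff below):

```console
$ make behave-deps   # now also builds the kafka, orderer-1-kafka-1 and orderer-1-kafka-3 images
$ make behave        # runs the bddtests suite; the 3-broker scenarios stay filtered out by .behaverc
```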

File tree: 15 files changed, +381 −35 lines

Makefile

+7-1
@@ -140,7 +140,13 @@ unit-tests: unit-test
 docker: $(patsubst %,build/image/%/.dummy, $(IMAGES))
 native: peer orderer
 
-behave-deps: docker peer build/bin/block-listener
+BEHAVE_ENVIRONMENTS = kafka orderer-1-kafka-1 orderer-1-kafka-3
+.PHONY: behave-environments $(BEHAVE_ENVIRONMENTS)
+behave-environments: $(BEHAVE_ENVIRONMENTS)
+$(BEHAVE_ENVIRONMENTS):
+	@docker-compose --file bddtests/environments/$@/docker-compose.yml build
+
+behave-deps: docker peer build/bin/block-listener behave-environments
 behave: behave-deps
 	@echo "Running behave tests"
 	@cd bddtests; behave $(BEHAVE_OPTS)
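Since each environment name is declared `.PHONY` and gets its own rule, a single environment image can be rebuilt without pulling in the rest of the behave dependencies, for example:

```console
$ make orderer-1-kafka-3
# which is equivalent to:
$ docker-compose --file bddtests/environments/orderer-1-kafka-3/docker-compose.yml build
```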

bddtests/.behaverc

+2
@@ -10,3 +10,5 @@ tags=~@issue_767
     ~@preV1
 
     ~@FAB-314
+
+name=^((?!1 Kafka Orderer and 3 Kafka Brokers).)*$
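The added `name` option is a negative look-ahead filter, so scenarios whose names contain "1 Kafka Orderer and 3 Kafka Brokers" are skipped by default. They can still be run explicitly with behave's `--name` option (a sketch; depending on how behave merges the config and command-line filters, you may need to comment out the `name` line in `.behaverc` first):

```console
$ cd bddtests
$ behave --name "1 Kafka Orderer and 3 Kafka Brokers" orderer.feature
```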
@@ -0,0 +1 @@
+docker-compose.xml
+18
@@ -0,0 +1,18 @@
+FROM openjdk:8u111-jre
+
+ENV SCALA_VERSION=2.11 \
+    KAFKA_VERSION=0.9.0.1 \
+    KAFKA_DOWNLOAD_SHA1=FC9ED9B663DD608486A1E56197D318C41813D326
+
+RUN curl -fSL "http://www-us.apache.org/dist/kafka/0.9.0.1/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz" -o kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz \
+    && echo "${KAFKA_DOWNLOAD_SHA1}  kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz" | sha1sum -c - \
+    && tar xfz kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz -C /opt \
+    && mv /opt/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION" /opt/kafka \
+    && rm kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
+
+ADD docker-entrypoint.sh /docker-entrypoint.sh
+
+EXPOSE 9092
+
+ENTRYPOINT ["/docker-entrypoint.sh"]
+CMD ["/opt/kafka/bin/kafka-server-start.sh"]

bddtests/environments/kafka/README.md

+38
@@ -0,0 +1,38 @@
+# Apache Kafka Docker Image
+This image can be used to start Apache Kafka in a Docker container.
+
+Use the provided [`docker-compose.yml`](docker-compose.yml) file as a starting point.
+
+## Usage
+#### Start
+```console
+$ docker-compose up -d
+Creating kafka_zookeeper_1
+Creating kafka_kafka_1
+$
+```
+#### Scale
+```console
+$ docker-compose scale kafka=3
+Creating and starting kafka_kafka_2 ... done
+Creating and starting kafka_kafka_3 ... done
+$
+```
+#### Stop
+```console
+$ docker-compose stop
+Stopping kafka_kafka_3 ... done
+Stopping kafka_kafka_2 ... done
+Stopping kafka_kafka_1 ... done
+Stopping kafka_zookeeper_1 ... done
+$
+```
+## Configuration
+Edit the [`docker-compose.yml`](docker-compose.yml) file to configure.
+### server.properties
+To configure a Kafka server property, add it to the environment section of the Kafka service. Kafka properties map to environment variables as follows:
+1. Replace dots with underscores.
+2. Change to upper case.
+3. Prefix with `KAFKA_`.
+
+For example, `default.replication.factor` becomes `KAFKA_DEFAULT_REPLICATION_FACTOR`.
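The broker's entrypoint script applies the reverse mapping with the same `sed`/`tr` pipeline shown later in this commit; a quick shell sketch of that transformation:

```console
$ echo KAFKA_DEFAULT_REPLICATION_FACTOR | sed -e 's/^KAFKA_//;s/_/./g' | tr '[:upper:]' '[:lower:]'
default.replication.factor
```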
@@ -0,0 +1,12 @@
+version: '2'
+services:
+  zookeeper:
+    # Official Apache ZooKeeper image. See https://hub.docker.com/_/zookeeper/
+    image: zookeeper:3.4.9
+
+  kafka:
+    build: .
+    environment:
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    depends_on:
+      - zookeeper
@@ -0,0 +1,64 @@
+#!/usr/bin/env bash
+
+# This script will either start the kafka server, or run the user
+# specified command.
+
+# Exit immediately if a pipeline returns a non-zero status.
+set -e
+
+KAFKA_HOME=/opt/kafka
+KAFKA_EXE=${KAFKA_HOME}/bin/kafka-server-start.sh
+KAFKA_SERVER_PROPERTIES=${KAFKA_HOME}/config/server.properties
+
+# handle starting the kafka server with an option
+# (generically handled, but only --override known to me at this time)
+if [ "${1:0:1}" = '-' ]; then
+    set -- ${KAFKA_EXE} ${KAFKA_SERVER_PROPERTIES} "$@"
+fi
+
+# handle default (i.e. no custom options or commands)
+if [ "$1" = "${KAFKA_EXE}" ]; then
+
+    # add the server.properties to the command
+    set -- ${KAFKA_EXE} ${KAFKA_SERVER_PROPERTIES}
+
+    # compute the advertised host name if a command was specified
+    if [[ -z ${KAFKA_ADVERTISED_HOST_NAME} && -n ${KAFKA_ADVERTISED_HOST_NAME_COMMAND} ]] ; then
+        export KAFKA_ADVERTISED_HOST_NAME=$(eval ${KAFKA_ADVERTISED_HOST_NAME_COMMAND})
+    fi
+
+    # compute the advertised port if a command was specified
+    if [[ -z ${KAFKA_ADVERTISED_PORT} && -n ${KAFKA_ADVERTISED_PORT_COMMAND} ]] ; then
+        export KAFKA_ADVERTISED_PORT=$(eval ${KAFKA_ADVERTISED_PORT_COMMAND})
+    fi
+
+    # default to auto-set the broker id
+    if [ -z "$KAFKA_BROKER_ID" ] ; then
+        export KAFKA_BROKER_ID=-1
+    fi
+
+    # update server.properties by searching for environment variables named
+    # KAFKA_* and converting them to properties in the kafka server properties file.
+    for ENV_ENTRY in $(env | grep "^KAFKA_") ; do
+        # skip some entries that do not belong in server.properties
+        if [[ $ENV_ENTRY =~ ^KAFKA_HOME= ]] ; then continue ; fi
+        if [[ $ENV_ENTRY =~ ^KAFKA_EXE= ]] ; then continue ; fi
+        if [[ $ENV_ENTRY =~ ^KAFKA_SERVER_PROPERTIES= ]] ; then continue ; fi
+        if [[ $ENV_ENTRY =~ ^KAFKA_ADVERTISED_HOST_NAME_COMMAND= ]] ; then continue ; fi
+        if [[ $ENV_ENTRY =~ ^KAFKA_ADVERTISED_PORT_COMMAND= ]] ; then continue ; fi
+        # transform KAFKA_XXX_YYY to xxx.yyy
+        KAFKA_PROPERTY_NAME="$(echo ${ENV_ENTRY%%=*} | sed -e 's/^KAFKA_//;s/_/./g' | tr '[:upper:]' '[:lower:]')"
+        # get property value
+        KAFKA_PROPERTY_VALUE="${ENV_ENTRY#*=}"
+        # update server.properties
+        if grep -q "^\s*#\?\s*${KAFKA_PROPERTY_NAME}" ${KAFKA_SERVER_PROPERTIES} ; then
+            # the property is already defined (maybe even commented out), so edit the file
+            sed -i -e "s|^\s*${KAFKA_PROPERTY_NAME}\s*=.*$|${KAFKA_PROPERTY_NAME}=${KAFKA_PROPERTY_VALUE}|" ${KAFKA_SERVER_PROPERTIES}
+            sed -i -e "s|^\s*#\s*${KAFKA_PROPERTY_NAME}\s*=.*$|${KAFKA_PROPERTY_NAME}=${KAFKA_PROPERTY_VALUE}|" ${KAFKA_SERVER_PROPERTIES}
+        else
+            echo "${KAFKA_PROPERTY_NAME}=${KAFKA_PROPERTY_VALUE}">>${KAFKA_SERVER_PROPERTIES}
+        fi
+    done
+fi
+
+exec "$@"
@@ -0,0 +1,29 @@
+version: '2'
+services:
+  zookeeper:
+    # Official Apache ZooKeeper image. See https://hub.docker.com/_/zookeeper/
+    image: zookeeper:3.4.9
+
+  orderer0:
+    image: hyperledger/fabric-orderer
+    environment:
+      - ORDERER_GENERAL_LEDGERTYPE=ram
+      - ORDERER_GENERAL_BATCHTIMEOUT=10s
+      - ORDERER_GENERAL_BATCHSIZE=10
+      - ORDERER_GENERAL_MAXWINDOWSIZE=1000
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_RAMLEDGER_HISTORY_SIZE=100
+      - ORDERER_GENERAL_ORDERERTYPE=kafka
+      - ORDERER_KAFKA_BROKERS=[kafka0:9092]
+    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
+    command: orderer -loglevel debug -verbose true
+    depends_on:
+      - kafka0
+
+  kafka0:
+    build: ../kafka
+    environment:
+      KAFKA_BROKER_ID: 0
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    depends_on:
+      - zookeeper
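A minimal smoke test for this environment (a sketch, assuming the images were built beforehand, e.g. via `make behave-environments`):

```console
$ cd bddtests/environments/orderer-1-kafka-1
$ docker-compose up -d
$ docker-compose logs orderer0   # orderer0 is configured with ORDERER_KAFKA_BROKERS=[kafka0:9092]
$ docker-compose down
```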
@@ -0,0 +1,50 @@
+version: '2'
+services:
+  zookeeper:
+    # Official Apache ZooKeeper image. See https://hub.docker.com/_/zookeeper/
+    image: zookeeper:3.4.9
+
+  orderer0:
+    image: hyperledger/fabric-orderer
+    environment:
+      - ORDERER_GENERAL_LEDGERTYPE=ram
+      - ORDERER_GENERAL_BATCHTIMEOUT=10s
+      - ORDERER_GENERAL_BATCHSIZE=10
+      - ORDERER_GENERAL_MAXWINDOWSIZE=1000
+      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
+      - ORDERER_RAMLEDGER_HISTORY_SIZE=100
+      - ORDERER_GENERAL_ORDERERTYPE=kafka
+      - ORDERER_KAFKA_BROKERS=[kafka0:9092,kafka1:9092,kafka2:9092]
+    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
+    command: orderer -loglevel debug -verbose true
+    depends_on:
+      - kafka0
+      - kafka1
+      - kafka2
+
+  kafka0:
+    build: ../kafka
+    environment:
+      KAFKA_BROKER_ID: 0
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
+    depends_on:
+      - zookeeper
+
+  kafka1:
+    build: ../kafka
+    environment:
+      KAFKA_BROKER_ID: 1
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
+    depends_on:
+      - zookeeper
+
+  kafka2:
+    build: ../kafka
+    environment:
+      KAFKA_BROKER_ID: 2
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
+    depends_on:
+      - zookeeper
@@ -0,0 +1,25 @@
+# orderer-n-kafka-n
+A scalable kafka orderer environment.
+## Starting
+
+While you can start the environment by simply executing `docker-compose up -d`, the list of kafka brokers is computed dynamically, so it is recommended that you start the kafka nodes first and then start the orderer nodes.
+
+For example, to start an environment with 3 orderer shims and 5 kafka brokers, issue the following commands:
+
+```bash
+$ docker-compose up -d zookeeper
+$ docker-compose up -d kafka
+$ docker-compose scale kafka=5
+$ docker-compose up -d orderer
+$ docker-compose scale orderer=3
+```
+
+## Stopping
+
+While you can stop the environment by simply executing `docker-compose stop`, docker-compose does not enforce a reverse-dependency order on shutdown. For a cleaner shutdown, stop the individual service types in the following order:
+
+```bash
+$ docker-compose stop orderer
+$ docker-compose stop kafka
+$ docker-compose stop
+```
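After scaling, `docker-compose ps` shows how many kafka and orderer containers came up; until the brokers have registered, the orderers log the "No Kafka brokers registered in ZooKeeper" message from their entrypoint:

```console
$ docker-compose ps
$ docker-compose logs orderer | grep "No Kafka brokers registered"
```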
@@ -0,0 +1,21 @@
+version: '2'
+services:
+  zookeeper:
+    # Official Apache ZooKeeper image. See https://hub.docker.com/_/zookeeper/
+    image: zookeeper:3.4.9
+
+  orderer:
+    build: ./orderer
+    environment:
+      - ORDERER_GENERAL_ORDERERTYPE=kafka
+    depends_on:
+      - zookeeper
+      - kafka
+    command: -loglevel debug -verbose true
+
+  kafka:
+    build: ../kafka
+    environment:
+      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+    depends_on:
+      - zookeeper
@@ -0,0 +1,16 @@
+FROM hyperledger/fabric-orderer
+ENV ORDERER_GENERAL_LEDGERTYPE=ram \
+    ORDERER_GENERAL_BATCHTIMEOUT=10s \
+    ORDERER_GENERAL_BATCHSIZE=10 \
+    ORDERER_GENERAL_MAXWINDOWSIZE=1000 \
+    ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 \
+    ORDERER_GENERAL_LISTENPORT=5005 \
+    ORDERER_RAMLEDGER_HISTORY_SIZE=100
+WORKDIR /opt/gopath/src/github.com/hyperledger/fabric/orderer
+ENV ORDERER_GENERAL_ORDERERTYPE=kafka
+RUN apt-get update \
+    && apt-get install -y zookeeper jq \
+    && rm -rf /var/lib/apt/lists/*
+ADD docker-entrypoint.sh /docker-entrypoint.sh
+ENTRYPOINT ["/docker-entrypoint.sh"]
+CMD ["orderer"]
@@ -0,0 +1,40 @@
+#!/usr/bin/env bash
+
+# This script will either start the orderer, or run the user
+# specified command.
+
+# Exit immediately if a pipeline returns a non-zero status.
+set -e
+
+ORDERER_EXE=orderer
+
+# handle starting the orderer with an option
+if [ "${1:0:1}" = '-' ]; then
+    set -- ${ORDERER_EXE} "$@"
+fi
+
+# handle default (i.e. no custom options or commands)
+if [ "$1" = "${ORDERER_EXE}" ]; then
+
+    # get the broker list from zookeeper
+    if [ -z "$ORDERER_KAFKA_BROKERS" ] ; then
+        if [ -z "$ZOOKEEPER_CONNECT" ] ; then
+            export ZOOKEEPER_CONNECT="zookeeper:2181"
+        fi
+        ZK_CLI_EXE="/usr/share/zookeeper/bin/zkCli.sh -server ${ZOOKEEPER_CONNECT}"
+        until [ -n "$($ZK_CLI_EXE ls /brokers/ids | grep '^\[')" ] ; do
+            echo "No Kafka brokers registered in ZooKeeper. Will try again in 1 second."
+            sleep 1
+        done
+        ORDERER_KAFKA_BROKERS="["
+        ORDERER_KAFKA_BROKERS_SEP=""
+        for BROKER_ID in $($ZK_CLI_EXE ls /brokers/ids | grep '^\[' | sed 's/[][,]/ /g'); do
+            ORDERER_KAFKA_BROKERS=${ORDERER_KAFKA_BROKERS}${ORDERER_KAFKA_BROKERS_SEP}$($ZK_CLI_EXE get /brokers/ids/$BROKER_ID 2>&1 | grep '^{' | jq -j '. | .host,":",.port')
+            ORDERER_KAFKA_BROKERS_SEP=","
+        done
+        export ORDERER_KAFKA_BROKERS="${ORDERER_KAFKA_BROKERS}]"
+    fi
+
+fi
+
+exec "$@"
