This tutorial shows you how to load data files into Apache Druid using a remote Hadoop cluster.

For this tutorial, we'll assume that you've already completed the previous batch ingestion tutorial using Druid's native batch ingestion system and are using the micro-quickstart single-machine configuration as described in the quickstart.

This tutorial requires Docker to be installed on the tutorial machine. Once the Docker install is complete, please proceed to the next steps in the tutorial.

Build the Hadoop docker image

For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.5 cluster, which we'll use to run the batch indexing task. This Dockerfile and related files are located at quickstart/tutorial/hadoop/docker.

From the apache-druid-0.23.0 package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.5":

cd quickstart/tutorial/hadoop/docker
docker build -t druid-hadoop-demo:2.8.5 .

This will start building the Hadoop image. Once the image build is done, you should see the message "Successfully tagged druid-hadoop-demo:2.8.5" printed to the console.

Setup the Hadoop docker cluster

Create temporary shared directory

We'll need a shared folder between the host and the Hadoop container for transferring some files. Let's create some folders under /tmp; we will use these later when starting the Hadoop container:

mkdir -p /tmp/shared

On the host machine, add the following entry to /etc/hosts:

127.0.0.1 druid-hadoop-demo

Once the /tmp/shared folder has been created and the /etc/hosts entry has been added, start the Hadoop container with docker run (a sketch of this command is shown after the startup log below).

Once the container is started, your terminal will attach to a bash shell running inside the container:

Starting sshd:
18/07/26 17:27:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [druid-hadoop-demo]
druid-hadoop-demo: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-druid-hadoop-demo.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-druid-hadoop-demo.out
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-druid-hadoop-demo.out
18/07/26 17:27:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-resourcemanager-druid-hadoop-demo.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-druid-hadoop-demo.out
starting historyserver, logging to /usr/local/hadoop/logs/mapred-historyserver-druid-hadoop-demo.out
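The docker run invocation itself isn't reproduced above. As a rough sketch only, assuming the image starts its Hadoop daemons through an /etc/bootstrap.sh script and mounts the shared folder at /shared inside the container (both assumptions, not confirmed here), and omitting the long list of Hadoop/YARN/HDFS port mappings a working setup would publish, the command looks roughly like this:

docker run -it \
  -h druid-hadoop-demo \
  --name druid-hadoop-demo \
  -p 8088:8088 \
  -p 50070:50070 \
  -v /tmp/shared:/shared \
  druid-hadoop-demo:2.8.5 \
  /etc/bootstrap.sh -bash
# -h / --name: match the druid-hadoop-demo entry added to /etc/hosts above
# -p: example ports only (YARN ResourceManager UI, HDFS NameNode UI); a real setup maps many more
# -v: mounts the /tmp/shared folder created earlier (the /shared mount point is an assumption)
# /etc/bootstrap.sh -bash: assumed startup script that launches the Hadoop daemons and attaches a bash shell

Check the Dockerfile and scripts under quickstart/tutorial/hadoop/docker for the actual entrypoint and the full list of ports to publish.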