Newer instructions for Accumulo 2.0.0 are available
Running on S3 requires a new feature in Accumulo 2.0. Accumulo has a pluggable volume chooser that tells Accumulo which volume (URL) a new file should be placed on. In 2.0 the volume chooser was made aware of write ahead logs: before 2.0, when the volume chooser was asked where to put a file, it did not know whether the request was for a write ahead log. In 2.0 it does know, which allows write ahead logs to be placed on HDFS and table files on S3. This is important because S3 does not support the append and flush semantics that write ahead logs need.
First, set S3A settings in core-site.xml.
<property>
  <name>fs.s3a.access.key</name>
  <value>KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRET</value>
</property>
<!-- without this setting Accumulo tservers would have problems when trying to open lots of files -->
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>128</value>
</property>
See the S3A docs for more info. To make the hadoop command work with S3, set export HADOOP_OPTIONAL_TOOLS="hadoop-aws" in hadoop-env.sh.
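As a quick check that the S3A settings and HADOOP_OPTIONAL_TOOLS are in effect (the bucket name is a placeholder for your own), the hadoop command should now be able to list the bucket:
hadoop fs -ls s3a://<bucket>/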
Build a relocated hadoop-aws jar using the pom in this gist (see HADOOP-16080) and copy it to all nodes.
mkdir -p /tmp/haws-reloc
cd /tmp/haws-reloc
wget https://gist.githubusercontent.com/keith-turner/f6dcbd33342732e42695d66509239983/raw/714cb801eb49084e0ceef5c6eb4027334fd51f87/pom.xml
mvn package -Dhadoop.version=<your hadoop version>
# the new jar will be in target
ls target/
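One minimal way to copy the relocated jar to all nodes is sketched below; it assumes a hosts file listing every node and the /somedir destination used later in accumulo-env.sh (both are assumptions, adjust to your setup).
for host in $(cat hosts); do
  scp target/hadoop-aws-relocated*.jar ${host}:/somedir/
done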
Modify accumulo-env.sh to add the S3 jars to the classpath. The versions may differ; the following versions were included with Hadoop 3.1.1.
CLASSPATH="${CLASSPATH}:/somedir/hadoop-aws-relocated.3.1.1.jar"
CLASSPATH="${CLASSPATH}:${HADOOP_HOME}/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.271.jar"
CLASSPATH="${CLASSPATH}:${HADOOP_HOME}/share/hadoop/common/lib/commons-lang3-3.4.jar"
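To confirm the jars are actually picked up, you can print Accumulo's classpath and grep for the AWS entries (this assumes the accumulo script is on your PATH):
accumulo classpath | grep -i aws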
Set the following in accumulo.properties.
instance.volumes=hdfs://<name node>/accumulo
Run accumulo init but do not start Accumulo. After running accumulo init, configure Accumulo to store write ahead logs in HDFS by setting the following in accumulo.properties. (These settings need to be updated to reflect changes made after 2.0.0-alpha-2 in #941. They only work for alpha 2, NOT for Accumulo 2.0.0; for 2.0.0, use the new instructions.)
instance.volumes=s3a://<bucket>/accumulo,hdfs://<name node>/accumulo
general.volume.chooser=org.apache.accumulo.server.fs.PreferredVolumeChooser
general.custom.default.preferred.volumes=s3a://<bucket>/accumulo
general.custom.logger.preferred.volumes=hdfs://<name node>/accumulo
Run accumulo init --add-volumes
to initialize the S3 volume. Doing this
in two steps avoids putting any Accumulo metadata files in S3 during init.
Copy accumulo.properties
to all nodes and start Accumulo.
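Once Accumulo is running and some data has been written, you can spot check that table files land in S3 while write ahead logs stay in HDFS. This sketch assumes the default volume layout, where table files live under tables/ and logs under wal/.
hadoop fs -ls -R s3a://<bucket>/accumulo/tables | head
hadoop fs -ls hdfs://<name node>/accumulo/wal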
TODO add shell commands that put new metadata tablets in HDFS in case the metadata table splits.
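One possible approach (an unverified sketch; the table.custom.preferred.volumes property name is an assumption based on how the PreferredVolumeChooser reads per-table overrides in this alpha and may differ in other releases) is to pin the metadata and root tables to the HDFS volume from the Accumulo shell:
config -t accumulo.metadata -s table.custom.preferred.volumes=hdfs://<name node>/accumulo
config -t accumulo.root -s table.custom.preferred.volumes=hdfs://<name node>/accumulo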
These instructions are a work in progress and may not result in a stable system. I have run a 24hr test with Accumulo and S3.
I am not completely certain about this, but I don't think S3Guard is needed for regular Accumulo tables. There are two reasons I think this is so. First, each Accumulo user tablet stores its list of files in the metadata table using absolute URIs. This allows a tablet to have files on multiple DFS instances. Therefore Accumulo never does a DFS list operation to get a tablet's files; it always uses what is in the metadata table. Second, Accumulo gives each file a unique name using a counter stored in ZooKeeper, and file names are never reused.
Things are slightly different for Accumulo's metadata. User tablets store their file list in the metadata table. Metadata tablets store their file list in the root table. The root table stores its file list in DFS. Therefore it would be dangerous to place the root tablet in S3 w/o using S3Guard. That is why these instructions place Accumulo metadata in HDFS. Hopefully this configuration allows the system to be consistent w/o using S3Guard.