Requirements
Before running elasticsearch-hadoop, please check the requirements below. This is all the more important when deploying elasticsearch-hadoop across a cluster, where the software on some machines might be slightly out of sync. While elasticsearch-hadoop tries its best to fall back gracefully and validate its environment, a quick sanity check, especially during upgrades, can save you a lot of headaches.
Make sure to verify all nodes in a cluster when checking the version of a certain artifact.
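When many machines are involved, repeating these checks by hand quickly becomes tedious. As a minimal sketch (assuming passwordless SSH and a hypothetical nodes.txt file listing the cluster hosts), the JDK check shown below can be run against every node from a single shell:

$ for node in $(cat nodes.txt); do echo "== $node =="; ssh "$node" 'java -version' 2>&1 | head -1; done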
elasticsearch-hadoop adds no extra requirements to Hadoop (or the various libraries built on top of it, such as Pig) or Elasticsearch; however, as a rule of thumb, do use the latest stable version of said library (checking its compatibility with Hadoop and the JDK, where applicable).
JDK
JDK level 8 (at least u20 or higher) is required. An up-to-date support matrix for Elasticsearch is available here. Do note that the JVM version is critical for a stable environment, as an incorrect version can corrupt the data underneath, as explained in this blog post.
One can check the available JDK version from the command line:
$ java -version
java version "1.8.0_45"
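Keep in mind that the java binary on the PATH is not necessarily the one Hadoop uses, since hadoop-env.sh can point JAVA_HOME elsewhere. The snippet below compares the two (the path and versions shown are purely illustrative):

$ echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk
$ $JAVA_HOME/bin/java -version
java version "1.8.0_45"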
Elasticsearch
We highly recommend using the latest Elasticsearch (currently 7.17.25). While elasticsearch-hadoop maintains backwards compatibility with previous versions of Elasticsearch, we strongly recommend using its latest, stable release. You can find a matrix of supported versions here.
The Elasticsearch version is shown in its folder name:
$ ls
elasticsearch-7.17.25
If Elasticsearch is running (locally or remotely), one can find out its version through REST:
$ curl -XGET http://localhost:9200
{
  "status" : 200,
  "name" : "Dazzler",
  "version" : {
    "number" : "7.17.25",
    ...
  },
  "tagline" : "You Know, for Search"
}
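Since the version should match on all nodes, one can also ask the cluster itself to report the version of every node through the _cat API (node names below are illustrative):

$ curl -XGET 'http://localhost:9200/_cat/nodes?v&h=name,version'
name    version
node-1  7.17.25
node-2  7.17.25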
Hadoop
elasticsearch-hadoop is compatible with Hadoop 2 and Hadoop 3 (ideally the latest stable version). It is tested daily against Apache Hadoop, but any distro compatible with Apache Hadoop should work just fine.
To check the version of Hadoop, one can refer to its folder or jars (which contain the version in their names) or use the command line:
$ bin/hadoop version
Hadoop 3.3.1
Apache Hive
Apache Hive 0.10 or higher is required. We recommend using the latest release of Hive (currently 2.3.8).
One can find out the Hive version from its folder name or from the command line:
$ bin/hive --version
Hive version 2.3.8
Apache Pig
Pig 0.10.0 or higher is required. We recommend using the latest release of Pig (currently 0.15.0).
In a similar fashion, the Pig version can be discovered from its folder path or through the command line:
$ bin/pig -i
Apache Pig version 0.15.0
Apache Spark
Spark 1.3.0 or higher is required. We recommend using the latest release of Spark (currently 3.2.0). As elasticsearch-hadoop provides native integration with Apache Spark (which is recommended), it does not matter which binary one uses. The same applies when using the Hadoop layer to integrate the two, as elasticsearch-hadoop supports the majority of Hadoop distributions out there.
The Spark version can typically be discovered by looking at its folder name:
$ pwd
/libs/spark/spark-3.2.0-bin-XXXXX
or by running its shell:
$ bin/spark-shell
...
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.2.0
      /_/
...
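The Scala version a given Spark binary was built against also matters when choosing the matching elasticsearch-hadoop artifact (see Apache Spark SQL below). Running spark-submit --version prints both; the output below is abbreviated and the versions shown are merely illustrative:

$ bin/spark-submit --version
...
   version 3.2.0
Using Scala version 2.12.15, OpenJDK 64-Bit Server VM, 1.8.0_312
...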
Apache Spark SQL
If planning on using Spark SQL, make sure to add the appropriate Spark SQL jar as a dependency. While it is part of the Spark distribution, it is not part of the Spark core jar but rather has its own jar. Thus, when constructing the classpath, make sure to include spark-sql-<scala-version>.jar or the Spark assembly: spark-assembly-3.2.0-<distro>.jar
elasticsearch-hadoop supports Spark SQL 1.3 through 1.6, Spark SQL 2.x, and Spark SQL 3.x. Spark SQL 2.x on Scala 2.11 is supported through the main elasticsearch-hadoop jar. Since Spark 1.x, 2.x, and 3.x are not compatible with each other, and neither are the Scala versions, elasticsearch-hadoop provides multiple different artifacts. Choose the jar appropriate for your Spark and Scala version. See the Spark chapter for more information.
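As an illustration only (the exact artifact name depends on your Spark and Scala version, so verify the coordinates against the Spark chapter before relying on them), a Spark 3.x / Scala 2.12 deployment would typically pull in the dedicated connector when starting the shell:

$ bin/spark-shell --packages org.elasticsearch:elasticsearch-spark-30_2.12:7.17.25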
Apache Storm
Storm 1.0.0 or higher is required. Do note that Storm 1.0.0 broke backwards compatibility with previous versions (by changing the package name); however, upgrading is easy and recommended. We recommend using the latest release of Storm (currently 1.0.1).
One can discover the Storm version by looking at its folder or by invoking the command:
$ bin/storm version
1.0.1