Amazon ElasticSearch Java Integration Testing

Jaime Casero
6 min read · Feb 3, 2021


Amazon offers an Elasticsearch cloud service. As you may know, the Amazon offering is based on a fork called Open Distro, created after some issues with the original product. This Amazon product has its own lifecycle, with version tags different from those of the original Elasticsearch product. So when it comes to testing your Java client code against the server to check the integration, it is not easy to come up with an environment that closely simulates what is happening with the actual Amazon product.

This article offers a working example of how to do integration testing for your Java clients as part of your project build, detecting integration issues before even deploying to the Amazon environment. This allows your development team to identify issues very early in the process, and to reproduce production issues in the development environment for quick fixing.

Dependencies

Let’s start with the dependencies list:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.4.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.15.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.10.2</version>
</dependency>

We are using JUnit 5 here, but any other version, or a different testing framework such as TestNG, would work; the choice is not critical for this setup.

Then we need Testcontainers, the basic testing tool required here to work with Docker images and bootstrap the Amazon Elasticsearch server for testing. For further Testcontainers requirements, check the Testcontainers documentation.

Finally, the Java Elasticsearch high-level REST client dependency is used to send queries to the server.

Test Skeleton

import org.apache.commons.io.IOUtils;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.entity.ContentType;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.nio.entity.NStringEntity;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.GetIndexTemplatesRequest;
import org.elasticsearch.client.indices.GetIndexTemplatesResponse;
import org.elasticsearch.common.xcontent.XContentType;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.utility.DockerImageName;

import javax.net.ssl.KeyManager;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

public class OpendistroCompTest {

    public static final DockerImageName OPENDISTRO_IMAGE =
            DockerImageName.parse("amazon/opendistro-for-elasticsearch:1.11.0");

    private static final String ELASTICSEARCH_USERNAME = "admin";
    private static final String ELASTICSEARCH_PASSWORD = "admin";

    // The container lifecycle is managed manually in @BeforeAll/@AfterAll,
    // since JUnit 4 rules like @ClassRule are not applied by JUnit 5.
    public static GenericContainer<?> container;

    private static RestHighLevelClient esClient;
    private static RestClient restClient;

    @Test
    public void checkServerHealth() throws Exception {
        // performRequest throws ResponseException on a non-2xx status,
        // which fails the test.
        Response response = restClient.performRequest(new Request("GET", "/_cluster/health"));
    }

    // Trust-all manager: the certificate shipped with the local image is not
    // bound to the container address, so we skip validation for testing only.
    private static class DefaultTrustManager implements X509TrustManager {

        @Override
        public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {}

        @Override
        public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {}

        @Override
        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0];
        }
    }

    @BeforeAll
    public static void prepareESContainer() throws Exception {
        // Configure the SSLContext with a trust-all manager, because the
        // local Elasticsearch certificate is not bound to the proper IP.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(new KeyManager[0], new TrustManager[] {new DefaultTrustManager()}, new SecureRandom());
        SSLContext.setDefault(ctx);

        // Start the container. This step might take some time...
        container = new GenericContainer<>(OPENDISTRO_IMAGE)
                .withExposedPorts(9200, 9600)
                .withEnv("discovery.type", "single-node")
                .waitingFor(Wait.forLogMessage(".*Hot-reloading of audit configuration is enabled.*\\n", 1));
        container.start();

        String hostAddr = container.getContainerInfo().getNetworkSettings().getIpAddress();
        HttpHost httpHost = new HttpHost(hostAddr, 9200, "https");

        // Create the secured clients.
        final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(ELASTICSEARCH_USERNAME, ELASTICSEARCH_PASSWORD));

        RestClientBuilder builder = RestClient.builder(httpHost).setHttpClientConfigCallback(
                httpClientBuilder -> {
                    httpClientBuilder.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE);
                    return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                });
        esClient = new RestHighLevelClient(builder);

        restClient = RestClient.builder(httpHost).setHttpClientConfigCallback(
                httpClientBuilder -> {
                    httpClientBuilder.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE);
                    return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                }).build();
    }

    @AfterAll
    public static void cleanUp() throws Exception {
        // Stop the clients.
        esClient.close();
        restClient.close();

        // Stop the container.
        container.stop();
        container.close();
    }
}

As you can see, we are using the Amazon Docker image. In our case we use version 1.11.0, which seems to match the one we get on the Amazon Cloud with version 7.9. Amazon provides a FAQ, but in my opinion it is not fully comprehensive on versioning. To check version matching, just send an HTTP GET to the Elasticsearch root resource and you will get something like this:

curl -XGET https://172.17.0.3:9200 -u admin:admin --insecure
{
  "name" : "4c3f689ddd76",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "n96mfPN3QYOx3kPb2NdcWg",
  "version" : {
    "number" : "7.9.1",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "083627f112ba94dffc1232e8b42b73492789ef91",
    "build_date" : "2020-09-01T21:22:21.964974Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Properties worth mentioning are:

  • number: the Amazon product version. It should correlate with the one you set in the Amazon Console for your Elasticsearch domain.
  • lucene_version: the version of the internal Lucene search engine used by the server.
  • minimum_wire_compatibility_version: the minimum version other nodes in the cluster may run and still coordinate properly.
  • minimum_index_compatibility_version: the same idea, but for reading index files written by an older node.
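These compatibility bounds can be asserted mechanically in a test. Below is a minimal sketch of a dotted-version comparison helper for that purpose; it is a plain illustration, not part of the Elasticsearch API, and it assumes purely numeric version components (a suffix like `6.0.0-beta1` would need extra handling):

```java
// Sketch: numeric comparison of dotted version strings, useful for asserting
// that a server version falls within the wire/index compatibility bounds
// reported by the root endpoint. Illustration only, not an Elasticsearch API.
public class VersionCompare {

    static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int va = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int vb = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (va != vb) return Integer.compare(va, vb);
        }
        return 0;
    }

    public static void main(String[] args) {
        // The server reports 7.9.1 with minimum_wire_compatibility_version 6.8.0:
        System.out.println(compare("7.9.1", "6.8.0") >= 0); // true
    }
}
```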

In the “prepareESContainer” @BeforeAll method we prepare the container and start it. As explained in the Docker image readme, we expose ports 9200 and 9600. We also set an environment variable to run a single-node server:

container = new GenericContainer<>(OPENDISTRO_IMAGE)
        .withExposedPorts(9200, 9600)
        .withEnv("discovery.type", "single-node")
        .waitingFor(Wait.forLogMessage(".*Hot-reloading of audit configuration is enabled.*\\n", 1));
container.start();

Notice we added a condition to wait for a log message printed by the server. This message appears once the API interface is ready to receive requests.
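One detail worth knowing: the pattern passed to Wait.forLogMessage is a regular expression matched against each whole log line, which is why it is wrapped in `.*` and ends with `\\n`. A quick standalone check of the pattern (the sample log line is an assumption about what the server prints; only the message text matters):

```java
// Sketch: Wait.forLogMessage matches each log line against the regex as a
// whole line, so the pattern needs a leading/trailing .* and a final \n.
public class WaitPatternCheck {
    // The same pattern string we pass to Wait.forLogMessage in the test setup.
    static final String PATTERN = ".*Hot-reloading of audit configuration is enabled.*\\n";

    static boolean matchesLine(String logLine) {
        return logLine.matches(PATTERN);
    }

    public static void main(String[] args) {
        // Assumed shape of the server log line; the important part is the message.
        String sample = "[INFO ] Hot-reloading of audit configuration is enabled\n";
        System.out.println(matchesLine(sample)); // true
    }
}
```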

Then we go on by creating the Java clients. You will notice there are some security tweaks to allow access to the local server. If certificate validation is critical for your testing, you will need to figure out how to access the local instance using the proper hostname defined in the server certificate, so that they match:

RestClientBuilder builder = RestClient.builder....

The presented test is as simple as sending a request to the health endpoint:

@Test
public void checkServerHealth() throws Exception {
    // performRequest throws ResponseException on a non-2xx status,
    // which fails the test.
    Response response = restClient.performRequest(new Request("GET", "/_cluster/health"));
}
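The health endpoint returns a JSON body whose `status` field is `green`, `yellow`, or `red`; on a single-node cluster you will typically see `green` or `yellow`, and you could extend the test to assert on it. A minimal sketch of extracting the field, kept dependency-free with naive string handling (the sample payload is an assumption about the response shape):

```java
// Sketch: pull the "status" field out of a /_cluster/health response body
// so a test can assert the cluster is usable (green or yellow).
// The sample payload is an assumption about the response shape.
public class HealthStatus {

    static String extractStatus(String healthBody) {
        int idx = healthBody.indexOf("\"status\"");
        int firstQuote = healthBody.indexOf('"', idx + "\"status\"".length());
        int secondQuote = healthBody.indexOf('"', firstQuote + 1);
        return healthBody.substring(firstQuote + 1, secondQuote);
    }

    public static void main(String[] args) {
        String sample = "{\"cluster_name\":\"docker-cluster\",\"status\":\"yellow\",\"number_of_nodes\":1}";
        System.out.println(extractStatus(sample)); // yellow
    }
}
```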

Of course, after testing we clean up all the resources as usual:

@AfterAll
public static void cleanUp() throws Exception {
    // Stop the clients.
    esClient.close();
    restClient.close();

    // Stop the container.
    container.stop();
    container.close();
}

You are basically ready to add your own tests, so go ahead and enjoy. Cheers!

Index Management and Data population

Your test suite may involve index management, queries, or both. In our case we wanted to test the queries in our Java Lambda code, but this requires actual indexes and data in the server to exercise them. I’m sharing some tools we use to create indexes and templates, and to load some data.

private static String loadResource(String name) throws IOException {
    final InputStream data = OpendistroCompTest.class.getResourceAsStream(name);
    return IOUtils.toString(data, StandardCharsets.UTF_8);
}

private static void createIndex(String name) throws IOException {
    CreateIndexRequest request = new CreateIndexRequest(name);
    esClient.indices().create(request, RequestOptions.DEFAULT);
}

private static void addDoc(String index, String dataFile) throws Exception {
    // Data files contain one JSON document per segment, separated by ';'.
    String jsonString = loadResource(dataFile);
    String[] docs = jsonString.split(";");

    BulkRequest request = new BulkRequest();
    for (String doc : docs) {
        IndexRequest iRequest = new IndexRequest(index);
        iRequest.source(doc, XContentType.JSON);
        request.add(iRequest);
    }

    BulkResponse bulkResponse = esClient.bulk(request, RequestOptions.DEFAULT);
    if (bulkResponse.hasFailures()) {
        throw new IllegalStateException(bulkResponse.buildFailureMessage());
    }
}

private static void prepareIndexes(GenericContainer container) throws Exception {
    // Create the index templates.
    for (String s : templateNames) {
        String template = loadResource("/" + s + ".txt");

        Request request = new Request("PUT", "/_template/" + s);
        NStringEntity entity = new NStringEntity(template, ContentType.APPLICATION_JSON);
        request.setEntity(entity);
        Response response = restClient.performRequest(request);

        // Sanity check: the template should now be visible to the client.
        GetIndexTemplatesRequest getRequest = new GetIndexTemplatesRequest(s);
        GetIndexTemplatesResponse getResponse = esClient.indices().getIndexTemplate(getRequest, RequestOptions.DEFAULT);
    }

    // Create the indexes.
    for (String name : indexNames) {
        createIndex(name);
    }
}

private static void deleteIndex(String name) throws IOException {
    DeleteIndexRequest request = new DeleteIndexRequest(name);
    esClient.indices().delete(request, RequestOptions.DEFAULT);
}
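The data files used by addDoc above follow a simple home-grown convention: one JSON document per segment, separated by semicolons, which the method splits before building the bulk request. A quick illustration of that splitting step (the sample content and field name are made up):

```java
// Sketch: the data-file convention used by addDoc — JSON documents separated
// by ';' — split into individual bulk-index payloads. Note this naive split
// breaks if a document contains ';' inside a string value.
public class DataFileSplit {

    static String[] splitDocs(String fileContent) {
        return fileContent.split(";");
    }

    public static void main(String[] args) {
        // Made-up sample content in the same format addDoc expects.
        String fileContent = "{\"caller\":\"alice\"};{\"caller\":\"bob\"}";
        String[] docs = splitDocs(fileContent);
        System.out.println(docs.length); // 2
    }
}
```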

We invoke “prepareIndexes” from the @BeforeAll setup method. This allows us to create index templates and indexes, and then load some actual data:

prepareIndexes(container);

for (String s : dataNames) {
    addDoc(s, "/" + s + ".txt");
}

The actual index templates and the data files containing documents live in the test resources directory, and we load them through the classloader.
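For reference, an index template file under src/test/resources might look like the following. This is a minimal hypothetical example for the legacy `_template` API used above; the index pattern, settings, and field names are assumptions, not the actual templates from our project:

```json
{
  "index_patterns": ["cdr-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "properties": {
      "caller": { "type": "keyword" },
      "timestamp": { "type": "date" }
    }
  }
}
```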

Conclusion

This small test environment should allow you to test any interaction with the server and check that your client code behaves properly. We hope this article helps other teams improve their process by adding integration tests for their Java Amazon Elasticsearch clients.

This solution was a combined effort from the Core Network team at Telestax, so the achievement belongs to the whole team. Kudos to the team!!


Written by Jaime Casero

Software Engineer with 20 years experience in the Telco sector. Currently working at 1nce.
