The LangChain4j framework was created in 2023 with a clear goal: to simplify the integration of LLMs into Java applications.
LangChain4j provides a standard way to:
- create embeddings (vectors) from a given piece of content, such as a text
- store embeddings in an embedding store
- search for similar vectors in the embedding store
- chat with LLMs
- use a chat memory to remember the context of a conversation with an LLM
This list is not exhaustive, and the LangChain4j community keeps implementing new features.
This post will cover the first main parts of the framework.
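Before diving in, it may help to see what "search for similar vectors in the embedding store" means in practice. The following is a small, library-free sketch (toy hand-written vectors, not real model output, and not the LangChain4j API): each piece of content is represented as a vector, and the store returns the entry whose vector is closest to the query, typically measured by cosine similarity:

```java
import java.util.Map;

// Library-free sketch of similarity search over embeddings.
// The vectors below are made-up toy values, not real embedding-model output.
public class SimilaritySketch {

    // Cosine similarity: dot product divided by the product of the norms.
    static double cosineSimilarity(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // A tiny in-memory "embedding store": text mapped to its vector.
        Map<String, double[]> store = Map.of(
            "space", new double[]{0.9, 0.1, 0.0},
            "cooking", new double[]{0.0, 0.2, 0.9});
        double[] query = {0.8, 0.2, 0.1}; // pretend embedding of "astronaut"

        // Return the stored entry most similar to the query vector.
        String best = store.entrySet().stream()
            .max((e1, e2) -> Double.compare(
                cosineSimilarity(query, e1.getValue()),
                cosineSimilarity(query, e2.getValue())))
            .get().getKey();

        System.out.println(best); // "space" is closest to the query
    }
}
```

A real embedding store does the same thing at scale, with indexing structures that avoid comparing the query against every stored vector.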
Adding LangChain4j OpenAI to our project
As with any Java project, it's just a matter of dependencies. Here we will use Maven, but the same could be achieved with any other dependency manager.
As a first step in the project we want to build, we will use OpenAI, so we just need to add the langchain4j-open-ai artifact:
<properties>
  <langchain4j.version>0.34.0</langchain4j.version>
</properties>
<dependencies>
  <dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>${langchain4j.version}</version>
  </dependency>
</dependencies>
For the rest of the code, we will use either our own API key, which you can get by registering for an account with OpenAI, or the one provided by the LangChain4j project for demo purposes only:
static String getOpenAiApiKey() {
  String apiKey = System.getenv(API_KEY_ENV_NAME);
  if (apiKey == null || apiKey.isEmpty()) {
    Logger.warn("Please provide your own key using the [{}] env variable", API_KEY_ENV_NAME);
    return "demo";
  }
  return apiKey;
}
We can now create an instance of our ChatLanguageModel:
ChatLanguageModel model = OpenAiChatModel.withApiKey(getOpenAiApiKey());
And finally we can ask a simple question and get back the answer:
String answer = model.generate("Who is Thomas Pesquet?");
Logger.info("Answer is: {}", answer);
The given answer might be something like:
Thomas Pesquet is a French aerospace engineer, pilot, and European Space Agency astronaut.
He was selected as a member of the European Astronaut Corps in 2009 and has since completed
two space missions to the International Space Station, including serving as a flight engineer
for Expedition 50/51 in 2016-2017. Pesquet is known for his contributions to scientific
research and outreach activities during his time in space.
If you'd like to run this code, please check out the Step1AiChatTest.java class.
Providing more context
Let's add the langchain4j artifact:
<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j</artifactId>
  <version>${langchain4j.version}</version>
</dependency>
This artifact provides a toolset that helps us build a more advanced LLM integration for our assistant. Here we will just create an Assistant interface exposing a chat method, which automatically calls the ChatLanguageModel we defined earlier:
interface Assistant {
  String chat(String userMessage);
}
We just have to ask the LangChain4j AiServices class to build an instance for us:
Assistant assistant = AiServices.create(Assistant.class, model);
And then call the chat(String) method:
String answer = assistant.chat("Who is Thomas Pesquet?");
Logger.info("Answer is: {}", answer);
This behaves the same way as before. So why did we change the code? First, it's more elegant, but more importantly, you can now give instructions to the LLM using simple annotations:
interface Assistant {
  @SystemMessage("Please answer in a funny way.")
  String chat(String userMessage);
}
This now gives:
Ah, Thomas Pesquet is actually a super secret spy disguised as an astronaut!
He's out there in space fighting aliens and saving the world one spacewalk at a time.
Or maybe he's just a really cool French astronaut who has been to the International
Space Station. But my spy theory is much more exciting, don't you think?
If you'd like to run this code, please check out the Step2AssistantTest.java class.
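Under the hood, AiServices builds a dynamic proxy for the interface. The sketch below is a deliberately simplified, library-free illustration (it defines its own SystemMessage annotation and uses a fake model function, not the real LangChain4j classes) of how such a proxy can prepend the @SystemMessage text to the user message before calling the model:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;
import java.util.function.UnaryOperator;

// Simplified, library-free illustration of the AiServices proxy mechanics.
// The real implementation is much richer; this only shows the core idea.
public class AiServicesSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @interface SystemMessage { String value(); }

    interface Assistant {
        @SystemMessage("Please answer in a funny way.")
        String chat(String userMessage);
    }

    // "model" stands in for the LLM call; a real one would send the prompt
    // to a chat model and return its answer.
    static Assistant create(Class<Assistant> type, UnaryOperator<String> model) {
        return (Assistant) Proxy.newProxyInstance(
            type.getClassLoader(), new Class<?>[]{type},
            (proxy, method, args) -> {
                // If the interface method carries a system message,
                // prepend it to the user message before calling the model.
                SystemMessage sm = method.getAnnotation(SystemMessage.class);
                String prompt = (sm != null ? sm.value() + "\n" : "") + args[0];
                return model.apply(prompt);
            });
    }

    public static void main(String[] args) {
        // Fake model that just echoes the full prompt it receives.
        Assistant assistant = create(Assistant.class, prompt -> "PROMPT=" + prompt);
        System.out.println(assistant.chat("Who is Thomas Pesquet?"));
    }
}
```

Running this prints the system message followed by the user message, which is exactly the kind of prompt assembly the framework does for us.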
Switching to another LLM
We can use the great Ollama project, which lets you run an LLM locally on your machine.
Let's add the langchain4j-ollama artifact:
<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j-ollama</artifactId>
  <version>${langchain4j.version}</version>
</dependency>
As we are running the sample code using tests, let's add Testcontainers to our project:
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>ollama</artifactId>
  <version>1.20.1</version>
  <scope>test</scope>
</dependency>
We can now start/stop Docker containers:
static String MODEL_NAME = "mistral";
static String DOCKER_IMAGE_NAME = "langchain4j/ollama-" + MODEL_NAME + ":latest";
static OllamaContainer ollama = new OllamaContainer(
    DockerImageName.parse(DOCKER_IMAGE_NAME).asCompatibleSubstituteFor("ollama/ollama"));
@BeforeAll
public static void setup() {
  ollama.start();
}

@AfterAll
public static void teardown() {
  ollama.stop();
}
We "just" have to change the model object to an OllamaChatModel instead of the OpenAiChatModel we used previously:
OllamaChatModel model = OllamaChatModel.builder()
    .baseUrl(ollama.getEndpoint())
    .modelName(MODEL_NAME)
    .build();
Note that pulling the image with its model can take some time, but after a while, you should get an answer like:
Oh, Thomas Pesquet, the man who single-handedly keeps the French space program running
while sipping on his crisp rosé and munching on a baguette! He's our beloved astronaut
with an irresistible accent that makes us all want to learn French just so we can
understand him better. When he's not floating in space, he's probably practicing his
best "je ne sais quoi" face for the next family photo. Vive le Thomas Pesquet!
🚀🌍🇫🇷 #FrenchSpaceHero
Better with memory
If we ask multiple questions, by default the system won't remember the previous questions and answers. So if we follow up the first question with "When was he born?", our application will answer:
Oh, you're asking about this legendary figure from history, huh? Well, let me tell
you a hilarious tale! He was actually born on Leap Year's Day, but only every 400
years! So, do the math... if we count backwards from 2020 (which is also a leap year),
then he was born in... *drumroll please* ...1600! Isn't that a hoot? But remember
folks, this is just a joke, and historical records may vary.
Which is nonsense. Instead, we should use Chat Memory:
ChatMemory chatMemory = MessageWindowChatMemory.withMaxMessages(10);
Assistant assistant = AiServices.builder(Assistant.class)
    .chatLanguageModel(model)
    .chatMemory(chatMemory)
    .build();
Running the same questions now gives a meaningful answer:
Oh, Thomas Pesquet, the man who was probably born before sliced bread but after dinosaurs!
You know, around the time when people started putting wheels on suitcases and calling it
a revolution. So, roughly speaking, he came into this world somewhere in the late 70s or
early 80s, give or take a year or two - just enough time for him to grow up, become an
astronaut, and make us all laugh with his space-aged antics! Isn't that a hoot?
*laughs maniacally*
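The idea behind MessageWindowChatMemory.withMaxMessages(10) can be illustrated with a small library-free sketch (a hypothetical class, not the real implementation, which also deals with message types and system messages): keep only the most recent N messages, evicting the oldest ones, so that each new request to the LLM carries the recent context along:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Library-free sketch of a message-window chat memory:
// keeps at most maxMessages entries, evicting the oldest first.
public class WindowMemorySketch {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    public WindowMemorySketch(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    public void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict the oldest message
        }
    }

    // The messages that would be sent along with the next request.
    public List<String> messages() {
        return List.copyOf(messages);
    }

    public static void main(String[] args) {
        WindowMemorySketch memory = new WindowMemorySketch(2);
        memory.add("user: Who is Thomas Pesquet?");
        memory.add("assistant: A French astronaut.");
        memory.add("user: When was he born?");
        System.out.println(memory.messages()); // only the 2 most recent remain
    }
}
```

Because the window travels with every request, the model can resolve "he" in "When was he born?" back to Thomas Pesquet.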
Conclusion
In the next post, we will discover how to ask questions of our private dataset using Elasticsearch as the embedding store. That will take our application's search to the next level.