Getting Started with LangChain4j and Spring Boot

LangChain4j is an open-source library for integrating LLMs into Java applications. It is inspired by LangChain, a popular framework in the Python ecosystem, and offers similarly streamlined APIs and development workflows. You can read about LangChain4j's features and the underlying concepts on its official GitHub page.

This Spring Boot tutorial focuses on the LangChain4j chat APIs. We will get started with the basics and run a few examples to give you a high-level understanding.

1. LangChain4j API

LangChain4j is built around several core classes/interfaces designed to handle different aspects of interacting with LLMs.

1.1. Chat and Language Models

The language model is the core API for interacting with LLMs: sending prompts and receiving and parsing the responses. LangChain4j provides the following interfaces for interacting with different types of LLMs:

  • ChatLanguageModel: represents a language model with a chat interface, such as GPT-3.5. Its methods accept one or more ChatMessage messages and return an AiMessage generated by the model. Example: GPT-4.
  • StreamingChatLanguageModel: represents a language model with a chat interface that can stream a response one token at a time, allowing responses to be received in real time as the model processes the input. Example: GPT-4 Stream.
  • EmbeddingModel: represents a model that can translate text into an Embedding. Embeddings are useful for semantic search, clustering, and classification tasks. Example: Ada Embedding.
  • ImageModel: represents a model that can generate and edit images from textual descriptions. Example: DALL-E.
  • ModerationModel: represents a model that can classify text and flag harmful content. Example: Moderation.
  • ScoringModel: represents a model that scores text on criteria such as sentiment, grammar, and relevance. Example: DistilBERT (fine-tuned on SST-2).

1.2. Chat Messages

A chat message is a piece of text sent to or received from a model. LangChain4j supports the following message types:

  • SystemMessage: represents the instructions for what the LLM’s role is in this conversation, how it should behave, in what style to answer, etc.
  • UserMessage: represents a message from the user and usually contains user queries in the form of text and images.
  • AiMessage: represents a message generated by the AI in response to the UserMessage. It can contain either a plain text response or a request to execute a tool (a ToolExecutionRequest, typically an API call to fetch real-time data such as currency rates or weather).
  • ToolExecutionResultMessage: represents the result of the ToolExecutionRequest.
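To make the four roles concrete, here is a minimal stdlib-only sketch of how a conversation transcript alternates between them. The enum and record below are hypothetical stand-ins for illustration, not LangChain4j's actual classes:

```java
import java.util.List;

public class MessageRolesDemo {

  // Hypothetical stand-ins for the LangChain4j message types (illustration only)
  enum Role { SYSTEM, USER, AI, TOOL_RESULT }

  record Message(Role role, String text) { }

  public static void main(String[] args) {
    List<Message> conversation = List.of(
        new Message(Role.SYSTEM, "You are a helpful assistant."),
        new Message(Role.USER, "What is the EUR/USD rate?"),
        new Message(Role.AI, "Requesting tool: currencyRate(\"EUR\", \"USD\")"),
        new Message(Role.TOOL_RESULT, "1.09"),
        new Message(Role.AI, "The current EUR/USD rate is about 1.09."));

    // On each turn, the full transcript so far is what gets sent to the model
    conversation.forEach(m -> System.out.println(m.role() + ": " + m.text()));
  }
}
```

Note how the tool-related messages pair up: the AI's tool request is followed by a tool result, which the AI then uses to produce its final answer.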

1.3. Chat Memory

The ChatMemory represents the memory (history) of a chat conversation. In any conversation, we have to feed the model all previous messages and ensure they fit within the model's context window. Primarily acting as a container for chat messages, ChatMemory also provides additional features such as an eviction policy and persistence.

It is worth noting that LangChain4j currently offers only “memory”, not “history”. To maintain the entire history, we have to do so manually.

The two main implementations of ChatMemory interface are:

  • MessageWindowChatMemory: acts as a sliding window, retaining the N most recent messages and evicting older ones that no longer fit.
  • TokenWindowChatMemory: acts as a sliding window, retaining the N most recent tokens. It does not store partial messages, so even if only a few tokens need to be removed from a message, the complete message is evicted from memory.
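The sliding-window eviction described above can be illustrated with a toy, stdlib-only implementation. The class below is hypothetical and only mimics the behavior of MessageWindowChatMemory; it is not LangChain4j code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy sliding-window memory: keeps only the N most recent messages,
// evicting the oldest ones, similar in spirit to MessageWindowChatMemory
public class WindowMemoryDemo {

  static class WindowMemory {
    private final int capacity;
    private final Deque<String> messages = new ArrayDeque<>();

    WindowMemory(int capacity) { this.capacity = capacity; }

    void add(String message) {
      messages.addLast(message);
      while (messages.size() > capacity) {
        messages.removeFirst();   // evict the oldest message
      }
    }

    List<String> messages() { return List.copyOf(messages); }
  }

  public static void main(String[] args) {
    WindowMemory memory = new WindowMemory(3);
    for (String m : List.of("msg1", "msg2", "msg3", "msg4", "msg5")) {
      memory.add(m);
    }
    System.out.println(memory.messages());  // the two oldest messages are gone
  }
}
```

With a capacity of 3, adding five messages leaves only the last three in memory; the evicted ones would have to be persisted separately if you want a full history.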

2. Spring Boot Configuration

The first logical step is to include the necessary dependencies in the application; they are available in the Maven repository. For example, to interact with OpenAI LLMs, we need to include the ‘langchain4j-open-ai’ dependency.

<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j-open-ai</artifactId>
  <version>0.31.0</version>
</dependency>

If you are building a Spring Boot application, you can use the starter project as well. This way, Spring Boot’s autoconfiguration creates the necessary beans behind the scenes, and you can start using them directly.

<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
  <version>0.31.0</version>
</dependency>

For example, when we include the above ‘langchain4j-open-ai-spring-boot-starter’ dependency, Spring Boot automatically creates and provides the ChatLanguageModel and StreamingChatLanguageModel beans, depending on which properties are present in the application.properties file.

langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.streaming-chat-model.api-key=${OPENAI_API_KEY}

You can find all the possible auto-configured beans and their related properties in the AutoConfig.java file.

You can find the available starters on the Github page.

3. Initializing a ChatModel

Spring Boot provides a default ChatModel implementation based on the imported dependencies, and it is often good enough. We can still customize the auto-configured bean simply by defining the relevant properties. For example, we can customize the OpenAiChatModel bean with the following properties:

langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-3.5-turbo
langchain4j.open-ai.chat-model.temperature=0.7
langchain4j.open-ai.chat-model.log-requests=true
langchain4j.open-ai.chat-model.log-responses=true

Still, if we want to do it programmatically and create the bean from scratch, we can use OpenAiChatModel.builder():

@Configuration
public class LlmConfig {

  @Bean
  OpenAiChatModel openAiChatModel() {

    return OpenAiChatModel.builder()
        .apiKey(...)
        .modelName(...)
        .temperature(...)
        .logRequests(...)
        .logResponses(...)
        .build();
  }
}
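One detail worth settling early: the value passed to .apiKey() should not be hardcoded in source. A common approach is to read it from the environment, matching the ${OPENAI_API_KEY} placeholder used in the properties file. The sketch below is stdlib-only; the "demo-key" fallback is just a placeholder for illustration:

```java
// Read the API key from the environment with a placeholder fallback,
// so the real key never ends up committed in source code
public class ApiKeyDemo {
  public static void main(String[] args) {
    String apiKey = System.getenv().getOrDefault("OPENAI_API_KEY", "demo-key");
    System.out.println("key loaded: " + !apiKey.isBlank());
  }
}
```

The same String can then be passed to the builder's .apiKey() method.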

4. First Call to LLM

Once the required ChatLanguageModel bean is ready, we can call its generate() method to send a prompt to the LLM and receive the response.

@Autowired
ChatLanguageModel model;

@Bean(name = "mainApplicationRunner")
ApplicationRunner applicationRunner() {

  return args -> {
    String responseText = model.generate("Hello, how are you");
    System.out.println(responseText);
  };
}

The program output:

Hello! I'm just a computer program, so I don't have feelings, but I'm here to help you with anything you need. How can I assist you today?

If we have enabled the request and response logging in the model configuration or the properties file, we can verify the sent prompt and received response in the application logs:

2024-06-02T01:29:20.040+05:30 DEBUG 17668 --- [  restartedMain] d.a.openai4j.RequestLoggingInterceptor   : Request:
- method: POST
- url: https://api.openai.com/v1/chat/completions
- headers: [Authorization: Bearer sk-pr...W9], [User-Agent: langchain4j-openai]
- body: {
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you"
    }
  ],
  "temperature": 0.7
}

2024-06-02T01:29:21.815+05:30 DEBUG 17668 --- [  restartedMain] d.a.openai4j.ResponseLoggingInterceptor  : Response:
- status code: 200
- headers: [...HIDDEN...]
- body: {
  "id": "chatcmpl-9VPAn21JksWMomyYKZiXHf31maA4I",
  "object": "chat.completion",
  "created": 1717271961,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm just a computer program, so I don't have feelings, but I'm here to help you with anything you need. How can I assist you today?"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 35,
    "total_tokens": 47
  },
  "system_fingerprint": null
}

5. Sending System and User Prompts

Moving on, we can create the different message type objects and send them using the generate() method. In the following example, we create a system message and a user message and send both prompts to the LLM. Then we receive the LLM's response and print it to the console.

SystemMessage systemMessage = SystemMessage.from("""
    You are a helpful AI assistant that helps people find information.
    Your name is Alexa
    Start with telling your name and quick summary of answer you are going to provide in a sentence.
    Next, you should reply to the user's request. 
    Finish with thanking the user for asking question in the end.
    """);

String userMessageTxt = """
    Tell me about {{place}}.
    Write the answer briefly in form of a list.
    """;

UserMessage userMessage = UserMessage.from(userMessageTxt.replace("{{place}}", "USA"));

Response<AiMessage> response = chatLanguageModel.generate(systemMessage, userMessage);
System.out.println(response.content());

The program output:

AiMessage { text = "Hello, I'm Alexa, and I'll provide you with a brief list about the USA. Here are some key points about the United States of America:

1. The USA is a federal republic composed of 50 states.
2. It is located in North America and is the third-largest country in the world by total area.
3. The capital city is Washington, D.C.
4. The official language is English.
5. The currency used is the United States Dollar (USD).

Thank you for asking about the USA!" toolExecutionRequests = null }

If logging is enabled, we can inspect the request and response as well.

2024-06-02T01:39:41.007+05:30 DEBUG 252 --- [  restartedMain] d.a.openai4j.RequestLoggingInterceptor   : Request:
//...
- body: {
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful AI ...\n"
    },
    {
      "role": "user",
      "content": "Tell me about USA.\nWrite the answer briefly in form of a list.\n"
    }
  ],
  "temperature": 0.7
}
2024-06-02T01:39:43.491+05:30 DEBUG 252 --- [  restartedMain] d.a.openai4j.ResponseLoggingInterceptor  : Response:
- status code: 200
- headers: [...]
- body: {
  //...
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello, I'm Alexa, and I'll provide you with a brief list about the USA..."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  //...
}
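In the example above, the {{place}} placeholder was filled with a plain String.replace() call. A small helper that fills several placeholders in one pass could be sketched as below. This is a simplified, hypothetical stand-in written with the stdlib only; LangChain4j ships its own prompt-template support:

```java
import java.util.Map;

// Minimal {{variable}} template filler, illustrating the idea behind
// prompt templates (simplified sketch, not LangChain4j's implementation)
public class TemplateDemo {

  static String fill(String template, Map<String, String> variables) {
    String result = template;
    for (Map.Entry<String, String> e : variables.entrySet()) {
      result = result.replace("{{" + e.getKey() + "}}", e.getValue());
    }
    return result;
  }

  public static void main(String[] args) {
    String template = "Tell me about {{place}} in {{style}} form.";
    System.out.println(fill(template, Map.of("place", "USA", "style", "list")));
  }
}
```

This keeps the prompt text reusable: the same template can be filled with different values per request.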

6. Conclusion

In this LangChain4j tutorial, we discussed the basics and core concepts. We also learned how LangChain4j integrates with Spring Boot and how its autoconfiguration creates the necessary beans when we provide the relevant properties.

We also learned to access the initialized language model beans to interact with LLMs and to log the interactions in the application logs.

Happy Learning !!

Source Code on Github


About Us

HowToDoInJava provides tutorials and how-to guides on Java and related technologies.

It also shares the best practices, algorithms & solutions and frequently asked interview questions.