
Assembling Labeled Instructions

00:00 Now you’ve seen that you can send simple prompts, just plain text, to chat models using LangChain, and that works pretty straightforwardly, but that’s not where the power lies.

00:10 You’ve seen that the reply was an AIMessage. So messages are another thing that you can send to a model. And messages can have roles such as SystemMessage, HumanMessage, and AIMessage.

00:21 where the SystemMessage gives instructions on how the model should behave, HumanMessage is usually user input, and the AIMessage is then the response of the model.

00:29 You’ve already seen the AIMessage. The cool thing is that you can combine a couple of these labeled classes into one prompt that you then send off.

00:37 Let’s take a look at that in the REPL again. I’ll need to import the message types. So I’m going to say from langchain.schema

00:49 .messages import HumanMessage and SystemMessage.
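Typed out in full, that import looks like this in the REPL (this is the path used in the lesson; as the comment at the end of this page notes, newer LangChain releases expose the same classes from langchain_core.messages):

>>> from langchain.schema.messages import HumanMessage, SystemMessage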

00:57 Now I can use these two classes to construct messages. I’m going to put them all into a list of messages. So I’m going to say messages = open up a list.

01:08 And then here, I’ll first make a SystemMessage, hopefully without an error, that has as its content,

01:22 “You’re an assistant knowledgeable about healthcare. Only answer healthcare-related questions.” So there’s the system prompt that I’m giving.

01:31 Let’s stick with that for now, and I’ll add the human message in a moment.

01:36 So we have this messages list that currently only contains this one message, but you can see that it’s a labeled message. So it’s an instance of SystemMessage and that has a certain content.

01:46 And you can see that there are also additional_kwargs and response_metadata. So there are additional attributes on this SystemMessage class that you’ll get to know better throughout this course.
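A sketch of what this looks like in the REPL (the exact repr fields can vary slightly between LangChain versions):

>>> messages = [
...     SystemMessage(
...         content=(
...             "You're an assistant knowledgeable about healthcare. "
...             "Only answer healthcare-related questions."
...         )
...     ),
... ]
>>> messages
[SystemMessage(content="You're an assistant knowledgeable about healthcare. Only answer healthcare-related questions.", additional_kwargs={}, response_metadata={})]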

01:57 Now, I want to add another message to this. So I’m just going to call it question because I want to change this out later on. So I’ll create a question that is a HumanMessage, and as its content I’m going to ask, “What is blood pressure?” again.

02:19 Now let me append that.
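In the REPL, creating the question and appending it looks roughly like this:

>>> question = HumanMessage(content="What is blood pressure?")
>>> messages.append(question)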

02:25 So now my messages list contains a SystemMessage and a HumanMessage. And I can pass this list of messages directly to the invoke() method of the chat model.

02:37 So I can say chat_model.invoke(messages) and send it off.
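The call itself is a one-liner; here, chat_model is assumed to be the chat model instance set up earlier in the course, and binding the result to a name is just for convenience:

>>> response = chat_model.invoke(messages)  # returns an AIMessage
>>> response.content  # the model's explanation of what blood pressure is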

02:47 And in this case, the first response is not going to be very different from what we got before because we told the model that it should answer healthcare-related questions.

02:56 So of course we get a nice answer for what blood pressure is that looks quite similar to before. But I just did this so that I can show you what happens if I now change this question, for example, to

03:09 “How do I change a tire?”

03:13 I’m going to remove it

03:23 and put the new one in there. So you can see now my messages list contains the same SystemMessage that tells the model that it should only answer healthcare-related questions.

03:32 And then the HumanMessage that would be an input from a user, for example, that asks this healthcare chatbot, “How do I change a tire?” Now I’m going to go ahead and send that to the chat model by passing it to invoke(). You’ll see a different response.
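One way to do that swap in the REPL; the lesson doesn’t show the exact removal step, so using list.pop() here is an assumption, and the echoed repr is abbreviated:

>>> messages.pop()  # drop the old HumanMessage from the end of the list
HumanMessage(content='What is blood pressure?', additional_kwargs={}, response_metadata={})
>>> question = HumanMessage(content="How do I change a tire?")
>>> messages.append(question)
>>> response = chat_model.invoke(messages)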

03:48 It shouldn’t tell the user how to change a tire, and indeed it doesn’t. It replies with, “I’m here to help with healthcare-related questions. If you have any questions about health or medical topics, feel free to ask.” So what you effectively did here is set up two different types of messages.

04:03 A SystemMessage that gives instructions to the model, and a HumanMessage that mimics a possible user input. And the interface for sending this API call is the same as the one you used before with just a plain question, but now you’re working with objects, and what eventually gets sent to the model is text.

04:21 But in between, LangChain does a lot of things that allow you to build up complexity, which you’ll see in the later lessons of this course. And remember that you’re working here with objects rather than just plain strings, which gives you a lot of flexibility that LangChain builds on top of.

04:39 So at this point you might think: sure, these are custom objects, but I’m still mostly interested in this content string. I’m not passing anything additional here at this point.

04:47 And you’re right. But in the next lesson you’ll see how having these custom objects that represent prompts can help you in building more maintainable prompts because we’ll start looking into prompt templates.


Odysseas Kouvelis on May 24, 2025

I noticed that messages are imported from schema:

from langchain.schema.messages import HumanMessage, SystemMessage

I am using langchain_core:

from langchain_core.messages import (
    AIMessage, HumanMessage, SystemMessage
)

Question: Which option is more future-proof?

… assuming a langchain 0.3.x version…
