Extending Your Chain
00:00 Fortunately, LangChain chains are a lot more flexible than, say, the iron chain you might use to lock up your bike. You can extend them in multiple ways: you can add runnable components at the end or somewhere in the middle.
00:13 And this declarative syntax using these pipes makes that very straightforward to do. In some of the previous lessons, I’ve displayed the .content attribute of one of these AI messages so that the output is more readable, because at the moment we’re not that interested in the metadata, like how many tokens were used.
00:31 And there’s actually an output parser that does that for you, one that just gives you back the raw text from one of those AI messages: the StrOutputParser object.
00:42 It’s also a runnable component, so that means you can just pipe it onto the end of the chain if you’re only interested in reading the model’s response.
00:50 And let’s try that out so that you can see the extensibility of LangChain chains as well: from langchain_core.output_parsers, import the StrOutputParser.
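In code, that import looks like this (the class name really is StrOutputParser, even though it’s read aloud as "string output parser"):

```python
from langchain_core.output_parsers import StrOutputParser
```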
01:06 And then our review_chain consisted of the review_prompt_template piped into the chat_model. And now we’re going to pipe the output of the chat_model into a StrOutputParser object.
01:22 So I’m just going to instantiate it right there in the chain. You could also assign it to a variable first, but I’m not doing anything fancy with this object.
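Here’s a sketch of the extended chain; it assumes the review_prompt_template and chat_model objects were defined in the previous lessons:

```python
# Pipe the chat model's output into a freshly instantiated parser.
review_chain = review_prompt_template | chat_model | StrOutputParser()
```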
01:33 So I’m just passing in an instance of it directly. And now if I go ahead and invoke the review_chain again, passing it a context and a question, I’m not going to get back an AI message, but just a plain string.
01:47 That is the answer to the question that I passed. In this case, it says ‘Yes, one patient mentioned having a great stay, indicating a positive experience.’
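The invocation could look something like this; the exact context and question strings here are assumed stand-ins for the review data used in the lesson:

```python
context = "I had a great stay!"  # Example review text (assumed)
question = "Did any patients have a positive experience?"  # Assumed wording

review_chain.invoke({"context": context, "question": question})
# 'Yes, one patient mentioned having a great stay, indicating a positive experience.'
```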
01:55 And again, it’s the same concept, right? You’re taking the output of the chat_model, which is an AI message, and piping it into another runnable component, which is a StrOutputParser.
02:07 And what the StrOutputParser does is extract the .content attribute of that AI message and give it back to you. So if you’re just interested in the answer, pipe your chain into a StrOutputParser object.
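You can see that behavior in isolation. Here’s a minimal sketch that invokes the parser directly on a hand-built AIMessage:

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()
message = AIMessage(content="Hello!")

parser.invoke(message)  # Returns the .content string: 'Hello!'
```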
02:18 And yeah, there you go. You basically have a chatbot now that you can ask questions directly, right? So I could invoke the chain again with a different question:
02:32 “Were there negative experiences?”
02:37 Maybe my user is asking that.
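Invoked with the new question, and the same assumed input keys as before:

```python
review_chain.invoke(
    {"context": context, "question": "Were there negative experiences?"}
)
```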
02:40 So you can see the model replies accordingly, and you just get the string output again because you’re passing it through the StrOutputParser.
02:48 In short, using this declarative syntax for LangChain chains makes it quite flexible and straightforward to assemble LLM workflows with different steps.