Defining Agent Tools With Function Calling

Resource mentioned in this lesson: www.weatherapi.com

00:00 In this lesson, you’re going to learn how to define agent tools, giving an LLM function calling capabilities. LLMs have limited, static capabilities: on their own, they can’t interact with their environment, your computer, or external services. The solution is to write Python code that allows the LLM to get more information from its environment.

00:24 The diagram on the screen shows the flow of interactions that you’ve covered so far. But now suppose you want to ask the LLM how you should dress if you want to go out and walk around in Lisbon.

00:36 Well, the LLM has no way of knowing what the weather is like in Lisbon, and so it cannot give you a good response.

00:44 What can you do? Well, you can create a Python function that interacts with the environment. You can write a Python function that makes an API request to a weather API.

00:55 And then you just tell the LLM, hey, I have this function here that you can use to check the weather. So when you ask for dressing recommendations, the LLM will come back to Pydantic AI and say, I want to call the weather function.

01:11 So Pydantic AI calls your Python function, the Python function returns a result, Pydantic AI takes the result, sends it back to the LLM, and then the LLM uses the result of the function to produce the final answer.

01:26 So let’s see this in code.

01:29 To interact with an API, you’re going to install the module requests. So that’s going to be python -m pip install requests.

01:39 And then you’re going to create a copy of the script you already have so that you can reuse some of the code. So go ahead and create a script called weather.py and open it.

01:52 You’re going to delete the Pydantic model. You don’t want it for now. And then you’re going to define the function that interacts with the weather API. So for that, you want to import the module requests and then define a function called current_weather that accepts a city name as a string and it’s going to return a dictionary.

02:12 It’s going to be the JSON from the weather API. Now, the docstring of your function is very important because it’s what the LLM reads to decide whether or not to call your function.

02:23 So let’s say get the current weather from a given city.
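Based on the steps so far, the function’s signature and docstring can be sketched like this (the body comes later in the lesson):

```python
def current_weather(city: str) -> dict:
    """Get the current weather in a given city."""
    ...  # The API request is filled in later in the lesson.
```

Remember that the docstring isn’t just for human readers here: the LLM uses it to decide when the tool is relevant.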

02:29 Now use the module requests to get the information from a weather API.

02:35 Now you want to send an API request to a weather API. For example, the API at weatherapi.com has a free tier you can use for this. So you just paste the URL for the request that’s going to include your API key.

02:55 And now you want to request it for an arbitrary city that’s going to depend on the arguments.

03:02 Once you have your response, you’ll want to make sure that it’s valid. You can use the helper method raise_for_status(), which raises an exception if the API returned an HTTP error status.

03:16 And then you grab the JSON information from the response. And then you just return that JSON response. And this is what the LLM will work with. Now, the reason you want to have your agent defined above the function is so that you can use the decorator @agent.tool_plain.
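Putting these steps together, a minimal sketch of the function could look like this. The endpoint path and the key and q query parameters follow weatherapi.com’s current-weather endpoint, and WEATHER_API_KEY is an assumed environment variable holding your API key:

```python
import os

import requests


def current_weather(city: str) -> dict:
    """Get the current weather in a given city."""
    # The "key" and "q" parameter names follow weatherapi.com's
    # current-weather endpoint; WEATHER_API_KEY is an assumed
    # environment variable holding your API key.
    response = requests.get(
        "https://api.weatherapi.com/v1/current.json",
        params={"key": os.environ["WEATHER_API_KEY"], "q": city},
        timeout=10,
    )
    response.raise_for_status()  # Raise an exception on HTTP error statuses.
    return response.json()  # The raw JSON payload the LLM will work with.
```

Passing the API key and city through params keeps the URL construction out of your hands; requests encodes the query string for you.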

03:37 So this registers the function current_weather as a tool that the agent can make use of. And now in your prompt, you just write something like, "How should I dress if I want to walk around Lisbon right now?"

03:54 Note how you’re not required to mention the weather or the tool explicitly. You just write a prompt that you know will implicitly require checking the weather.

04:04 And one other thing you can do just to see everything working is add a print() at the top of the function with something like “Checking the weather in {city}.” so that you can see whether your function gets called. Save the script, open the terminal, and run your script weather.py.
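The debugging print can be sketched like this, using an f-string so the city from the LLM’s tool call shows up in the message:

```python
def current_weather(city: str) -> dict:
    """Get the current weather in a given city."""
    print(f"Checking the weather in {city}.")  # Confirm the tool gets called.
    ...  # The API request from this lesson goes here.
```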

04:23 Give it a second, and you should see the output Checking the weather in Lisbon. Then the LLM gives you a dressing recommendation based on the current weather.

04:34 So it’s saying that it’s 18 degrees Celsius, or 64 Fahrenheit, and sunny in Lisbon, and it suggests what to wear accordingly. And this works for any function that you define and then register as a tool.

04:48 You can see that the LLM gave you a reply that’s based on the current Lisbon weather, which is something that would be impossible if you hadn’t defined a Python function to check the weather.

05:01 There’s a small problem, though. Your code is now harder to debug and harder to test because of the tight coupling between your agent, the tool, and the API.

05:14 So in the next lesson, you’re going to learn how to use the runtime context to inject dependencies, making it easier to decouple things and therefore to test your code.
