Asking Gemini About the Project
00:00 Now let’s get a bit more familiar with the codebase and use Gemini CLI to understand what the project does. Of course, you could go into the folder and investigate the files yourself, but hey, where’s the fun in that?
00:13 Let’s use Gemini for it. So what does this project do?
00:22 In contrast to the prompt you ran earlier in this course, this time Gemini takes way longer. You can see that Gemini is working by the little spinner to the left of a little message, and Gemini also updates you about its progress along the way.
00:41 So here it said that it needs to examine the project files and where it’ll start: the pyproject.toml file for general project information, which is a good idea.
00:51 Then it goes into src/todolist/main and src/todolist/cli to identify the main entry points and core functionality. What I really like about this is that, depending on the project, it also shows you how to investigate projects on your own when you don’t have your AI assistant with you. And then we hit the first error.
01:15 Well, I say first because I somehow expect that we might hit other errors with Gemini as well. That’s a bit of a downside of it being a free model, but in return, you’re allowed to do all of this for free.
01:27 So here it looks like we are already hitting some quota limitation. Honestly, I’m a bit surprised that we are already hitting this, but we need to roll with it.
01:38 So it says “API Error: code 429. Resource exhausted. Please try again later”. Then there is a link where we can learn more about it, and it says, “Possible quota limitations in place or slow response times detected.
01:52 Switching to the gemini 2.5 flash model for the rest of the session”. That is a bit ambiguous. We don’t really know if we hit the limit or if there is just a slow response time.
02:04 What we know is that we switched to a different model. That means we can now try this command again and see if it works. But since we are mentioning models here, let’s quickly check the /model command.
02:19 That’s the command to select your model, and by default this is set to Auto. So the system chooses the best model for our task, and that could mean that we might have used the Pro model while we were looking into our project and hit our quota limitations faster.
02:39 So now we could manually change to the Flash model, but it sounds like Gemini did that for us already. So let’s hit Escape, but just know that you can always have more fine-grained control by using the /model command.
02:53 Alright, back in the prompt, you can use the Up arrow key to go back to a previous message, and that was “What does this project do?” Let’s hit Enter and see if Gemini succeeds now. This time we don’t get a red error message, and we get a slightly different answer saying, “Based on the pyproject.toml file, this project is a command-line todo application”.
03:19 So that is very similar to what it was starting to say last time, but this time it goes a bit deeper. The dependencies suggest it uses a database, peewee, to store tasks; rich for terminal formatting; platformdirs for locating system directories; and OpenAI for potential AI-powered features.
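If you’re curious how some of these dependencies might fit together, here’s a minimal, hypothetical sketch. It is not the project’s actual code: the app name, database filename, and Task model are assumptions made up for illustration.

```python
# Hypothetical sketch of how platformdirs and peewee could work together.
# This is NOT the project's actual code -- the Task model and its field
# are invented here purely for illustration.
from pathlib import Path

from peewee import CharField, Model, SqliteDatabase
from platformdirs import user_data_dir

# platformdirs finds an OS-appropriate data directory for the app.
data_dir = Path(user_data_dir("todolist"))
data_dir.mkdir(parents=True, exist_ok=True)

# peewee maps Python classes onto tables in a SQLite database file.
db = SqliteDatabase(str(data_dir / "todolist.db"))

class Task(Model):
    name = CharField()  # Assumed field name, not from the project.

    class Meta:
        database = db

db.connect()
db.create_tables([Task])  # Creates the table if it doesn't exist yet.
```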
03:40 Then it reads the file and comes to the verdict that this project is a command-line todo application, and you can use it to create and manage multiple to-do lists; add, remove, and rename tasks; and so on.
03:53 In the last line, it says it also has a neat feature that uses AI to automatically assign a relevant emoji to each new task you create. Okay. Now, the interesting point here is that although you work with the same data, and although you might use the same model as I did here, your output can look a bit different. That’s because AI models are inherently non-deterministic.
04:18 So you might see a slightly different output than the one I have here.
04:24 And actually, let me be more precise here. Even if you run the same prompt more than once, generally you can assume that you will get slightly different results.
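If you want to see this non-determinism for yourself, here’s a small sketch that sends the same prompt twice through the openai package. The model name is only an example, and it assumes you have an OPENAI_API_KEY environment variable set:

```python
# Minimal sketch: send the same prompt twice and compare the answers.
# Assumes the openai package is installed and an OPENAI_API_KEY
# environment variable is set. The model name is only an example.
from openai import OpenAI

client = OpenAI()  # Reads OPENAI_API_KEY from the environment.

for run in range(1, 3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Example model, not from the course.
        messages=[
            {"role": "user", "content": "Describe a todo app in one sentence."}
        ],
    )
    # Because the model samples its output, the two answers will
    # usually differ slightly, even though the prompt is identical.
    print(f"Run {run}: {response.choices[0].message.content}")
```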
04:34 Okay, and before we move on to the next lesson, I’m curious about this emoji feature that Gemini was talking about. So let’s ask a follow-up question. How does the emoji feature work?
04:55 So Gemini started by saying that “The emoji feature is triggered when a new task is added”, and the add function calls find_matching_emojis().
05:05 So then Gemini goes on and checks the emojis.py file to see the implementation, and then it ends with a three-point list and says, “The emoji feature uses the OpenAI API to pick an emoji for your task”.
05:20 And “Here is a breakdown of how it works”. First, an API key check; second, prompting the AI; and third, parsing the response, each with a bit more detail.
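The course doesn’t show the code of emojis.py at this point, so here’s a hedged sketch of what a find_matching_emojis() function following those three steps could look like. The function name comes from Gemini’s answer; the model name, prompt wording, and fallback behavior are all assumptions for illustration:

```python
# Hypothetical sketch of find_matching_emojis(), following the three
# steps Gemini described. This is an illustration, not the actual
# implementation in emojis.py; the model name, prompt wording, and
# fallback behavior are all assumptions.
import os

from openai import OpenAI

def find_matching_emojis(task_name: str) -> str:
    # 1. API key check: skip the feature if no key is configured.
    if not os.environ.get("OPENAI_API_KEY"):
        return ""  # Assumed fallback: no emoji without an API key.

    # 2. Prompting the AI: ask for a single emoji that fits the task.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Example model, not from the course.
        messages=[
            {
                "role": "user",
                "content": f"Reply with one emoji that fits this task: {task_name}",
            }
        ],
    )

    # 3. Parsing the response: extract the emoji from the reply text.
    return response.choices[0].message.content.strip()
```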
05:31 Okay, that sounds like an interesting feature, and I’m actually curious to try it out in the next lesson.
