
Episode 284: Running Local LLMs With Ollama and Connecting With Python

The Real Python Podcast

Feb 13, 2026 · 45m · intermediate · ai · community · editors

Would you like to learn how to work with LLMs locally on your own computer? How do you integrate your Python projects with a local model? Christopher Trudeau is back on the show this week with another batch of PyCoder’s Weekly articles and projects.


We cover a recent Real Python step-by-step tutorial on installing local LLMs with Ollama and connecting them to Python. It begins by outlining the advantages this strategy offers, including reducing costs, improving privacy, and enabling offline-capable AI-powered apps. We talk through the steps of setting things up, generating text and code, and calling tools.
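The setup the tutorial walks through can be sketched with nothing but the standard library: Ollama serves a REST API on localhost once installed, so a plain HTTP POST is enough to generate text. This is a minimal sketch, not the tutorial's exact code; the endpoint is Ollama's default (`http://localhost:11434`), and the model name `llama3.2` is a placeholder for whichever model you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local REST endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt: str, model: str = "llama3.2") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # The generated text comes back under the "response" key.
        return json.loads(response.read())["response"]
```

With an Ollama server running, `generate("Why is the sky blue?")` returns the model's answer as a string; no network leaves your machine, which is the privacy and offline advantage the tutorial highlights.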

We also share other articles and projects from the Python community, including the 2026 Python Developers Survey, creating callable instances with Python’s .__call__(), building maps and projections with GeoPandas, ending 15 years of subprocess polling, discussing backseat software, a retry library that classifies errors, and a peer-to-peer encrypted CLI chat project.
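To illustrate the .__call__() item from that list: defining the .__call__() method lets an instance be invoked like a function while keeping state between calls. The `RunningAverage` class below is an invented example for this summary, not code from the article itself.

```python
class RunningAverage:
    """A callable instance that tracks the average of the values fed to it."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, value):
        # Calling the instance like a function updates the state
        # and returns the current average.
        self.total += value
        self.count += 1
        return self.total / self.count


avg = RunningAverage()
print(avg(10))  # 10.0
print(avg(20))  # 15.0
```

Because `avg` is callable, it can be passed anywhere a plain function is expected, such as a `key=` argument or a callback, while still carrying its accumulated state.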

This episode is sponsored by Honeybadger.

Topics:

  • 00:00:00 – Introduction
  • 00:02:37 – Take the Python Developers Survey 2026
  • 00:03:07 – How to Integrate Local LLMs With Ollama and Python
  • 00:08:15 – Sponsor: Honeybadger
  • 00:09:01 – Create Callable Instances With Python’s .__call__()
  • 00:12:13 – GeoPandas Basics: Maps, Projections, and Spatial Joins
  • 00:16:03 – Ending 15 Years of subprocess Polling
  • 00:18:57 – Video Course Spotlight
  • 00:20:23 – Backseat Software – Mike Swanson
  • 00:39:06 – cmd-chat: Peer-to-Peer Encrypted CLI Chat
  • 00:41:58 – redress: A Retry Library That Classifies Errors
  • 00:43:56 – Thanks and goodbye
