How I develop Django applications

Documenting my Django dev set up in 2024.

I just got off a call where I was helping a SaaS Pegasus customer who was just learning Django get up and running locally. During the call, I found myself explaining my own setup for what felt like the 100th time, and realized that I’ve never written it down.

So that’s what this is! I’m not saying it’s the best way to develop Django, but it does work really well for me.

The big picture summary is: services run in Docker, code runs natively.

Why this set up? Well, services are like appliances. I don’t care how they work, I don’t want to change them, I just want them to work, and work consistently. Docker is perfect for this.

Code, on the other hand, is malleable. I want to make changes in my IDE and instantly see them reflected somewhere. I want to add print statements, set breakpoints, edit the code in my site-packages, and just generally mess around! And while I know you can do all that stuff in Docker, it’s way easier if it’s running natively.

Ok, here’s a few more details.

Services run in Docker

Every Django project I run uses Postgres and Redis—Postgres for the database, and Redis as a cache and a message broker for projects that use Celery.

Both Postgres and Redis run via the following docker-compose.yml file:

services:
  db:
    image: postgres
    # persist data beyond lifetime of container
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "127.0.0.1:5432:5432"
  redis:
    image: redis
    # persistent storage
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    ports:
      - "127.0.0.1:6379:6379"
volumes:
  postgres_data:
  redis_data:

I run docker compose up -d on system boot so these services are always running. They are also exposed on the standard ports (5432 for Postgres and 6379 for Redis) on my localhost, so I can access them from any running process.
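An alternative to starting them manually at boot (an assumption on my part, not what the post describes) is a restart policy in the compose file, which tells Docker to bring the containers back up whenever the daemon starts:

```yaml
services:
  db:
    restart: unless-stopped  # restarts on boot unless you explicitly stopped it
    # ... rest of db config ...
  redis:
    restart: unless-stopped
    # ... rest of redis config ...
```

With `unless-stopped`, a container you stop deliberately stays stopped, but everything else comes back after a reboot without a manual `docker compose up -d`.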

All of my projects share a single Postgres and Redis instance, using different databases. For Redis, you can do this by appending a database number to the end of your Redis URL in settings.py, like this:

REDIS_URL = 'redis://localhost:6379/0'  # change the 0 to a 1 for a new project
CELERY_BROKER_URL = CELERY_RESULT_BACKEND = REDIS_URL
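To make the per-project convention harder to get wrong, a small helper can build these URLs. This is a hypothetical sketch (the helper name and defaults are mine, not from the post); note that a default Redis install exposes 16 logical databases, numbered 0–15:

```python
def redis_url(db, host="localhost", port=6379):
    """Return a redis:// URL pointing at one logical Redis database."""
    if not 0 <= db <= 15:
        raise ValueError("a default Redis config only has databases 0-15")
    return f"redis://{host}:{port}/{db}"

# settings.py for one project might use database 0...
REDIS_URL = redis_url(0)
CELERY_BROKER_URL = CELERY_RESULT_BACKEND = REDIS_URL
# ...while a second project on the same Redis instance uses redis_url(1).
```

The validation matters because Redis silently accepts any URL string; the failure only shows up later when the client tries to SELECT a database that doesn't exist.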

Python runs natively

I run my Python code natively, in a virtual environment created with virtualenv, venv, or uv.

Using uv is a new thing for me, but it does seem better than the alternatives in just about every way. I’m still fully integrating it into my dev workflows.

Each project gets its own virtual environment, which gets linked in my IDE. I still use virtualenvwrapper and workon to navigate between environments on the command line, though this might change once I adopt uv more broadly.
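The per-project environment flow can be sketched with just the stdlib venv module (the post uses virtualenvwrapper and uv; this is the plain-venv equivalent, and the project directory name is made up):

```shell
mkdir -p myproject && cd myproject        # hypothetical project directory
python3 -m venv .venv                     # one environment per project
. .venv/bin/activate                      # roughly what `workon myproject` does
python -c 'import sys; print(sys.prefix)' # now prints a path inside .venv
```

virtualenvwrapper's convenience is that `workon` keeps all environments in one place and activates them by name; with the in-project `.venv` layout above, your IDE can usually discover the interpreter automatically instead.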

I code in Cursor (formerly PyCharm)

I switched from PyCharm to Cursor a couple months ago and mostly haven’t looked back. The autocomplete beats anything else I’ve tried, and the little UX details around chatting, composing and editing are really nice. Cursor is particularly great when I’m doing repetitive tasks, or working in a technology I am less familiar with.

These are my .cursorrules, in case you’re curious. I played around a good amount with people’s custom prompts and didn’t find them particularly useful, so I just created something simple that tells the AI my preferences and helps me avoid repeating myself:

Assume my OS is Ubuntu 22, my preferred languages are Python and JavaScript.
I use TailwindCSS for most of my styling.
I format my Python code with pep8 and black,
and indent my HTML and JavaScript code with two spaces.

There are still two scenarios where I go back to PyCharm:

  1. When I want to poke around or edit my site-packages. It’s possible there’s a way to do this decently in Cursor/VSCode, but I haven’t found it yet.
  2. When I’m writing. The one place I hate Cursor is when I’m typing prose (like now). The autocomplete totally destroys my flow.

I think that’s… it?