Runtime

This section treats CPython as an execution engine. It connects frames, bytecode, the object model, memory management, the GIL, and subinterpreters into a single picture you can use for performance and architecture decisions.

Quick takeaway: once you have runtime intuition, you can replace vague claims like "Python is slow" with concrete ideas such as attribute-lookup cost, reference counting behavior, specialization stability, GIL boundaries, and GC pause tradeoffs.

Questions This Part Answers

What does "everything is an object" cost?

Object headers, type objects, and slot dispatch explain much of the cost of attribute-heavy and method-heavy code.
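A minimal sketch of that cost, using only the standard library: every value carries an object header (reference count plus type pointer), and per-instance `__dict__` storage is what `__slots__` removes. The class names here are illustrative.

```python
import sys

# Even a small int pays for an object header (refcount + type pointer),
# so it is far larger than a C int.
small_int = 1
print(sys.getsizeof(small_int))  # typically 28 bytes on 64-bit CPython

class WithDict:
    def __init__(self):
        self.x = 1

class WithSlots:
    # __slots__ removes the per-instance dict, shrinking each object
    # and turning attribute access into fixed-offset slot lookups.
    __slots__ = ("x",)
    def __init__(self):
        self.x = 1

print(WithDict().__dict__)               # per-instance dict exists
print(hasattr(WithSlots(), "__dict__"))  # False: no per-instance dict
```

Attribute access on a `WithDict` instance consults the instance dict and then the type's MRO; the slotted variant skips the dict entirely, which is one concrete reason attribute-heavy code gets cheaper.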

When is memory released immediately?

CPython releases most objects immediately when their reference count drops to zero; a separate cyclic GC pass reclaims objects trapped in reference cycles.
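The two release paths can be observed directly with `weakref` probes, a sketch assuming CPython's refcounting semantics:

```python
import gc
import weakref

class Node:
    pass

# Acyclic object: destroyed the moment its refcount hits zero.
n = Node()
probe = weakref.ref(n)
del n
print(probe() is None)  # True: no GC pass was needed

# Cyclic objects: refcounts never reach zero, so only the cyclic
# collector can reclaim them.
gc.disable()  # keep automatic collection from muddying the demo
a, b = Node(), Node()
a.partner, b.partner = b, a
probe = weakref.ref(a)
del a, b
print(probe() is None)  # False: the cycle keeps both nodes alive
gc.collect()
print(probe() is None)  # True: the cycle GC reclaimed them
gc.enable()
```

This is why GC pause tuning in CPython is mostly about cycle-heavy object graphs; acyclic data never waits for a collection pass.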

What exactly does the GIL restrict?

The GIL serializes execution of Python bytecode within a process; it does not serialize blocking I/O or GIL-releasing native code. That distinction is the key to choosing between threads, async tasks, processes, and subinterpreters.
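A small sketch of the boundary: `time.sleep` releases the GIL, so sleeping threads overlap, while pure-Python loops take turns holding it. The helper names are illustrative.

```python
import threading
import time

def blocking_io(delay: float) -> None:
    # time.sleep releases the GIL, so these threads run concurrently.
    time.sleep(delay)

def count(n: int) -> None:
    # Contrast: pure-Python bytecode holds the GIL, so threads running
    # this loop take turns instead of running in parallel.
    while n:
        n -= 1

# Four 0.1 s sleeps in threads finish in roughly 0.1 s, not 0.4 s.
start = time.perf_counter()
threads = [threading.Thread(target=blocking_io, args=(0.1,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"overlapped sleeps took {elapsed:.2f}s")
```

If the workload looked like `count` instead, threads would buy no speedup; that is the case for processes or (on recent CPython) subinterpreters with their own GILs.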

Why did Python 3.11+ get faster?

The adaptive specializing interpreter (PEP 659) rewrites hot, type-stable bytecode into faster specialized instructions, changing the cost model of hot paths.
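You can watch specialization happen with `dis`. A sketch, assuming Python 3.11+ for the `adaptive=True` parameter (guarded here so it also runs on older versions):

```python
import dis
import io
import sys

def add(a: int, b: int) -> int:
    return a + b

# Warm the function: after enough calls, the adaptive interpreter
# rewrites hot generic instructions into specialized variants.
for _ in range(1000):
    add(1, 2)

buf = io.StringIO()
if sys.version_info >= (3, 11):
    # adaptive=True shows the quickened bytecode; for int operands this
    # may display e.g. BINARY_OP_ADD_INT instead of the generic BINARY_OP.
    dis.dis(add, file=buf, adaptive=True)
else:
    dis.dis(add, file=buf)
print(buf.getvalue())
```

Specialization only pays off when operand types stay stable; mixing types at one call site forces deoptimization back to the generic instruction, which is why "specialization stability" belongs in the performance vocabulary.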

How deep should we go into internals?

Work through labs on frames and code objects, reference counting plus cycle GC, and the `dis`/`ast`/`tracemalloc`/`gc` modules to build layered runtime intuition.
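As one example of such a lab, `tracemalloc` attributes allocations to source lines; a minimal sketch with an illustrative allocation:

```python
import tracemalloc

tracemalloc.start()

# Allocate something measurable: ~1 MB of bytes objects.
data = [bytes(1000) for _ in range(1000)]

# Snapshot statistics attribute traced memory to source lines.
snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[0]
print(top)  # the list comprehension above should dominate

current, peak = tracemalloc.get_traced_memory()
print(f"current={current} peak={peak}")
tracemalloc.stop()
```

The same pattern scales to real programs: snapshot before and after a suspect operation, then `compare_to` the snapshots to find the lines responsible for growth.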

  1. Execution Model
  2. Object Model
  3. Memory and GC
  4. GIL and Subinterpreters
  5. Bytecode and Specialization
  6. CPython Internals Advanced
  7. CPython vs Go Runtime

Built with VitePress for a Python 3.14 handbook.