Getting Started: AI Coding Quick Reference
📅 Last updated: December 2025
No-nonsense reference for developers who just want to know which AI model to pick. Bookmark this and stop Googling.
Quick pick - just tell me what to use

| OpenAI | Google |
|---|---|
| GPT-4o | Gemini Pro |
| GPT-4o-mini | Gemini 2.5 Flash |
| GPT-5 | Gemini Pro |
| GPT-4o or 5 | Gemini Pro |
| GPT-4o | Gemini Pro |
| GPT-5 | Gemini Pro |
A note on versions
You’ll see version numbers everywhere: Sonnet 3.5, Sonnet 4, Sonnet 4.5. Gemini 2.5, Gemini 3. GPT-4o, GPT-5.
Don’t overthink it. The tier matters more than the version. “Sonnet” is the mid-tier Claude. “Opus” is the heavyweight Claude. “Flash” is the fast/cheap Gemini. Your IDE usually offers the latest version of each tier - just pick the tier that fits your task.
When this page says “Sonnet”, it means whatever the current Sonnet is. Same for the others.
The big three model families
Speed key: ⚡⚡⚡ Fast · ⚡⚡ Medium · ⚡ Slow
Anthropic (Claude)
| Model | What it’s for | Speed | Cost |
|---|---|---|---|
| Haiku | Fast tasks, scaffolding, CLI | ⚡⚡⚡ | 💰 |
| Sonnet | Everyday coding | ⚡⚡ | 💰💰 |
| Opus | Complex reasoning, design | ⚡ | 💰💰💰 |
OpenAI (GPT)
| Model | What it’s for | Speed | Cost |
|---|---|---|---|
| GPT-4o-mini | Fast tasks, high volume | ⚡⚡⚡ | 💰 |
| GPT-4o | Everyday coding | ⚡⚡ | 💰💰 |
| GPT-4.1 | Complex coding | ⚡⚡ | 💰💰 |
| GPT-5 / 5.2 | Heavy lifting | ⚡ | 💰💰💰 |
| o1 | Deep reasoning (expensive) | ⚡ | 💰💰💰💰 |
| o3 / o4-mini | Reasoning (cheaper) | ⚡ | 💰💰 |
Google (Gemini)
| Model | What it’s for | Speed | Cost |
|---|---|---|---|
| Gemini 2.0 Flash | Ultra-cheap, simple tasks | ⚡⚡⚡ | 💰 (cheapest) |
| Gemini 2.5 Flash | Fast tasks, high volume | ⚡⚡⚡ | 💰 |
| Gemini 2.5 Pro | Complex reasoning | ⚡ | 💰💰💰 |
| Gemini 3 Flash | Everyday coding | ⚡⚡ | 💰💰 |
| Gemini 3 Pro | Heavy lifting | ⚡ | 💰💰💰 |
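The tier idea above can be captured in a tiny lookup. This is a sketch, not an API: the name strings are illustrative shorthand for the tiers, not exact model identifiers, and they will drift as new versions ship.

```python
# Sketch: pick a model by tier and vendor. The tier matters more than the
# exact version string, which changes as new releases ship.
TIERS = {
    "fast":     {"anthropic": "haiku",  "openai": "gpt-4o-mini", "google": "gemini-flash"},
    "everyday": {"anthropic": "sonnet", "openai": "gpt-4o",      "google": "gemini-flash"},
    "heavy":    {"anthropic": "opus",   "openai": "gpt-5",       "google": "gemini-pro"},
}

def pick_model(task_tier: str, vendor: str) -> str:
    """Return the tier-appropriate model for a vendor; raise on unknown input."""
    try:
        return TIERS[task_tier][vendor]
    except KeyError as exc:
        raise ValueError(f"unknown tier/vendor: {task_tier}/{vendor}") from exc

print(pick_model("everyday", "anthropic"))  # sonnet
```

Swap the strings for whatever identifiers your IDE or API actually exposes; the point is that your code should encode the tier decision, not a hard-coded version number.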
Benchmarks
Want numbers?
- Compare all models - sortable table, filter by Copilot cost
- Benchmark details - methodology, sources, caveats
The TLDR:
- Sonnet-class models hit 70%+ on SWE-bench at half the cost of Opus-class
- o1 scores 84% on Aider but costs $1.35/task - 4× more than Sonnet for similar results
- The most expensive model is not automatically the best at coding
Benchmarks are useful for gut-checking, but the real test is running a model on your own work.
Marketing BS decoder
| They say | It means |
|---|---|
| “Most intelligent” | Bigger, slower, pricier |
| “Balanced” | Mid-tier - usually right |
| “Fast” / “efficient” | Smaller, cheaper, simpler |
| “Reasoning” / “thinking” | Extra thinking time - see below |
| “Preview” / “experimental” | Unstable - skip it |
| “200K context” | Can see lots of code - but should it? |
When do “reasoning” models actually help?
Reasoning models (o1, o3, “thinking” variants) work through problems step-by-step before responding.
Worth it for:
- Implementing complex algorithms (A*, red-black trees, constraint solvers)
- Debugging concurrency issues, race conditions, deadlocks
- Untangling deeply nested dependency chains
- Mathematical proofs or formal logic
Overkill for:
- Adding a new API endpoint
- Fixing a null pointer exception
- Writing unit tests
- Refactoring for readability
- Most day-to-day feature work
A standard model with a good prompt is faster and cheaper for 90% of coding tasks.
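If you route requests programmatically, the "worth it / overkill" split can be a crude keyword heuristic. The hint list below is made up for illustration; tune it to your own workload.

```python
# Sketch: send a task to a reasoning model only when it smells like deep
# algorithmic or concurrency work. Keywords are illustrative, not exhaustive.
REASONING_HINTS = (
    "race condition", "deadlock", "proof", "constraint solver",
    "dynamic programming", "formal verification",
)

def needs_reasoning_model(task: str) -> bool:
    """Return True when the task description matches a deep-reasoning hint."""
    task = task.lower()
    return any(hint in task for hint in REASONING_HINTS)

print(needs_reasoning_model("Debug a deadlock in the worker pool"))  # True
print(needs_reasoning_model("Add a new API endpoint for /users"))    # False
```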
What about context window size?
Context window = how much code the model can “see” at once. Bigger sounds better, but:
- More context = more noise. The model gets distracted.
- More context = slower and pricier. You pay per token.
- You rarely need it. Most tasks involve a few files, not hundreds.
Big windows help for: exploring unfamiliar codebases, analysing logs, multi-file refactors. For everyday coding, focused context beats massive context.
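One cheap way to keep context focused is to budget tokens before pasting files in. The sketch below uses the common "1 token ≈ 4 characters" rule of thumb rather than a real tokenizer, so treat the estimates as rough.

```python
# Sketch: greedily keep files (in the order given) until a rough token
# budget is spent, instead of dumping the whole repo into the prompt.
def within_budget(files: dict[str, str], max_tokens: int = 8000) -> list[str]:
    """Return the names of files that fit inside the estimated token budget."""
    kept, used = [], 0
    for name, text in files.items():
        est = len(text) // 4  # rough token estimate: ~4 chars per token
        if used + est > max_tokens:
            break
        kept.append(name)
        used += est
    return kept

files = {"handler.py": "x" * 4000, "models.py": "y" * 4000, "huge_log.txt": "z" * 400_000}
print(within_budget(files))  # ['handler.py', 'models.py']
```

Ordering matters here: put the files most relevant to the task first, and the giant log file last, so the budget is spent on signal.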
The starter prompt
Before judging any model, constrain it:
- Make the smallest possible change that satisfies the request.
- Do not refactor unrelated code.
- Do not invent APIs, types, or behaviour.
- Explain what you changed and why.
This turns raw capability into restraint - which is what you actually want for production code.
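In practice you would wire the starter prompt in as a system message. The snippet only builds the message payload - the client call itself is omitted so the same shape works with any chat-style API.

```python
# Sketch: embed the starter prompt as a system message for a chat-style API.
STARTER_PROMPT = (
    "Make the smallest possible change that satisfies the request. "
    "Do not refactor unrelated code. "
    "Do not invent APIs, types, or behaviour. "
    "Explain what you changed and why."
)

def build_messages(user_request: str) -> list[dict]:
    """Pair the constraining system prompt with the user's actual request."""
    return [
        {"role": "system", "content": STARTER_PROMPT},
        {"role": "user", "content": user_request},
    ]

msgs = build_messages("Fix the off-by-one in pagination.")
print(msgs[0]["role"], "|", msgs[1]["content"])
```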
Sources
- Anthropic Claude models
- OpenAI models
- Google Gemini models
- Aider Leaderboards
- SWE-bench
- Chatbot Arena
Bookmark this. Come back when confused. Updated monthly.