Choice map

Tools and models

Intermediate · 28 min read

In 30 seconds

Understand the difference between tools, products, and models so you can choose IDEs, CLIs, cloud agents, and LLMs with intent.

Two different choices

When someone asks “which AI should I use for programming?”, they are usually conflating two decisions:

  1. Tool: where you work. It can be an IDE, a CLI, a cloud app, an editor plugin, or a prompt-to-app platform.

  2. Model: the brain generating the answer. It can be GPT, Claude, Gemini, Kimi, Llama, Gemma, DeepSeek, GLM, or another model.

These layers combine, but they are not the same thing. Cursor and Claude Code are tools. GPT-5.3-Codex, Claude Sonnet 4.6, Gemini 2.5 Pro, and Kimi K2.6 are models. ChatGPT, Claude, Gemini, and Kimi are products that can embed different models underneath.

The simple rule: the product and the tool are where you interact; the model is what generates the answer or decides the next step.

Three work surfaces

New tools appear every week promising to revolutionize how you code. Instead of listing everything that exists, think in three surfaces:

  1. AI-native IDEs: code editors with AI built into the editing experience. Examples: Cursor, Windsurf, VS Code with GitHub Copilot, Google Antigravity.

  2. AI CLIs: command-line tools that read files, run commands, and make changes directly in the terminal. Examples: Claude Code, Kimi Code, Codex CLI, OpenCode, GitHub Copilot CLI.

  3. Cloud apps and agents: you send a well-described task, the tool works in an isolated environment, and it returns a diff or pull request. Examples: Codex App, Google Jules, Devin.

The boundaries are getting blurry. Kimi Code, for example, can show up in the terminal, a local browser, VS Code, and ACP-compatible IDEs. The categories still help, but the main question is not “which tool is best?”. The question is: which surface removes the most friction for this task?

What changes between models

You do not need to become an AI researcher to work well with LLMs. But a few concepts change the quality of your decisions:

Context

Models process text as tokens, and each model has a context window. In 2026, windows range from ~128K tokens to ~1M+, but a medium software project can contain much more than that. A large window helps, but it does not remove the need to choose relevant files, examples, and constraints.
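To make this concrete, here is a minimal sketch of a context-budget check. It assumes the common rule of thumb of roughly 4 characters per token for English text; real tokenizers vary by model, so treat the numbers as order-of-magnitude estimates, not exact counts.

```python
# Rough context-budget check using the ~4 characters-per-token heuristic.
# Real tokenizers differ per model, so these are only estimates.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_window(files: dict[str, str], window: int = 128_000,
                   reserve: int = 8_000) -> bool:
    """Check whether a set of files fits a context window,
    reserving room for the prompt and the model's reply."""
    total = sum(estimate_tokens(src) for src in files.values())
    return total + reserve <= window

files = {
    "app.py": "x = 1\n" * 2_000,
    "utils.py": "def f():\n    pass\n" * 500,
}
print(fits_in_window(files))  # a small project fits comfortably
```

Even with a generous window, the `reserve` parameter is the point: you always need headroom for instructions and output, which is one more reason curating context beats dumping the whole repository in.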

Reasoning, speed, and cost

Larger models often reason better through long tasks, but they are usually slower and more expensive. Using the strongest model for everything can be wasteful. A quick exploration, an autocomplete, and an architecture plan do not need the same amount of power.
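The cost gap is easy to underestimate. The sketch below compares a hypothetical “frontier” and “fast” model over a day of small autocomplete-style calls; the per-token prices are made-up placeholders, not real vendor pricing.

```python
# Illustrative cost comparison. Prices are hypothetical placeholders,
# not real vendor pricing -- substitute your provider's rates.

PRICE_PER_1M_INPUT = {"frontier": 15.00, "fast": 0.50}   # USD per 1M tokens
PRICE_PER_1M_OUTPUT = {"frontier": 60.00, "fast": 1.50}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: input and output tokens priced separately."""
    return (input_tokens / 1e6) * PRICE_PER_1M_INPUT[model] \
         + (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT[model]

# 100 autocomplete-style calls a day: small prompts, small completions.
daily_fast = 100 * request_cost("fast", 2_000, 300)
daily_frontier = 100 * request_cost("frontier", 2_000, 300)
print(f"fast: ${daily_fast:.2f}/day, frontier: ${daily_frontier:.2f}/day")
```

With these placeholder prices the frontier model costs roughly 30× more per call, which is fine for an architecture plan and wasteful for autocomplete.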

Tool use

Some models are tuned to work well with tools: reading files, calling APIs, running commands, observing results, and deciding the next step. That matters a lot for coding agents, because quality is not only in the generated text; it is in the ability to carry a task forward across several steps.
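The observe-act loop behind coding agents can be sketched in a few lines. This is a generic illustration, not any specific product's implementation: `call_model` stands in for a real LLM API, and the two tools are deliberately tiny placeholders.

```python
# Minimal sketch of the loop a tool-using coding agent runs.
# `call_model` is a placeholder for a real LLM API call.

import subprocess

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def run_command(cmd: str) -> str:
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "run_command": run_command}

def agent_loop(task: str, call_model, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model sees the task plus every observation so far and
        # decides the next step: use a tool, or finish with an answer.
        action = call_model(history)  # e.g. {"tool": ..., "arg": ...}
        if action.get("done"):
            return action["answer"]
        observation = TOOLS[action["tool"]](action["arg"])
        history.append(f"{action['tool']}({action['arg']!r}) -> {observation}")
    return "step budget exhausted"
```

The loop is trivial; what separates models is how well they choose the next action and recover when an observation contradicts their plan.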

Real limits

Every model hallucinates, has a knowledge cutoff, and can generate code that looks correct but solves the wrong problem. A benchmark is a signal, not an oracle. A ranking measures standardized tasks; your project has its own context, history, constraints, and debt.

Task profiles, not rankings

There is no “best model”. There is the right model for the right task.

Exploring ideas: fast and cheap models are usually enough when the goal is to brainstorm, generate alternatives, and surface better questions — not to produce final code.

Writing and reviewing everyday code: balanced models, such as Claude Sonnet or specialized coding models inside IDEs, tend to deliver good cost-effectiveness.

Planning architecture or large refactors: models with deeper reasoning and wide context windows make more sense, especially when they need to consider many modules at once.

Analyzing long documents or many sources: models with large context and strong synthesis, such as Gemini Pro or Claude Opus, can outperform models focused purely on code editing.

Privacy, cost, or control: open-weight or open-source models, such as Llama, Gemma, DeepSeek, and GLM, come in when the team needs more control over hosting, cost, privacy, or customization.
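The profiles above amount to a routing table, which you can sketch directly. The model names here are placeholders for the tiers discussed above, not real model identifiers; swap in whatever your tools expose.

```python
# "Right model for the right task" as a simple routing table.
# Model names are placeholder tiers, not real model identifiers.

TASK_PROFILES = {
    "explore":   {"model": "fast-small",    "why": "cheap brainstorming"},
    "code":      {"model": "balanced",      "why": "everyday edits and review"},
    "architect": {"model": "deep-reasoner", "why": "long plans, wide context"},
    "research":  {"model": "long-context",  "why": "many sources, synthesis"},
    "private":   {"model": "open-weights",  "why": "self-hosted, full control"},
}

def pick_model(task: str) -> str:
    profile = TASK_PROFILES.get(task)
    if profile is None:
        raise ValueError(f"unknown task profile: {task!r}")
    return profile["model"]

print(pick_model("architect"))  # deep-reasoner
```

Making the mapping explicit, even informally, is what keeps you from defaulting to the strongest (and most expensive) model for everything.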

Takeaway

  • Separate tool, product, and model before comparing options.
  • Choose the tool by workflow friction and the model by task difficulty.
  • Use benchmarks as a signal, not a final answer.
  • Treat every response as a draft until it has passed review and validation.
  • The differentiator is not “using the most powerful AI”; it is building a combination that helps you deliver software with more clarity and control.