In the past two years, “prompt engineering” has grown into a discipline of its own. Users of GPT-3.5 or GPT-4 quickly discovered some “magic tricks” to make AI reason more reliably. Two of the most famous were Chain of Thought (CoT) and Tree of Thought (ToT).

The first asked the model to reason step by step, laying out each part of its thinking before giving an answer. The second went further: instead of following a single line, it encouraged the model to explore multiple branches — like a decision tree — before selecting the best one.

These techniques really boosted earlier models’ performance. But with GPT-5, the picture has changed.


GPT-5 already reasons by default

GPT-5 has native multi-step reasoning abilities. Even without special instructions, it can analyze complex problems, test alternative solutions “internally,” and converge on a coherent answer.

That doesn’t mean CoT and ToT are obsolete. They remain useful in certain contexts:

  • Math or logic problems, where step-by-step rigor is essential.
  • Project planning or programming, where exploring scenarios can prevent dead ends.
  • Cases where you want to see the logic the model followed, not just the final result.

In short, CoT and ToT have shifted from being performance hacks to being tools for control and transparency.
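Used as a control tool, a CoT prompt is simply the original question plus an explicit reasoning instruction. A minimal sketch, where the `with_chain_of_thought` helper and its exact wording are illustrative rather than any standard API:

```python
def with_chain_of_thought(question: str) -> str:
    """Wrap a question in a classic Chain-of-Thought instruction.

    The wrapper leaves the question untouched; it only asks the model
    to expose its intermediate reasoning before the final answer.
    """
    return (
        f"{question}\n\n"
        "Think through this step by step, numbering each step, "
        "then state your final answer on a line starting with 'Answer:'."
    )

# Example: a math question, one of the cases where step-by-step rigor helps.
prompt = with_chain_of_thought(
    "A train travels 120 km in 1.5 hours. What is its average speed?"
)
```

Keeping the instruction as a wrapper rather than baking it into every prompt makes it easy to switch CoT on only for the high-complexity or high-transparency cases listed above.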


New priorities: nudges and verbosity control

What really matters with GPT-5 are two other techniques: router nudges and verbosity control.

A router nudge is a short phrase in your prompt that steers the model’s “mode of thought.” For example:

  • “Think hard about this.”
  • “Explain as if you were talking to a busy CEO.”
  • “Analyze this like a legal expert.”

The model doesn’t change its knowledge, but it adjusts its reasoning style, tone, and depth. It’s like choosing a different pair of glasses to look at the same scene.

Verbosity control sets the level of detail you want. It prevents vague answers on one hand, and overly long walls of text on the other. Examples include:

  • “Summarize in three points.”
  • “Keep it under 200 words.”
  • “Explain step by step, but concisely.”


Why this changes everything

The real power comes from combining the two:

  • The nudge defines the angle and depth of the answer.
  • Verbosity control defines the size and density.

Ask “Explain Bitcoin” and you’ll get a generic, medium-length reply.
Ask “Explain Bitcoin to a financial journalist in 5 bullet points” and you’ll get something sharp, targeted, and directly usable.
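The targeted prompt is just the generic task plus a nudge and a verbosity constraint. A minimal sketch of the combination, where the `build_prompt` helper and its parameter names are illustrative, not part of any SDK:

```python
def build_prompt(task: str, nudge: str = "", verbosity: str = "") -> str:
    """Combine a task with an optional router nudge and verbosity constraint.

    The nudge sets the angle and depth ("to a financial journalist");
    the verbosity constraint sets the size ("in 5 bullet points").
    """
    parts = [task]
    if nudge:
        parts.append(nudge)
    if verbosity:
        parts.append(verbosity)
    return " ".join(parts)

generic = build_prompt("Explain Bitcoin.")
targeted = build_prompt(
    "Explain Bitcoin",
    nudge="to a financial journalist",
    verbosity="in 5 bullet points.",
)
```

Keeping the two levers as separate parameters lets you vary the angle and the length independently, which is exactly the combination the section above describes.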


The future of prompting

In short, GPT-5 reduces the need to force AI into explicit step-by-step reasoning. Chain of Thought (CoT) and Tree of Thought (ToT) aren’t gone, but they’ve become specialized tools — most useful in high-complexity or high-transparency contexts.

For everyday use, the key is no longer to “drag the model along” into reasoning. The key is to precisely guide its response style and level of detail.

The future of prompting isn’t longer chains of thought.
It’s sharper guidance, tighter control, and better alignment.

Less chaining. More mastery.
