
From DreamFlow
Description
🔥 Stop telling your coding agent to be "95% confident." It can't measure that, and it hurts results. In this DreamFlow Tips episode, I show a research-backed workflow that improves code quality, boosts visual fidelity, and cuts token/credit spend by making the model ask clarifying questions before it writes any code.

🧠 TL;DR: Clarify → then code. Confidence thresholds ≠ correctness.

🔥 Get started: https://app.dreamflow.com

What you'll learn ✅
Why self-reported “confidence” ≠ correctness (the calibration problem)
A simple clarify-then-code prompt you can reuse
Two demos: complex UI + real bookmarking feature
How clarifying questions reduced token/credit usage 📉
Chapters ⏱️
0:00 Intro
0:23 Why "ask clarifying questions" works
1:35 Evidence from recent studies
3:01 Why the "95% confident" rule fails
4:02 Before/after: one-shot UI vs clarify-then-code
6:21 Cost & fidelity improvements (credits/tokens)
7:51 Real-world demo: bookmarking/favorites feature
10:02 Summary + copy-paste prompt
10:05 Outro

Copy-paste prompt 💬
"Before writing code, analyze my request. If there's any ambiguity about the best way to accomplish the task, ask clarifying questions. Do not proceed until I answer."

Why it matters 🎯
Senior engineers de-risk ambiguity first. Making your agent do the same yields tighter scope, fewer wrong turns, and better final code.

If this helped, like, subscribe, and drop your favorite clarifying-question patterns in the comments! 🙌

📋 Studies
Curiosity by Design: An LLM-based Coding Assistant Asking Clarification Questions — https://arxiv.org/abs/2507.21285
Large Language Models Should Ask Clarifying Questions to Increase Confidence in Generated Code — https://arxiv.org/abs/2308.13507
Calibration and Correctness of Language Models for Code — https://arxiv.org/abs/2402.02047
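If you drive an agent through an API rather than a chat UI, the clarify-then-code gate above can be wrapped in a small loop: send the copy-paste prompt as the system message, and refuse to accept output until the model stops asking questions. This is a minimal sketch; `call_model` is a hypothetical stand-in (here faked so the example runs), and the `CLARIFY:`/`CODE:` markers are illustrative conventions, not part of any real API.

```python
# Sketch of a clarify-then-code gate around a coding agent.
# Assumption: `call_model` is a placeholder for your real LLM client;
# this fake version asks one question, then "writes code" once answered.

SYSTEM_PROMPT = (
    "Before writing code, analyze my request. If there's any ambiguity about "
    "the best way to accomplish the task, ask clarifying questions. "
    "Do not proceed until I answer."
)

def call_model(messages):
    """Hypothetical LLM call -- replace with your provider's SDK.
    Fakes a model that asks a clarifying question on the first user turn."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    if user_turns == 1:
        return "CLARIFY: Should bookmarks sync across devices or stay local?"
    return "CODE: def add_bookmark(item): ..."

def clarify_then_code(request, answer_question, max_rounds=3):
    """Do not accept code until the model has no open questions."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": request},
    ]
    for _ in range(max_rounds):
        reply = call_model(messages)
        if not reply.startswith("CLARIFY:"):
            return reply  # model proceeded; treat this as the final code
        # Answer the question and loop again instead of accepting code early.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": answer_question(reply)})
    raise RuntimeError("Too many clarification rounds")

result = clarify_then_code(
    "Add a bookmarking feature",
    answer_question=lambda q: "Local only for now.",
)
print(result)
```

The key design choice matches the video's advice: ambiguity is resolved as a blocking step before any code is accepted, rather than asking the model to self-report a confidence number.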
Summary
This video provides evidence-based tips for prompting AI coding agents.
The main takeaways are:
Do: instruct your AI coding agent to ask clarifying questions to resolve ambiguities before writing code (0:42, 1:33, 9:43). This leads to more accurate and helpful code, and often uses fewer "credits" or tokens (6:41, 9:17). The approach is supported by studies and mirrors how human engineers ask clarifying questions before coding.
Do not: ask the AI to rely on an internal confidence score (like "95% confident") (3:07, 9:54). Research indicates that an AI's self-reported confidence is a poor predictor of actual code correctness, so it is not a reliable gate.