From the book Vibe-Coding: The Art of Collaborating with AI
A structured approach to human-AI collaboration that produces better results the first time.
Most poor AI results stem from unclear requests, not AI limitations.
Vague inputs produce vague outputs.
The AI cannot read your mind. It cannot infer unstated requirements. It cannot know what "good" looks like unless you tell it. The bottleneck is not the AI's capability — it's your ability to articulate what you want.
ICE provides a structure for that articulation.
ICE structures the planning conversation that precedes implementation. It's not a prompt template — it's a framework for dialogue.
ICE has three components:
- Intent: what you want to achieve and why. Surfaces purpose, context, and the user's situation.
- Constraints: boundaries and requirements. Surfaces technical limits, business rules, and design principles.
- Expectations: how you will verify success. Surfaces acceptance criteria and what "right" looks like.
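A minimal sketch of how these three components might be captured in practice. This is not from the book: the `ICEBrief` class, its `planning_prompt` method, and the rendered wording are illustrative, one possible way to turn an ICE brief into the opening message of a planning conversation.

```python
from dataclasses import dataclass, field


@dataclass
class ICEBrief:
    """Captures the three ICE components before any implementation begins."""
    intent: str                                             # the outcome and the context that makes it meaningful
    constraints: list[str] = field(default_factory=list)    # boundaries: technical limits, business rules, design principles
    expectations: list[str] = field(default_factory=list)   # how you will verify success

    def planning_prompt(self) -> str:
        """Render the brief as the opening message of a planning conversation."""
        constraints = "\n".join(f"- {c}" for c in self.constraints) or "- none stated yet"
        expectations = "\n".join(f"- {e}" for e in self.expectations) or "- none stated yet"
        return (
            "Before writing any code, let's agree on a plan.\n\n"
            f"Intent (what I want and why):\n{self.intent}\n\n"
            f"Constraints (boundaries and requirements):\n{constraints}\n\n"
            f"Expectations (how I'll verify success):\n{expectations}\n\n"
            "What do you need to know before we proceed?"
        )
```

The value is the forcing function, not the code: each component has to be thought about before the conversation starts.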
Intent is the outcome you're trying to accomplish, including the context that makes it meaningful.
"Add a button to the page."
"Users need to reach their settings quickly without losing their place — they're often mid-task and can't afford to navigate away. Something always visible that says 'your preferences are here.'"
Intent tells why, not just what. This gives the AI information to make good implementation decisions.
Constraints narrow the solution space. They have a dual nature:
- They prevent unwanted solutions.
- They force the discovery of opportunities within the fence.
You don't need technical vocabulary; express what matters in plain language.
Expectations are how you'll verify success — what you'll look for when you test it.
"I'll know it's working when I can click the icon, see my settings, change something, and find that change still there tomorrow."
The discipline: Force yourself to describe what success looks like before building.
Technique: Ask the AI to wireframe the interface before building. Wireframing surfaces expectations you didn't know you had.
AI systems are trained to be helpful and agreeable. This creates sycophancy bias — the AI wants to build what you asked for, even if what you asked for has flaws.
The solution: after completing ICE, ask the AI to adopt a critical persona that actively looks for problems.
"Now review this plan as a [critical persona]. What have we missed? What could go wrong? What assumptions are we making that might be wrong?"
Depending on the persona you choose, this review surfaces different classes of problems:
- UX problems and confusing flows
- Vulnerabilities and data exposure
- Weak arguments and unstated assumptions
- Failure modes and unusual inputs
- Hidden expenses and scope creep
- Code that will be hard to change
The AI shifts from "how do I build this?" to "what's wrong with this?" — surfacing considerations that agreeable mode would miss.
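A companion sketch, again illustrative rather than the book's wording, that turns a chosen persona into that review request:

```python
def critique_prompt(persona: str) -> str:
    """Ask the AI to drop agreeable mode and review the plan as a named critic."""
    return (
        f"Now review this plan as a sceptical {persona}. "
        "What have we missed? What could go wrong? "
        "What assumptions are we making that might be wrong?"
    )
```

Swapping the persona changes what the review tends to surface, as the list above suggests.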
Effective interloquial communication is two-way. Ask the AI to question you. Meaning emerges through exchange.
AI partners do not reward warmth. They reward precision. Build clarity, not rapport.
AI confidence doesn't correlate with accuracy. Verify independently, especially for high-stakes decisions.
Can you recognise when AI output has subtle flaws? Interloquial competence requires both directions.
The full workflow, in order:
1. State what you want and why.
2. Ask the AI: "What do you need to know?"
3. Set the boundaries and limits.
4. Describe what success looks like.
5. Ask for a review: "Review as a sceptical [persona]."
6. Document the plan before building.
7. Build from the blueprint.
8. Check the output against your expectations.
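Putting the steps together, a hypothetical session using the two sketches above might start like this. The scenario reuses the settings example from the Intent discussion; the single constraint is invented purely for illustration.

```python
# Hypothetical planning session built from the illustrative sketches above.
brief = ICEBrief(
    intent=(
        "Users need to reach their settings quickly without losing their place. "
        "They're often mid-task and can't afford to navigate away."
    ),
    constraints=["Must not hide or replace the content the user is working on"],  # invented example
    expectations=[
        "I can click the icon, see my settings, change something, "
        "and find that change still there tomorrow."
    ],
)

print(brief.planning_prompt())         # steps 1-4: intent, questions, constraints, expectations
print(critique_prompt("UX reviewer"))  # step 5: adopt a critical persona before building
```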
Download the complete ICE toolkit: comprehensive reference guide, ready-to-use system prompts for any AI platform, and conversation starters you can use immediately.
The ICE framework is one piece of a larger methodology. Vibe-Coding: The Art of Collaborating with AI covers: