AI & Creativity · 10 min read

You're (Still) Using AI (Very) Wrong

Three years into the AI revolution, the same mistakes keep costing teams hours every week. These mindset shifts are the difference between filler and work that actually matters.

AI · Creativity · Productivity · Future of Work · Prompt Engineering · Leadership

Why Read This

Most people treat AI like a vending machine: put in a question, get out an answer, move on. The output is generic, the team spends hours polishing it, and nobody questions whether the process itself is broken.

I wrote this because I keep seeing the same three mistakes. This article breaks down where the defaults go wrong and introduces three shifts (plus a concrete framework you can steal) that changed how I collaborate with AI daily.

If you have ever looked at an AI output and thought “this is fine, I guess,” this is for you.


Introduction: The High Cost of “Good Enough”

You ask an AI for “five marketing ideas,” and it hands you a list so generic it could have been scraped from a 1995 business textbook. I call this “workslop”: AI-generated output that feels productive but creates a hidden tax on your team. At its mildest, workslop costs 3–5 hours per week in editing. At its worst, leaders base critical business decisions on unvetted AI output, dismiss domain experts who push back, and shut down conversations that should have happened.

The root cause is not the tool. It is a collision between two forces:

  • On our side: “satisficing”, a concept from Nobel laureate Herbert A. Simon, describing our instinct to accept the first “good enough” answer.
  • On the AI's side: sycophancy, the well-documented tendency to agree with users rather than challenge them.

Together, these form a feedback loop: we settle for the first output, and the AI eagerly confirms it was brilliant. The result is a false sense of validation that can be more dangerous than having no AI at all.

A leader who uses AI to confirm their existing assumptions is not augmenting their thinking. They are reinforcing their blind spots with a tool that sounds like an expert.

On AI sycophancy and leadership

The three mindset shifts that follow replace passive consumption of AI output with active, critical collaboration. They are your toolkit for overriding this instinct.

Most people stop at the first output. Overriding the satisficing instinct opens the iterative loop where genuine insight emerges.

Shift 1: From “Question & Answer” to “Iterative Sparring”

The biggest leaps in my own work never happen on the first exchange. They happen on the third, the fourth, sometimes the sixth. A single query is satisficing in its purest form: one question, one answer, done. The new rule is simple: the first output is never the final product. It is the starting point for a dynamic rally where each exchange sharpens the idea.

Consider those “five marketing ideas.” Instead of accepting the list, try these follow-ups:

  • “Challenge the key assumption here. What if our eco-conscious audience actively dislikes typical fitness influencers?”
  • “Now, argue for the opposite perspective. Why might a purely digital strategy fail for this demographic?”
  • “Combine the ideas of ‘community events’ and ‘sustainability’ into a single, novel offline campaign concept.”

This is strategic de-risking. In a traditional setting, pressure-testing an idea costs meeting time, consultant fees, or a failed pilot. With iterative sparring, you can stress-test ideas for free before a single dollar is spent on execution.
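
To make the rally concrete, here is a minimal sketch of what an iterative sparring session can look like when scripted, assuming the OpenAI Python SDK; the model name and prompts are placeholders, and any chat API that accepts a running message history works the same way:

    from openai import OpenAI  # assumes the OpenAI Python SDK is installed and configured

    client = OpenAI()
    MODEL = "gpt-4o"  # placeholder; substitute whatever model you actually use
    messages = []     # the running history is the whole point: every follow-up sees what came before

    def spar(user_turn: str) -> str:
        """Send one turn and keep the full history so each round builds on the last."""
        messages.append({"role": "user", "content": user_turn})
        reply = client.chat.completions.create(model=MODEL, messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer

    # One opening question, then three sparring rounds -- never stop at exchange one.
    spar("Give me five marketing ideas for our recycled ocean plastic running shoe.")
    spar("Challenge the key assumption here. What if our eco-conscious audience dislikes typical fitness influencers?")
    spar("Now argue for the opposite perspective. Why might a purely digital strategy fail for this demographic?")
    print(spar("Combine 'community events' and 'sustainability' into one novel offline campaign concept."))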

Pro Tip: When to stop iterating? Stop when you can clearly articulate why this approach will work, what could go wrong, and what you'd do differently. If you can't answer all three, keep sparring.

I saw this firsthand when working with a hospitality client on their Norwegian glamping site. Instead of simply asking the AI to “write a welcome email,” we sparred back and forth. The result wasn't an email at all; it was a comprehensive Welcome Page & FAQ that reduced guest inquiries and improved the arrival experience. Read the full case study.

Shift 2: From “Vague Instruction” to “Expert Delegation”

You would never walk up to a brilliant new hire and say “promote this shoe” with zero context. You would brief them: audience data, brand guidelines, a clear objective. Yet this is exactly how most people interact with AI. The fix: treat the AI like an expert you are delegating to, using the RCGI Framework: Role, Context, Goal, Interview.

R = Role

Tell the AI who it is. "You are a senior brand strategist with 15 years of experience in sustainable consumer goods."

C = Context

Provide essential background. "Our company just launched a running shoe made from 100% recycled ocean plastic. Our target audience is environmentally-conscious millennials."

G = Goal

State the desired outcome. "Develop three distinct campaign concepts for a social media launch."

I = Interview

Command the AI to ask you questions before it starts. "Before you begin, ask me any clarifying questions you need to do your best work."

Before: Vague Prompt

“Give me some marketing ideas for our new shoe.”

Result: a generic list that could apply to any product in any industry. You spend the next hour rewriting it from scratch.

After: RCGI Prompt

“You are a senior brand strategist with 15 years of experience in sustainable consumer goods. Our company just launched a running shoe made from 100% recycled ocean plastic. Our target audience is eco-conscious millennials on Instagram and TikTok. Develop three distinct campaign concepts. Before you begin, ask me any clarifying questions.”

Result: three distinct concepts tailored to your audience, plus follow-up questions that sharpen your own thinking.

The “Interview” step is the secret weapon. The AI might come back and ask: “What emotion should the campaign evoke: urgency around the climate crisis, or joy and empowerment?” or “Who are the main competitors we need to differentiate from?” or “What is the single most important feature of the shoe to highlight?” By forcing the AI to ask them, you force yourself to think more clearly about what you actually need.
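
If you find yourself rebuilding the same briefing over and over, it can help to make the structure explicit. Here is a minimal sketch in Python; the RCGIBrief class and its field names are mine, chosen to mirror Role, Context, Goal, and Interview, not part of any library:

    from dataclasses import dataclass

    @dataclass
    class RCGIBrief:
        """A delegation brief in four parts: Role, Context, Goal, Interview."""
        role: str
        context: str
        goal: str
        interview: str = "Before you begin, ask me any clarifying questions you need to do your best work."

        def to_prompt(self) -> str:
            # Assemble the four parts in the same order you would brief a new hire.
            return "\n\n".join([f"You are {self.role}.", self.context, self.goal, self.interview])

    brief = RCGIBrief(
        role="a senior brand strategist with 15 years of experience in sustainable consumer goods",
        context="Our company just launched a running shoe made from 100% recycled ocean plastic. "
                "Our target audience is eco-conscious millennials on Instagram and TikTok.",
        goal="Develop three distinct campaign concepts for a social media launch.",
    )
    print(brief.to_prompt())  # paste into whichever assistant you use, or send it through an API

The point is not the code; it is that the four slots force you to write down the briefing you would otherwise skip.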

This is how we built custom AI tools for The Simba Project. The RCGI framework shaped our process: by clearly defining the role, context, and goals for each tool, and iterating through the “Interview” phase, we went from concept to a working tool in under two weeks. Read the full case study.

Pro Tip: Cross-Platform Validation. One of my favorite techniques: take the output from one AI platform and paste it into a different one with instructions to critique it. Different models have different training data, different biases, and different blind spots. You will be surprised how often the second model catches something the first one missed.

Try this prompt (it works better than you might expect):

“Please review and harshly criticize the following content, as if you were a Soviet Ballet Instructor.”

For an even more systematic approach, add this to your system instructions or custom instructions:

“After every response, generate two additional sections. First, a harsh self-critique of the response. Second, a section specifically surfacing blind spots, overlooked considerations, and alternative approaches that were not explored.”

You will be surprised how often the AI catches its own mistakes when explicitly asked to look for them.
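
If you want to script the cross-check rather than copy-paste between browser tabs, here is a rough sketch assuming both the OpenAI and Anthropic Python SDKs; the model IDs are placeholders to replace with whatever you have access to:

    from openai import OpenAI        # model A drafts the content
    from anthropic import Anthropic  # model B, from a different provider, plays the harsh critic

    DRAFT_MODEL = "gpt-4o"          # placeholder model IDs; substitute current ones
    CRITIC_MODEL = "claude-sonnet-4-5"

    def draft(prompt: str) -> str:
        reply = OpenAI().chat.completions.create(model=DRAFT_MODEL,
                                                 messages=[{"role": "user", "content": prompt}])
        return reply.choices[0].message.content

    def critique(content: str) -> str:
        # A second model, with different training data and blind spots, reviews the first one's work.
        msg = Anthropic().messages.create(
            model=CRITIC_MODEL,
            max_tokens=1024,
            messages=[{"role": "user",
                       "content": "Please review and harshly criticize the following content, "
                                  "as if you were a Soviet Ballet Instructor:\n\n" + content}],
        )
        return msg.content[0].text

    campaign = draft("Write a launch post for our recycled ocean plastic running shoe.")
    print(critique(campaign))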

Shift 3: From “Task Automation” to “Capability Augmentation”

Most people use AI to do old tasks faster: drafting emails, summarizing documents, generating boilerplate. That is valuable, but it is like buying a racing car and only driving it to the supermarket. The true ROI is breakthrough ideas: the ability to think in ways you could not before. This requires a two-phase approach:

Phase 1: Divergent Thinking

Ask the AI to generate 20 wildly different ideas, explicitly ignoring all constraints: budget, feasibility, convention. The goal is raw creative volume. Let the AI go wide.

Phase 2: Convergent Thinking

Now, bring your human judgment. Group the ideas into themes. Select the three strongest. Ask the AI to build a mini-business case for each, including risks and resource requirements. This is where you filter the signal from the noise.

Go wide first (divergent), then apply human judgment to select the strongest ideas (convergent). The AI generates volume; you provide direction.
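
As a sketch of how the two phases chain together in one scripted session (same assumptions as above: OpenAI Python SDK, placeholder model name), note that the human selection step sits between the two calls:

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o"  # placeholder

    def ask(history):
        reply = client.chat.completions.create(model=MODEL, messages=history)
        return reply.choices[0].message.content

    # Phase 1: diverge -- raw volume, constraints explicitly off.
    history = [{"role": "user", "content":
                "Generate 20 wildly different campaign ideas for our recycled ocean plastic running shoe. "
                "Ignore budget, feasibility, and convention entirely."}]
    ideas = ask(history)
    history.append({"role": "assistant", "content": ideas})

    # Phase 2: converge -- human judgment happens here: read the 20 ideas, group them, pick the strongest.
    # The numbers below are illustrative; in practice they come from you, not the script.
    history.append({"role": "user", "content":
                    "I have grouped these into themes and selected ideas 3, 7, and 12. For each, build a "
                    "mini business case covering target audience, key risks, and resource requirements."})
    print(ask(history))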

What the machine cannot do is decide what matters. It can generate, analyze, and synthesize at remarkable speed. But the judgment to direct it, to decide which questions to ask, which ideas to pursue, and when to change course, that is still on you.

This is how Sophia Nexus was born. Through divergent thinking, we generated over 30 potential features and concepts. Then, through convergent analysis, we selected the 3 highest-impact features to build the MVP around. Read the full case study.

From theory to architecture: These three shifts are baked into my Human↔GenAI Co-creation Ecosystem: 10 specialized applications that implement these collaboration patterns at every layer, including a coaching tool that analyzes your AI conversations and surfaces what worked, what did not, and how to improve.

Conclusion: From Individual Skill to Team Superpower

Let's recap. The path from generic AI output to genuine creative partnership requires three conscious overrides of our satisficing instinct:

  1. Iterative Sparring: Don't accept the first answer. Use it as the starting point for a dynamic conversation to de-risk your ideas.
  2. Expert Delegation: Brief the AI like a senior hire using the RCGI framework. The quality of what you put in sets the ceiling for what comes out.
  3. Capability Augmentation: Stop using AI just to do old tasks faster. Use it to think in new ways: diverge wildly, then converge strategically.

These shifts work well on your own. The real leverage comes when they become a shared practice across a team. That is exactly what the AI Co-Creation Lab is designed for: a half-day, hands-on workshop where your team learns and practices these mindset shifts together.

Key Takeaways

Generic AI output ("workslop") stems from satisficing: our instinct to accept the first good-enough answer.

If any of these ideas changed how you work with AI, I would genuinely love to hear about it. What shifted? What did you try? These conversations are often the most rewarding part of writing. Drop me a message or reach out at dan@dcarral.org.

Ready to Put These Shifts Into Practice?

Whether you want to explore these ideas 1:1 or bring them to your team, let's talk. Book a free discovery call or drop me a message.