Our offices

  • Exceev Consulting
    61 Rue de Lyon
    75012, Paris, France
  • Exceev Technology
    332 Bd Brahim Roudani
    20330, Casablanca, Morocco

2-Week AI Discovery Sprint for Founders Outside Tech

If you're a founder outside tech, “do something with AI” can feel like a trap.

You can spend weeks comparing tools, collect ten demos, and still have no answer to the only question that matters: what workflow are we changing, and how will we know it's working?

That's why a discovery sprint is useful. It forces focus. Two weeks is long enough to validate feasibility and short enough to avoid building the wrong thing.

What you'll learn

  • How to pick a workflow that is “AI-shaped” (and avoid the ones that aren't)
  • Who needs to be involved so you don't get blocked by security or operations later
  • A day-by-day agenda for a two-week sprint
  • The deliverables you should expect at the end (memo, backlog, risks, next step)
  • A copy/paste discovery brief you can use with your team or a consultant

TL;DR

A two-week AI discovery sprint helps non-technical founders avoid tool-chasing by forcing a clear workflow, data boundary, and success metrics upfront. In 10 working days, you can run a lightweight evaluation on real examples, map risks (security, quality, adoption), and finish with a delivery backlog and a go/no-go decision for a build sprint.

What this sprint is (and what it is not)

This is not a hackathon, and it's not a vendor selection project disguised as “strategy.”

It is a short, structured process to answer:

  • What workflow are we changing?
  • What data can we use (and what data is off-limits)?
  • How will we measure quality and failure?
  • What should we build first?
  • Who owns it after launch?

If you can answer those, you can fund the next step with confidence.

Who needs to be in the room (so you don't stall in week 3)

Founders often try to run discovery with only “product + an engineer.” It works until it doesn't.

Minimum roles for a cross-industry founder team:

  • Workflow owner: the person who feels the pain daily and can validate outputs.
  • Data owner: someone who knows where the data lives and what can be shared.
  • Security/compliance input (even lightweight): to avoid rework on boundaries.
  • Delivery lead (internal or external): to translate decisions into a backlog.

If you don't have a dedicated security person, you still need someone to say “yes, this data is allowed” or “no, this crosses a line.”

The interview questions that make discovery useful

Discovery goes wrong when everyone stays at the “AI strategy” level. You want operational detail.

Questions that work well with non-technical teams:

  • “Show me the last 10 examples of this workflow.” (real tickets, real documents, real emails)
  • “What does a perfect output look like?” (format, tone, required fields)
  • “What would be a dangerous mistake?” (wrong policy, wrong customer, wrong number)
  • “When it’s not perfect, what do people do today?” (manual steps, escalation paths)
  • “What data are we absolutely not allowed to touch?” (PII, contracts, financials)

If you can’t answer these, you don’t have enough clarity for a build sprint. That’s fine. It just means discovery is doing its job.

The 2-week agenda (10 working days)

Use this as a day-by-day guide. Adapt it, but keep the timebox.

Days 1-2: Pick the workflow and define “good”

  • Choose one workflow with clear value (support replies, document intake, internal search).
  • Write acceptance criteria: what is a good output and what is unacceptable.
  • Decide the metric (cycle time, deflection, error rate, time-to-decision).

Days 3-4: Map the data boundary

  • List data sources and update frequency.
  • Identify sensitive classes (PII, customer data, regulated docs).
  • Decide what can be logged and retained.
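One way to make the boundary concrete is a small redaction pass that runs before anything leaves your systems. A minimal sketch, assuming email addresses are one of the sensitive classes your data owner flagged and that records arrive as simple field/value pairs (the patterns, field names, and allowlist here are illustrative, not a complete PII filter):

```python
import re

# Illustrative patterns only -- a real boundary check covers the sensitive
# classes your data owner actually flagged (PII, contracts, financials).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Mask sensitive spans before text is logged or sent to a model."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def check_boundary(record: dict, allowed_fields: set) -> dict:
    """Keep only fields the data owner approved, redacting free text."""
    return {k: redact(str(v)) for k, v in record.items() if k in allowed_fields}
```

The useful part is not the regexes; it is that "what can be shared" becomes an explicit allowlist someone owns, instead of a judgment call made per request.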

Days 5-7: Build a tiny evaluation set

  • Collect 30 to 100 real examples (or synthetic equivalents if necessary).
  • Score what “good” looks like (rubric or labels).
  • Define a minimum threshold to ship a pilot.
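The threshold only becomes actionable once scoring is mechanical. A minimal sketch, assuming a labeled golden set and a binary pass/fail rubric (the example entries, field names, and the 0.8 threshold are placeholders for whatever your team agrees on):

```python
# A golden-set entry pairs a real input with what "good" must contain.
golden_set = [
    {"input": "refund request, order #1", "required": ["refund policy", "order"]},
    {"input": "shipping delay question",  "required": ["apology", "new eta"]},
]

def passes_rubric(output: str, required: list) -> bool:
    """Binary rubric: the output must mention every required element."""
    return all(item.lower() in output.lower() for item in required)

def pass_rate(outputs: list, golden: list) -> float:
    """Fraction of golden-set examples the candidate outputs satisfy."""
    hits = sum(passes_rubric(o, g["required"]) for o, g in zip(outputs, golden))
    return hits / len(golden)

SHIP_THRESHOLD = 0.8  # placeholder -- agree on this number before day 10

def go_no_go(rate: float) -> str:
    return "pilot" if rate >= SHIP_THRESHOLD else "narrow scope or more discovery"
```

A string-matching rubric is crude, but it forces the team to write down what "good" contains, which is the real deliverable of days 5-7.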

Days 8-9: Sketch the first build sprint

  • Decide architecture direction (often RAG first for knowledge problems).
  • Create the backlog for a 2-week build sprint.
  • Identify dependencies and decisions (access, tools, owners).
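"RAG first" just means: retrieve the few documents relevant to a question, then have the model answer from those instead of from memory. A toy sketch of the retrieval half, using word overlap in place of a real embedding index (purely illustrative; a build sprint would swap in a vector store and an actual model call):

```python
import re

def tokens(text: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> int:
    """Toy relevance: how many query words appear in the document."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Ground the answer in retrieved context only."""
    context = "\n---\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The architectural point survives the toy retriever: answers are constrained to documents you control, which is why RAG is usually the lower-risk first build for knowledge problems.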

Day 10: Go/no-go and next step

  • Present findings and risks.
  • Decide: build sprint, more discovery, or stop.
  • Assign an owner and a date for the next checkpoint.

What to do with “maybe” results (the honest path)

Not every sprint ends with “yes, build it.” Sometimes you find:

  • data is too messy right now
  • the workflow is valuable but the integration surface area is too big
  • quality is achievable, but only with a narrower scope

That’s still success if you can state the next move clearly:

  • narrow the workflow (draft-only instead of automation)
  • fix one upstream data problem before building
  • run a second discovery focused on permissions and compliance

The expensive mistake is pretending you got a green light when you actually got uncertainty.

What you should have at the end (deliverables)

A discovery sprint is successful if it produces artifacts you can reuse:

  • A 1-page decision memo (workflow, boundary, approach, risks)
  • A prioritized backlog for sprint 1 and sprint 2
  • A lightweight evaluation plan (golden set + thresholds)
  • A risk register (security, adoption, reliability) with owners

The day-10 readout (keep it short and decisive)

Founders often end discovery with a vague meeting. Don’t. End with a clear readout that makes the next step obvious.

A simple structure:

  • What we learned (workflow, constraints, feasibility)
  • What we built or tested (evaluation baseline, example outputs)
  • Risks we uncovered (and the one biggest unknown)
  • Recommendation: build sprint / more discovery / stop
  • If build: a 2-week backlog and the owners for each decision

If the sprint doesn’t produce a recommendation you can act on, it wasn’t discovery. It was conversation.

Copy/paste: the discovery brief template

Use this brief with your internal team or a consultant. It keeps the sprint grounded.

Business context (2-3 sentences):
Workflow to improve:
Who uses it today:
What “good” looks like:

Constraints:
- Data that must never leave:
- Compliance requirements:
- Latency expectations:
- Budget/timebox:

Data sources:
- Source 1:
- Source 2:
- Update frequency:
- Access control model:

Evaluation:
- Example set owner:
- Scoring approach:
- Ship threshold:

Decision checkpoint:
- Who decides:
- By when:

If you're leading the sprint, the weekly update script is simple: what we learned, what we shipped (if anything), what is blocked, and what decision we need.

Common traps (and how to avoid them)

  • Starting with the tool. Mitigation: start with workflow + metric, then pick tooling.
  • No data owner in the room. Mitigation: name one person accountable for data access decisions.
  • No threshold for “good.” Mitigation: create a small eval set and agree on minimum quality.
  • Treating discovery as delivery. Mitigation: keep it short and ship a backlog, not a product.

For cybersecurity-sensitive workstreams, add threat modeling, secrets management, and vendor risk review before you scale.

Discovery that produces decisions, not meetings

A discovery sprint works when it is short, opinionated, and artifact-driven. Two weeks should leave you with a decision memo, an evaluation plan, and a backlog you can fund. If it doesn’t, you didn’t run discovery — you ran meetings.

The win is clarity: what you’re building, what you’re not building, and what would have to be true for the project to scale safely. Need help running a discovery sprint? Let’s talk.


