Our offices

  • Exceev Consulting
    61 Rue de Lyon
    75012, Paris, France
  • Exceev Technology
    332 Bd Brahim Roudani
    20330, Casablanca, Morocco

90-Day AI Upskilling Plan for the Tech Workforce

Workforce Enablement

Most upskilling plans fail for one reason: they confuse learning with consuming.

Teams watch videos, skim docs, and collect “AI tools.” Then nothing changes in delivery.

An upskilling plan that works has to be artifact-driven: ship something small, measure it, and build habits that survive tool changes.

This plan is designed for a mixed tech workforce: developers, QA, DevOps/SRE, PMs, and tech leads in startups, SMBs, and enterprise teams.

What you'll learn

  • What to focus on first (skills that compound across tools)
  • A 90-day schedule split into 3 phases with weekly deliverables
  • Role-based tracks (dev, QA, DevOps/SRE, PM)
  • A copy/paste training plan template you can run with your team

TL;DR

A 90-day AI upskilling plan works when it’s tied to delivery. Split training into three phases: fundamentals (weeks 1-4), build and evaluation (weeks 5-8), and production habits (weeks 9-12). Each week should produce an artifact: a workflow demo, an evaluation rubric, a runbook, or a retrospective. Role-based tracks help developers, QA, DevOps, and PMs learn the same system from different angles.

The principle: learn by shipping, not by collecting tools

Your team does not need to memorize every model name.

They need repeatable skills:

  • turning a workflow into acceptance criteria,
  • writing an evaluation rubric,
  • shipping with guardrails,
  • and operating the system when behavior changes.
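These skills can be made concrete early. As a minimal sketch (the criteria names and weights are illustrative, not a prescribed format), an evaluation rubric is just weighted acceptance criteria that a reviewer scores from 0 to 1:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One acceptance criterion a reviewer scores from 0.0 to 1.0."""
    name: str
    description: str
    weight: float = 1.0

@dataclass
class Rubric:
    """A weighted rubric over the criteria for one workflow."""
    criteria: list = field(default_factory=list)

    def score(self, marks: dict) -> float:
        """Weighted average of per-criterion marks; missing marks count as 0."""
        total = sum(c.weight for c in self.criteria)
        return sum(c.weight * marks.get(c.name, 0.0) for c in self.criteria) / total

# Example rubric for a support-reply workflow (names are hypothetical).
rubric = Rubric([
    Criterion("accuracy", "Reply addresses the ticket's actual issue"),
    Criterion("tone", "Reply follows the support style guide", weight=0.5),
    Criterion("safety", "No internal data leaked into the reply", weight=2.0),
])
print(round(rubric.score({"accuracy": 1.0, "tone": 0.8, "safety": 1.0}), 2))  # → 0.97
```

The point of writing it down this way is that the rubric survives tool changes: whatever assistant or model you swap in, the criteria and weights stay.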

The 90-day plan (3 phases)

Each phase has a clear “definition of done.”

Phase 1 (Weeks 1-4): fundamentals and workflow selection

  • Pick one workflow the team can improve (support replies, ticket triage, QA test generation).
  • Define baseline metrics (cycle time, defect rate, rework).
  • Learn safe usage: data boundary, what not to paste, secrets handling.
  • Deliverable: a 1-page workflow brief with acceptance criteria.

Phase 2 (Weeks 5-8): build + evaluation

  • Build a thin slice demo (even internal-only).
  • Create a golden set of real examples and a scoring rubric.
  • Add basic observability: logs, cost, latency, and quality checks.
  • Deliverable: evaluation report showing baseline vs current.
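A golden set plus an evaluation run is less work than it sounds. Here is a minimal sketch for a ticket-triage workflow, with made-up examples and stand-in triage functions (exact-match scoring for simplicity; a real rubric would score partial credit):

```python
# Golden set: real, representative inputs with expected outputs.
golden_set = [
    {"input": "Reset my password", "expected": "password_reset"},
    {"input": "Refund for order 1234", "expected": "billing"},
    {"input": "App crashes on login", "expected": "bug_report"},
]

def baseline_triage(text):
    """Old keyword rule (illustrative baseline)."""
    return "billing" if "refund" in text.lower() else "other"

def current_triage(text):
    """New assisted workflow (illustrative stand-in for the real call)."""
    t = text.lower()
    if "password" in t: return "password_reset"
    if "refund" in t:   return "billing"
    if "crash" in t:    return "bug_report"
    return "other"

def accuracy(triage_fn):
    """Fraction of golden-set examples the function gets right."""
    hits = sum(triage_fn(ex["input"]) == ex["expected"] for ex in golden_set)
    return hits / len(golden_set)

print(f"baseline={accuracy(baseline_triage):.2f} current={accuracy(current_triage):.2f}")
```

The evaluation report for the phase deliverable is exactly this comparison, run over a golden set large enough to trust, plus notes on which examples regressed.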

Phase 3 (Weeks 9-12): production habits

  • Write the runbook: rollback plan, incident path, and ownership.
  • Establish a change process (what counts as a material change).
  • Train the next group of users and document the SOP.
  • Deliverable: an “operating pack” that survives turnover.

Week-by-week deliverables (so people know what “done” looks like)

If you want the plan to feel real, name the weekly artifact. Here’s a concrete example schedule you can adapt:

  • Week 1: workflow brief + baseline metric + owners
  • Week 2: data boundary notes + “safe usage” rules for prompts/logging
  • Week 3: thin slice prototype (internal-only is fine) + demo
  • Week 4: first golden set + first rubric draft
  • Week 5: evaluation run + before/after diff on failures
  • Week 6: rollout plan + human handoff rules
  • Week 7: monitoring basics (latency/cost/quality sampling) + dashboard notes
  • Week 8: incident drill + rollback test
  • Week 9: SOP for users + “what to do when it’s wrong” guidance
  • Week 10: maintenance owner + change triggers (what requires re-eval)
  • Week 11: second workflow selection (only if week 1 workflow is stable)
  • Week 12: retrospective + updated playbook for the next 90 days

This prevents the most common failure mode: “We trained for 90 days and can’t point to anything that shipped.”

A realistic weekly cadence (what goes on the calendar)

Upskilling sticks when it shows up as a routine, not a “nice to have.”

A cadence that works for most teams:

  • One 30-minute weekly learning session (shared language, one concept).
  • One 60-minute working session (apply it to the target workflow).
  • One weekly demo/review (show the artifact: eval results, runbook update, or thin slice demo).

If you can only do one thing, do the weekly demo. It forces progress and gives leadership something concrete to react to.

How to measure progress (without turning it into surveillance)

Upskilling is not “everyone used the tool.” It’s “delivery improved without adding risk.”

Pick a small set of before/after signals for the target workflow:

  • cycle time (how long from request to usable output)
  • defect rate or rework rate (how often you had to redo outputs)
  • escalation rate (how often humans had to step in)
  • incident count (how often the workflow caused operational pain)

Then pair those with one qualitative check: “What do we now do faster or more safely than 90 days ago?”

If you measure only activity (posts, hours, tool usage), you’ll get busywork. If you measure only outcomes, you’ll miss whether the team built durable habits.
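The before/after comparison fits in a few lines. A sketch with invented numbers (plug in your own baseline and current measurements; lower is better for all three metrics here):

```python
# Before/after signals for one workflow. All numbers are illustrative.
baseline = {"cycle_time_h": 6.0, "rework_rate": 0.30, "escalation_rate": 0.20}
current  = {"cycle_time_h": 3.5, "rework_rate": 0.22, "escalation_rate": 0.21}

for metric in baseline:
    delta = current[metric] - baseline[metric]
    pct = 100 * delta / baseline[metric]
    flag = "improved" if delta < 0 else "watch"  # lower is better for these metrics
    print(f"{metric}: {baseline[metric]} -> {current[metric]} ({pct:+.0f}%, {flag})")
```

Anything flagged "watch" (like the escalation rate ticking up) goes into the weekly demo as a question, not a verdict.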

Role-based tracks (same system, different angles)

You want shared language across roles, but different practical tasks.

Developers

  • Build a small feature with an eval harness.
  • Learn guardrails: test generation, linting, and code review prompts.
  • Practice “diff discipline”: make the assistant show its work, not just output code.

QA

  • Turn acceptance criteria into test plans.
  • Use AI to draft tests, then validate for flakiness and coverage.
  • Build a regression set that includes AI edge cases (hallucinations, refusals).
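AI edge cases belong in the regression set as ordinary tests. A minimal pytest-style sketch, where `draft_reply` is a hypothetical stand-in for the assisted workflow (replace it with the real call):

```python
def draft_reply(ticket):
    """Stand-in so the file runs; replace with the real assisted workflow."""
    if "order 99999" in ticket:
        return "I can't find that order; escalating to a human agent."
    if "legal advice" in ticket:
        return "I can't help with that; routing to support."
    return "Here is how to reset your password: ..."

def test_no_hallucinated_order():
    # Hallucination check: an unknown order id must never be "confirmed".
    reply = draft_reply("Where is order 99999?")
    assert "confirmed" not in reply.lower()

def test_refuses_out_of_scope():
    # Refusal check: out-of-scope requests must be handed off, not answered.
    reply = draft_reply("Can you give me legal advice?")
    assert "can't" in reply.lower()

test_no_hallucinated_order()
test_refuses_out_of_scope()
print("regression set passed")
```

These run in CI next to the normal test suite, so a model or prompt change that reintroduces a hallucination fails the build like any other regression.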

DevOps/SRE

  • Add monitoring for latency/cost/quality drift.
  • Define secrets management and logging retention.
  • Build a rollback path and a kill switch.
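A kill switch can be as simple as a flag checked on every request, with an automatic fall back to the pre-existing manual path. A sketch with illustrative flag and function names:

```python
# Kill switch as a config flag: when the AI path is disabled (or errors),
# fall back to the manual workflow instead of failing the request.
FLAGS = {"ai_triage_enabled": True}

def ai_triage(ticket):
    raise RuntimeError("model behavior changed")  # simulate an incident

def manual_triage(ticket):
    return "queued_for_human"  # the workflow that existed before the AI path

def triage(ticket):
    if not FLAGS["ai_triage_enabled"]:
        return manual_triage(ticket)
    try:
        return ai_triage(ticket)
    except Exception:
        FLAGS["ai_triage_enabled"] = False  # trip the switch; alert on-call
        return manual_triage(ticket)

print(triage("Refund for order 1234"))   # first failure trips the switch
print(FLAGS["ai_triage_enabled"])        # switch now off until re-enabled
```

In production the flag would live in a config service rather than process memory, but the shape is the same: every request has a non-AI path it can take immediately.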

PMs and product

  • Define the workflow, metric, and “definition of done.”
  • Run stakeholder reviews based on evidence (eval results), not opinions.
  • Keep a decision log and a backlog that prevents scope creep.

Open-source builders (optional track)

If part of your workforce is active in open source, you can channel that energy into useful, reviewable artifacts instead of random tooling experiments.

Good open-source-flavored deliverables:

  • a small internal template repo for evaluation specs and golden sets
  • a reusable “safe usage” policy snippet for prompts/logging
  • improvements to an internal linting/checklist tool that enforces guardrails

The goal is not to “build your own model.” The goal is to make your delivery system more reliable and easier to onboard to.

Copy/paste: a 90-day upskilling plan template

Use this as a team plan and fill it in weekly.

90-day upskilling plan

Target workflow:
Owner:
Baseline metric:

Week 1-4 deliverables:
- Workflow brief + acceptance criteria
- Data boundary rules + safe usage notes

Week 5-8 deliverables:
- Thin slice demo
- Golden set + evaluation rubric
- Baseline vs current eval report

Week 9-12 deliverables:
- Runbook + rollback plan
- Change process + ownership
- Training + SOP for new users

Manager checklist (what leadership must do for this to work)

Upskilling fails when leadership wants results but doesn’t protect time.

If you manage a team, your job is to make the habits possible:

  • protect the weekly demo (no cancellations)
  • pick one workflow and keep it stable for long enough to learn
  • reward evidence (eval results, runbooks, rollback drills), not “AI excitement”
  • stop tool sprawl (one approved stack for the program)

If you do this, people will ship. If you don’t, the plan becomes “learning in evenings,” and it dies quietly.

Common failure modes

  • Training is optional and never scheduled. Fix: put it on the calendar with deliverables.
  • People learn tools, not workflows. Fix: anchor on one workflow and one metric.
  • No evaluation. Fix: require a golden set before you call anything “done.”

Changed delivery habits are the goal

Upskilling success looks like changed delivery habits: better cycle time, fewer defects, clearer ownership, and a team that can evaluate and operate AI workflows without depending on one champion. Need help designing your team's AI upskilling program? Let's talk.
