Driver development plan
How to build a driver development plan.
Aimless practice is the default state. A development plan is the alternative — a structured feedback loop anchored on five framework verbs (diagnose, prescribe, execute, measure, adapt) and three cadences (per-session, weekly, event-cycle). The article walks through what a plan is, the framework, the cadences, how LAP automates each step, and a DIY template that runs on any telemetry tool you already own. The framework belongs to the reader regardless of which tool fills the rows.
The question
Two readers arrive at this article from different routes.
The first is the serious sim racer who has trained hundreds of hours, has plateaued at a specific licence class, and is wondering why more practice is not producing more progress. They suspect the issue is not effort but structure. They are right.
The second is the track-day driver who runs four to eight events a year and wants to compress each weekend’s learning into the next event, not lose half the lessons between events. The work between sessions is what most determines the next session — and most drivers do that work in their head, not on paper.
Aimless practice is the default state: the driver logs seat time, notes that times got better or worse, and moves on. A development plan is the alternative, a structured feedback loop tied to data and anchored on five framework verbs: diagnose, prescribe, execute, measure, adapt. The sections that follow walk through what a plan IS, the framework, the cadences that anchor it to a calendar, how LAP automates each step, and a DIY template for readers who use a different telemetry tool entirely. The DIY template is deliberate: the framework belongs to the reader regardless of which tool they pay for.
What a development plan is
The default state of practice is aimless. Most drivers, sim or real-world, log seat time without a structure: they drive, they note that times got better or worse, they move on. After a hundred sessions of this, the patterns are no clearer than they were after the first ten. Notes were never written. Drills were never named. The session that should have proven a hypothesis instead became another data point in the same fog. The frustration is not a character flaw. Aimless practice produces the symptom — the plateau, the lap-time variance, the inability to say what this week’s session was supposed to teach — because it lacks the three properties that make practice productive.
A development plan has all three. It is tied to data, not to mood: every focus comes from the trace, not from the feeling that “the rear felt loose.” It is structured by a framework (diagnose, prescribe, execute, measure, adapt) that names the steps and forbids skipping any of them. And it is reviewed on a cadence: per-session, per-week, and per-event-cycle, with a written record at every level so the session of week six can argue with the session of week one. In LAP, rule-based prescriptions live in the coaching library; a non-LAP driver needs something that plays the same role, because every development plan needs a credible source for what to drill and why.
The doctrinal point is simpler than the framework. LAP’s internal commitment for shipping a coaching insight is that the loop closes — observation, evidence, likely cause, prescribed action, measurable success condition, review. We hold the same standard for our coaching as we hold for your practice. If the loop is open at any step, the insight is not ready to ship and the development plan is not ready to execute. The remaining five sections walk through how to close the loop on your own driving.
The framework
Five verbs. Each one is a discrete decision the driver makes at a different point in the practice cycle. Skipping any one of them leaves the loop open; cycling through all five closes it.
Diagnose. Read the data after the session, in front of the trace, never in the seat. The diagnose step asks one question: what is the weakness SHAPE? Not “I am slow at Sebring T1,” which is a symptom; instead “my brake-pressure trace shows a stab pattern with three inflection points where my reference shows a single continuous taper” — that is a shape. Shapes recur. The same brake-release shape that shows up at Sebring T1 will show up at Spa Les Combes and Suzuka Spoon, in sim and in real-world capture alike — our cross-platform bridge is what unifies both surfaces under one diagnose step. The trail-braking explainer walks through one shape end-to-end.
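To make "shape" concrete, here is a minimal sketch that counts direction reversals in a brake-release trace: a clean taper has none, a stab pattern has several. The smoothing window and NumPy plumbing are illustrative assumptions, not LAP's detector.

```python
import numpy as np

def release_reversals(pressure: np.ndarray, smooth: int = 5) -> int:
    """Count direction reversals in a brake-release trace.

    A clean taper decreases monotonically, so its first derivative
    never changes sign; every sign change is one 'stab' inflection.
    `pressure` is the brake channel sliced to the release phase
    (peak pressure down to zero), in whatever units your tool exports.
    """
    # Light moving-average smoothing so sensor noise does not
    # masquerade as technique.
    kernel = np.ones(smooth) / smooth
    p = np.convolve(pressure, kernel, mode="valid")
    signs = np.sign(np.diff(p))
    signs = signs[signs != 0]          # ignore flat samples
    return int(np.sum(signs[1:] != signs[:-1]))

# 0 reversals: the single continuous taper the reference shows.
# 2 or more: the stab shape worth writing a prescription against.
```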
Prescribe. Pick exactly one thing to drill, with a success criterion that is observable and measurable. Not “smooth it out”; “achieve a single-derivative brake-pressure trace through release with two-tenths of a second of overlap on steering input by Thursday’s session.” The prescription is rule-based: it draws from a coaching library of named drills, and every prescription traces to a written rule that a human author edited and evidence-backed by hand — never to a language-model output. If you cannot state the success criterion in one sentence, the prescription is not yet specific enough to drill against.
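One low-tech way to enforce that specificity is to store the prescription as structured fields rather than free text; a field you cannot fill is a prescription that is not ready. The field names below are illustrative, not LAP's schema.

```python
from dataclasses import dataclass

@dataclass
class Prescription:
    """One drill, one observable criterion, one deadline."""
    drill: str         # named drill from your coaching source
    metric: str        # the channel-level quantity you will read
    threshold: float   # pass line, in the metric's units
    attempts: str      # e.g. "3 of 5": consistency, not perfection
    deadline: str      # which session measures it

this_week = Prescription(
    drill="single-taper brake release",
    metric="brake/steer overlap (s)",
    threshold=0.20,
    attempts="3 of 5",
    deadline="Thursday's session",
)
```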
Execute. Run the drill in the next session. One corner, one repetition pattern, one focus. The execute step is the only step in the loop where you are in the seat — diagnose, prescribe, measure, and adapt all happen at the desk. Capture is offline-first: the data lands locally even if your network drops mid-session, and syncs when the connection returns. Five repetitions of one drill produce better data than fifty laps of unfocused running, because the metric you are measuring is the same five times over.
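For readers building their own capture path, here is a minimal sketch of the offline-first pattern this paragraph describes, assuming a JSON-lines local file and a transport you supply; it shows the pattern, not LAP's capture stack.

```python
import json
import pathlib

class OfflineFirstLog:
    """Append every sample locally first; treat sync as a separate,
    retryable step. A dropped network mid-session costs nothing."""

    def __init__(self, path: str = "session.jsonl"):
        self.path = pathlib.Path(path)

    def record(self, sample: dict) -> None:
        # Local append is the source of truth.
        with self.path.open("a") as f:
            f.write(json.dumps(sample) + "\n")

    def sync(self, upload) -> bool:
        # Called opportunistically once the connection returns;
        # `upload` is whatever transport your stack provides.
        try:
            upload(self.path.read_bytes())
            return True
        except OSError:
            return False   # keep the local file; retry later
```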
Measure. Did the success criterion land? The metric must be observable and quantifiable. “Felt better” is not a measure; “brake-pressure trace overlap with steering input crossed 0.20 seconds on three of five attempts” is. Three of five matters more than five of five here — consistency under varied conditions, not perfection in clean ones, is what the metric should capture. The driver progression view plots the metric over time so the answer is visible at a glance. If you cannot measure the prescribed change, the prescription was malformed — go back to the prescribe step and rewrite the success criterion before re-executing.
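The overlap criterion reduces to a few lines of arithmetic on the exported channels. The sampling interval and thresholds below are tuning assumptions, not fixed values.

```python
import numpy as np

def brake_steer_overlap(brake, steer, dt, steer_thresh=0.05):
    """Seconds during which brake pressure and steering input overlap.
    `brake` and `steer` are same-length arrays sampled every `dt`
    seconds; the steering threshold is a tuning assumption."""
    both = (np.asarray(brake) > 0.0) & (np.abs(np.asarray(steer)) > steer_thresh)
    return float(np.sum(both) * dt)

def criterion_met(overlaps, target=0.20, passes_needed=3):
    """The 'three of five' reading of the success criterion."""
    return sum(o >= target for o in overlaps) >= passes_needed

# Five drill attempts from one session:
print(criterion_met([0.24, 0.18, 0.22, 0.21, 0.11]))   # True: 3 of 5
```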
Adapt. Three branches. If the criterion landed, advance to the next prescription in the progression — the next drill that builds on what you just proved. If it landed only partially, repeat the same prescription with a refinement to the technique, not to the metric. If it did not land at all, re-diagnose: the prescription may have been treating a downstream symptom of a deeper weakness shape that the original diagnose step missed. The adapt step closes the loop and feeds the next diagnose. The framework is sound when it cycles, weak when it stalls, and broken when it skips.
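The branches are mechanical enough to write down; a sketch keyed on the measure step's verdict:

```python
def adapt(measure_result: str) -> str:
    """The three adapt branches, keyed on the measure step's verdict."""
    return {
        "yes":     "advance: queue the next drill in the progression",
        "partial": "repeat: same prescription, refined technique",
        "no":      "re-diagnose: the weakness shape may have been misread",
    }[measure_result]
```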
Cadences
A framework without a calendar drifts into theory. Cadences anchor the five verbs to time horizons: real-time inside the session, week-over-week between sessions, and event-cycle across the four to six weeks before a race weekend or track day. Each cadence runs the same loop at a different granularity.
The per-session loop. Diagnose, prescribe, and measure happen inside one session, with adapt deferred to the post-session review. On the desktop client that surfaces as live coaching prompts: a weakness shape recurs in the trace, the rule library matches it, and the prompt arrives mid-session — the driver corrects on the next lap rather than waiting for the post-session debrief. The per-session cadence is the only cadence the driver does not have to consciously think about. It runs on the live data. Real-time coaching does not replace the slower cadences; it shortens the diagnose-to-execute gap on the most repetitive weakness shapes.
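In sketch form, the per-session cadence is a matching loop over completed laps; the detector-and-prompt pairing below is a stand-in, not LAP's rule engine.

```python
def live_prompts(lap_traces, rules):
    """Match each completed lap against named weakness shapes and
    prompt immediately, instead of waiting for the debrief.
    `rules` pairs a shape detector with a prompt string; both are
    stand-ins, not LAP's rule format."""
    for lap, trace in enumerate(lap_traces, start=1):
        for detect, prompt in rules:
            if detect(trace):
                yield lap, prompt   # driver corrects on the next lap
```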
The weekly review. Aggregate the week’s sessions. Pick ONE focus for next week. The weekly cadence is where individual sessions become a progression: week N’s measure step hands its result to week N+1’s diagnose step. Without it, every session is its own island and improvement is a matter of luck. With it, the driver can argue with their own past — the brake-release shape they fixed in week three should still hold in week six, and the driver progression view is where the argument happens. The weekly review takes thirty minutes if the data is already aggregated; longer if it is not. The decision granularity is one focus per week — not three, not five. A week’s progression is the difference between two diagnoses: what was wrong on Monday, what is wrong now.
The event-cycle. Four to six weeks of structured build-up to a target event. On the sim side, the target is often an iRacing weekly race series finale or a season championship round. On the real-world side, it is a track day, a club race weekend, or a karting endurance event. The mobile app handles the real-world execute path; the desktop client carries the sim-side preparation. The event-cycle is where our cross-platform unification becomes a calendar fact rather than a feature claim — the cycle’s first three weeks are sim sessions, the next two are sim-and-real-world mixed, and the final week is real-world only with the track day itself as the measure step. The sim-to-real transfer article develops the cross-platform argument the event-cycle cadence depends on: the shape from the sim trace and the shape from the real-world trace are the same shape, so the cycle’s sim work counts as preparation for the real-world target.
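Laid out as data, the example cycle in this paragraph looks like the following; it is the article's illustration, not a fixed schedule.

```python
# The paragraph's example cycle, laid out as data. Week counts and
# surface mix are the article's illustration, not a fixed rule.
event_cycle = [
    {"week": 1, "surface": "sim"},
    {"week": 2, "surface": "sim"},
    {"week": 3, "surface": "sim"},
    {"week": 4, "surface": "sim + real-world"},
    {"week": 5, "surface": "sim + real-world"},
    {"week": 6, "surface": "real-world", "measure": "the target event"},
]
```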
How LAP automates this
The framework is the same whether the driver runs a paid subscription or a paper journal. What LAP automates is the mechanical work at each verb — the shape recognition, the rule lookup, the metric tracking, the next-session plan — so the driver’s attention stays on what the data means rather than on the bookkeeping.
Diagnose. Each finalized session generates a WeaknessEpisode on the backend. The desktop client extracts shape patterns from the trace; the rule library matches them against named weakness shapes; an episode lands in the driver’s inbox with observation, evidence, and likely cause already filled in. The driver opens the app and sees what to look at first — the diagnose step that would take an hour of trace-reading happens in the time it takes the page to load.
Prescribe. Each WeaknessEpisode that has a matching rule receives an ActionPrescription: drill name, success criterion, target session count. The prescription draws from our coaching library. Every prescription traces to a written rule that a human author edited and evidence-backed by hand — never to a language-model output. The deliberate scope choice is durability over surface area: shapes the library does not yet name receive no prescription rather than a fabricated one.
Execute. Live coaching prompts surface inside the session itself. On the desktop the prompt overlays the driver’s view when the rule library detects a weakness shape recurring in real-time; the mobile capture stack emits the same prompt on real-world sessions. The driver corrects on the next lap rather than waiting for the post-session debrief, and the gap between diagnose and execute shrinks toward zero on the most repetitive shapes.
Measure. Each drill execution writes an ActionOutcome record — whether the success criterion landed, how many attempts, under what conditions. The DriverProgressSnapshot aggregates across sessions and across surfaces (sim and real-world together) so the metric appears on a single progression view. The driver does not have to spreadsheet anything to know whether a prescription is working.
Adapt. The backend reads recent ActionOutcomes and emits the next ActionPrescription. The loop closes automatically: the adapt step’s three branches (advance, repeat, re-diagnose) become a queued next-session plan rather than a weekly meeting with yourself. Manual override is always available; if the driver disagrees with the prescribed next step, they can pick the next focus by hand.
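To make the pipeline concrete, here is one plausible shape for the records this section names. LAP's actual schemas are not published, so every field below is an assumption.

```python
from dataclasses import dataclass

@dataclass
class WeaknessEpisode:
    shape: str           # named weakness shape from the rule library
    evidence: str        # trace reference attached at diagnose time
    likely_cause: str

@dataclass
class ActionPrescription:
    drill: str
    success_criterion: str
    target_sessions: int

@dataclass
class ActionOutcome:
    criterion_met: str   # "yes" / "partial" / "no"
    attempts: int
    conditions: str

@dataclass
class DriverProgressSnapshot:
    metric: str
    values: list         # the metric over time, sim and real-world together

def next_step(outcome: ActionOutcome, current: ActionPrescription) -> str:
    # The adapt step's three branches, queued as a next-session plan.
    if outcome.criterion_met == "yes":
        return "advance: emit the next prescription in the progression"
    if outcome.criterion_met == "partial":
        return f"repeat: {current.drill}, with a technique refinement"
    return "re-diagnose: open a fresh episode on the trace"
```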
The LAP product surface ships these capabilities as a single subscription — see our pricing for what is included. The point of the automation is that the framework runs without the bookkeeping; the driver still has to execute, and the rule library still has gaps, which the DIY template in the next section addresses.
A DIY template for non-LAP users
The framework is the article’s gift to the reader, not LAP’s proprietary asset. The five verbs run on any telemetry tool that exports brake-pressure, steering, throttle, and lap-reference traces — Garage61, MoTeC, RaceLab, Track Attack, STINT, or even a spreadsheet plus a hand-written journal. The template below is the worksheet structure that turns sustained practice into a closed loop.
The template has one row per framework verb; a filled-in example follows the list.
- Diagnose worksheet (one row per week): date, session IDs reviewed, observed weakness shape (one sentence), and trace evidence — a screenshot, a file path, a corner reference. The weakness shape is what you write down; the rest is what supports it.
- Prescribe line (one sentence): drill name, success criterion that is observable and quantifiable, target session count. If the prescription does not fit in one sentence, it is not yet specific enough — keep editing until it does.
- Execute log (per session): which session was the drill attempt, what the conditions were (track, weather, setup, warm-up state). The execute step is a session reference, not a narrative of how you felt.
- Measure cell: yes / partial / no. Three options is enough — finer gradations invite rationalization.
- Adapt decision (one sentence): advance to the next prescription, repeat the same prescription, or re-diagnose because the weakness shape was misread.
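Filled in, one closed loop of the template reads like this; every value is invented for illustration.

```
Diagnose : 2024-03-11 | sessions 41-43 | "stab pattern on brake release
           into T1: three reversals where the reference shows a single
           taper" | evidence: traces/w10-t1-release.png
Prescribe: single-taper release drill; brake/steer overlap >= 0.20 s on
           3 of 5 attempts; 2 sessions
Execute  : session 44 (dry, baseline setup, 3 warm-up laps)
Measure  : partial (2 of 5)
Adapt    : repeat: same prescription, later initial brake point
```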
The template runs in any tool you already pay for. If you have not chosen one yet, the telemetry app comparison walks through six options on six dimensions and maps reader personas to shortlists. Garage61’s free tier is the lowest-friction starting point for sim-only readers; MoTeC i2 Pro is the most powerful free option, with the steepest learning curve; Track Attack handles real-world phone-based capture without a hardware logger. The template structure is identical whichever tool fills the rows. The cycle that closes the loop on your own driving is the same cycle whether the data comes from an automated pipeline or from a screenshot you saved at the end of last Tuesday’s session.
Conclusion
The development plan IS the loop. Diagnose, prescribe, execute, measure, adapt — the framework is portable across tools, surfaces, and skill levels. What makes it work is not the verbs themselves but the discipline of running them in order, on a cadence, against data.
The article walked through what a plan is, the five-verb framework, three cadences (per-session, weekly, event-cycle), how LAP automates each step, and a DIY template that runs on any telemetry tool.
For the LAP five-pillar surface, see /product. The three technique articles in this cluster develop the adjacent questions: /blog/sim-to-real-transfer on the cross-platform argument, /blog/what-is-trail-braking on the rule-library depth, and /blog/best-sim-racing-telemetry-app on choosing a tool. For early access to LAP and the six-month launch roadmap, see /waitlist.