The library
A catalog of weakness shapes.
The library is a versioned set of rules — small, named, written down. Each rule is a
weakness pattern with an explanation, a prescription, and a measurable success condition.
They are indexed by three axes: the track section the pattern shows up in, the car class
it applies to, and the weakness shape itself — late-on-the-brakes, off-throttle-too-early,
oversteer-on-exit. The rule that fires on you depends on which combination matched your
session, not on a one-size-fits-all tip sheet.
The library is rule-based, not a chat box. A weakness has to match a written-down
detection condition before it surfaces, and a prescription has to cite the rule it came
from. That framing matters because the alternative — a model that talks to you — is
something we have ruled out for the first phase of LAP for a reason: an opaque coach you
cannot audit is not a coach you can trust.
Rule → prescription
From a detected weakness to a drill you can do tomorrow.
When a session lands on the platform, the detector walks the library and records every
rule whose condition matched. Each match becomes a weakness episode — when it happened,
where on the lap, how severe, how confident. The rule then produces a prescription: a
small piece of work with a plain-language rationale, a drill to do next session, and a
success condition the platform will check against your future laps.
A prescription is not a vibe. It has a target metric, a target value, and a measurement
window — three more sessions, seven more days, the next race weekend. You can see exactly
which rule fired it, what evidence triggered the rule, and what would count as the
prescription having worked.
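The target-metric, target-value, measurement-window triple lends itself to a direct check. A minimal sketch, with hypothetical field names; a real session record would carry much more than one metric.

```python
from dataclasses import dataclass

@dataclass
class Prescription:
    rule_id: str
    drill: str
    target_metric: str      # e.g. "apex_speed_kmh"
    target_value: float     # e.g. 96% of the reference value
    window_sessions: int    # e.g. the next 3 sessions

    def worked(self, future_sessions: list[dict]) -> bool:
        """True when the metric held the target across the whole window."""
        window = future_sessions[: self.window_sessions]
        return len(window) == self.window_sessions and all(
            s[self.target_metric] >= self.target_value for s in window
        )
```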
Three samples from the library
What a rule looks like in practice.
Three rules from the library, one per axis of car control — one braking, one throttle,
one cornering. Each shows the same shape: which weakness it catches, what the prescription
asks of you, and the measurable success condition the platform will check against your
next sessions.
Braking · Spa · La Source
The trail-brake release that costs five km/h at the apex of La Source.
Rule br.late_braking.apex_speed_loss · severity medium · category technique
The detector flagged your apex speed at La Source running about 5 km/h under your
reference for the same car and track combination — three laps in this session, twice
more in the previous two weeks.
The signal trace shows the trail-brake release falling off too sharply mid-corner: pressure
drops to zero before the car has rotated, the front loses grip, and the apex line opens up.
The prescription asks for a softer release: 5–10% pedal pressure carried past turn-in,
eased off as you pick up the throttle. The drill is one practice stint isolating the
corner, focused on the pedal trace shape rather than the lap time. Success condition:
apex speed at La Source ≥ 96% of your reference, measured over the next 3 sessions on
the same car class.
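The success condition above reduces to a small check. The reference apex speed here is an invented placeholder; in practice it would come from your own car-and-track reference.

```python
REFERENCE_APEX_KMH = 62.0   # assumed reference apex speed for this combo

def la_source_success(apex_speeds_kmh: list[float]) -> bool:
    """True when each of the next 3 sessions hits >= 96% of reference."""
    target = 0.96 * REFERENCE_APEX_KMH
    window = apex_speeds_kmh[:3]
    return len(window) == 3 and all(v >= target for v in window)
```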
Throttle · Brands Hatch · Druids
The early throttle pickup that scrubs exit speed at Druids.
Rule th.early_throttle_pickup.exit_traction_loss · severity medium · category technique
The detector flagged the exit of Druids: throttle going to full while the steering input
is still above the traction threshold for your car class, recorded twice in the session
with brief lift-and-recover events both times. On a rear-driven GT3 that small
mis-ordering costs you the run all the way down to Graham Hill Bend, because the exit
slide costs you about a tenth of a second down the next straight.
The prescription asks you to delay full throttle until the steering has unwound past
roughly half-lock, picking up progressively rather than punching the pedal off the apex.
The drill is three exits of Druids in isolation, watching the steering-vs-throttle
overlay rather than the speed trace. Success condition: zero traction-circle violations
on Druids exit, measured over the next 3 sessions.
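A traction-circle violation of the kind this rule counts can be sketched as a scan over telemetry samples: full throttle while the steering angle is still above the class threshold. Both the threshold value and the sample format are assumptions.

```python
STEER_THRESHOLD_DEG = 90.0   # assumed traction threshold for this car class

def count_violations(samples: list[tuple[float, float]]) -> int:
    """samples: (throttle 0..1, steering angle in degrees) per telemetry
    tick. A violation is full (or near-full) throttle while the wheel is
    still turned past the threshold."""
    return sum(
        1 for throttle, steer in samples
        if throttle >= 0.98 and abs(steer) > STEER_THRESHOLD_DEG
    )
```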
Cornering · Silverstone · Copse
The corner-entry oversteer at Copse you can’t see in the mirrors.
Rule en.entry_oversteer.steering_overshoot · severity high · category platform-balance
The detector flagged corner-entry yaw oscillation at Copse: more than two corrective
steering inputs while still on the brake, four laps in a row. The car rotates faster
than your hands settle, you catch it on the wheel, and the line scrubs out toward
track-out. From the cockpit it feels like the car “just is” loose; the trace
shows the trail-brake holding load on the front longer than the platform can stabilise.
The prescription asks for an earlier brake release into Copse, gentler steering input on
turn-in, and weight-transfer managed with the pedal rather than the wheel. The drill
targets ten clean Copse entries with the steering-correction count visible live. Success
condition: ≤ 1 mid-corner steering correction at Copse, measured over the next 3 sessions.
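Counting corrective inputs while still on the brake can be sketched as counting steering-direction reversals under brake pressure. The trace format and the "four laps in a row" gate below follow the rule's description; everything else is an illustrative assumption.

```python
def corrections_on_brake(samples: list[tuple[float, float]]) -> int:
    """samples: (brake 0..1, steering angle) per tick. A correction is a
    reversal of steering direction while brake pressure is still applied."""
    count = 0
    prev_delta = 0.0
    prev_steer = None
    for brake, steer in samples:
        if prev_steer is not None:
            delta = steer - prev_steer
            if brake > 0.0 and delta * prev_delta < 0:
                count += 1
            if delta != 0:
                prev_delta = delta
        prev_steer = steer
    return count

def rule_fires(laps: list[list[tuple[float, float]]]) -> bool:
    """Fire when four consecutive laps each show > 2 corrections."""
    return len(laps) >= 4 and all(corrections_on_brake(l) > 2 for l in laps[:4])
```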
Outcomes
And then we measure whether it worked.
When the measurement window closes, the platform records an outcome: did the success
condition hold, how much did the metric move, what fraction of the gap to your reference
closed. Outcomes
are how a prescription stops being a suggestion and starts being a fact. Either it worked
for you or it did not, on numbers you can read.
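The "fraction of the gap closed" figure is simple arithmetic over three numbers: where the metric was before, where it landed, and the reference it was being measured against. A sketch, with hypothetical argument names:

```python
def gap_closed(before: float, after: float, reference: float) -> float:
    """Fraction of the before-to-reference gap closed during the window.
    1.0 means the full gap closed; 0.0 means no movement at all."""
    gap = reference - before
    if gap == 0:
        return 1.0  # nothing to close
    return (after - before) / gap
```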
Outcomes also feed back into the library itself. Rules whose prescriptions consistently
fail to move the needle for drivers in a given car-and-track context get retired or
re-tuned; rules whose prescriptions consistently work get reinforced. The library improves
on the back of measurement, not on the back of a model retrain.
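The retire-or-reinforce loop can be sketched as a per-context tally of outcomes. The thresholds here (minimum sample size, retirement rate) are invented for illustration, not the platform's actual tuning.

```python
from collections import defaultdict

RETIRE_BELOW = 0.25   # assumed: flag rules under a 25% success rate
MIN_OUTCOMES = 10     # assumed: don't judge a rule on thin evidence

def review(outcomes: list[tuple[str, str, bool]]) -> list[tuple[str, str]]:
    """outcomes: (rule_id, context, succeeded) records, where context is a
    car-and-track combination. Returns the (rule_id, context) pairs whose
    success rate warrants retirement or re-tuning."""
    tally: dict[tuple[str, str], list[bool]] = defaultdict(list)
    for rule_id, context, ok in outcomes:
        tally[(rule_id, context)].append(ok)
    return [
        key for key, results in tally.items()
        if len(results) >= MIN_OUTCOMES
        and sum(results) / len(results) < RETIRE_BELOW
    ]
```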
Auditable, not opaque
Rules you can read.
Each rule is a small, named, versioned object — readable by us, readable by you when the
platform shows you why a prescription fired. That is the deliberate trade. We are giving
up the surface area of an open-ended chat in exchange for a coaching surface where every
recommendation has a clear chain of custody: this rule, on this evidence, with this
expected gain, measured against this success condition.
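That chain of custody is small enough to render whole. A sketch of what showing a driver "why this prescription fired" might look like; the field names are assumptions.

```python
def explain(rule_id: str, evidence: str, expected_gain: str,
            success_condition: str) -> str:
    """Render the full chain of custody behind one recommendation."""
    return (
        f"rule: {rule_id}\n"
        f"evidence: {evidence}\n"
        f"expected gain: {expected_gain}\n"
        f"success condition: {success_condition}"
    )
```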
The pieces that make a session-by-session coach work — pattern matching, prescription,
outcome measurement — are not the pieces that need a model that talks. They need a library
that is written down. That is the bet of the first phase of LAP, and the coaching library
is the artefact that proves it.