Last reviewed: 2026-05-11. Assessment version 0.7.3.
This page documents the principal computations behind shotdiagnostic.com: the items and aggregation used by the SSDI self-assessment, the geometry of the calibrated target PDFs, and the components and weights of the atmospherics shootability score. Item content, scoring rules, target geometries, and rubric weights evolve as user feedback and feature requests come in; live weather data is the only thing here on a fixed refresh cadence (15-minute server cache). The math here is the same math the public site and the embeddable widgets run.
The Shooting Skills Diagnostic Inventory comprises a set of items spanning five disciplines (precision rifle, practical rifle, pistol, clay shooting, archery) plus a small cross-cutting block (self-diagnosis, practice structure, competition transfer). A taker selects one or more disciplines and is shown the items relevant to those selections, so per-session length varies. The precision rifle and practical rifle pools share a subset of items reflecting their common fundamentals; shared items are counted once toward the total.
Items use one of five measurement approaches: performance estimate, behavioral frequency, anchored self-report, situational judgment, and domain reasoning. Each item presents item-specific response anchors rather than generic agreement statements. For example, a precision rifle item asking about group size offers anchored options at distinct MOA values, and an item about wind reading presents a scenario with response options that vary in technical reasoning quality. Mixing measurement approaches is intentional: it reduces response-style biases common to single-format self-report instruments. All items are scored on a 1–5 scale.
Items are pre-assigned to skill domains within each discipline. For precision rifle, the domains are Platform & Execution, Environmental Mastery, Systems & Process, and Mental Endurance; other disciplines have their own domain structures. Most domain scores are the arithmetic mean of their constituent items, rounded to one decimal. A small number of domains use other aggregations where the structure of the items warrants it: single-item domains report the raw item score, score-benchmark domains average only the sub-disciplines the taker selected, the practice-structure domain is a count of deliberate practice behaviors mapped to a 1–5 score, and the competition-readiness pair is reported as two independent indicators rather than a composite. Three fixed outcome bands are then applied to each domain score: under 2.5 is reported as a development area, 2.5 to 3.5 as competent, and above 3.5 as a strength.
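The default aggregation and banding can be sketched in a few lines. This is an illustration of the rules as described above, not the site's actual code, and the function names are invented for the example:

```python
def domain_score(item_scores):
    """Arithmetic mean of the constituent 1-5 item scores,
    rounded to one decimal (the default domain aggregation)."""
    return round(sum(item_scores) / len(item_scores), 1)

def domain_band(score):
    """Fixed outcome bands: under 2.5 is a development area,
    2.5 to 3.5 is competent, above 3.5 is a strength."""
    if score < 2.5:
        return "development area"
    if score <= 3.5:
        return "competent"
    return "strength"
```

So a Platform & Execution domain with item scores of 2, 3, and 3 reports 2.7 and bands as competent.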
Scoring runs server-side. The client receives only the computed scores, the per-domain band, and the diagnostic flags; the threshold values and flag conditions are not exposed to the browser. Sessions are anonymous and identified by a random session code. No account, email, or personal information is collected. Anonymous response data is retained in aggregate to inform future versions of the item set.
A note on the word "diagnostic": it refers to the format of the output (patterns of strengths and development areas surfaced from the response data), not to clinical or formal diagnostic claims.
Diagnostic flags are deterministic rules over individual item responses and per-domain means. The current flag set covers patterns across knowledge-execution gaps, practice structure, self-diagnosis calibration, and discipline transfer. A "knows theory, can't execute" pattern fires when a taker scores high on a conceptual item (e.g., wind reading in precision rifle, lead physics in clay shooting, malfunction diagnosis in pistol) but low on the corresponding execution item, flagging conceptual understanding that hasn't translated to performance. A "no practice structure, high skill" pattern fires when the practice-structure score is low while at least one domain score is high, flagging a shooter whose current skill is real but whose development ceiling will be reached without deliberate practice structure. A "self-diagnosis confidence mismatch" pattern fires when a taker rates their self-diagnosis ability highly but scores low on the technical knowledge domain that self-diagnosis relies on: the classic miscalibration pattern. Flags are surfaced on the results page alongside the radar and bar charts, so the underlying response pattern is legible rather than buried. Specific item thresholds and flag conditions are not published to keep the response data from being gamed.
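The shape of these rules is simple even though the exact cutoffs are unpublished. A minimal sketch of two of the patterns, with placeholder thresholds (the `high`/`low` values below are illustrative, not the live ones):

```python
def knows_theory_cant_execute(concept_score, execution_score, high=4, low=2):
    """Fires when a conceptual item (e.g. wind reading) scores high
    while its paired execution item scores low. Cutoffs are placeholders."""
    return concept_score >= high and execution_score <= low

def no_structure_high_skill(practice_score, domain_scores, low=2.5, high=3.5):
    """Fires when the practice-structure score is low while at least
    one domain score is high. Cutoffs are placeholders."""
    return practice_score < low and any(s > high for s in domain_scores)
```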
Every calibrated target on shotdiagnostic.com/targets is generated server-side as a vector PDF using true angular values, not rounded approximations. One MOA equals 1.047 inches at 100 yards (more precisely, 3600 inches times the tangent of 1/60 of a degree). One MIL equals 1/1000 of the slant distance. Every grid line, scoring ring, and circle is computed for the operator's exact caliber and distance, with bullet-diameter correction applied so the printed value reflects center-to-center group size rather than edge-to-edge spread.
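The three calculations above fit in a few lines. A sketch under the definitions just stated (function names are ours, not the generator's):

```python
import math

def moa_inches(distance_yd, moa=1.0):
    """Subtension of `moa` minutes of angle at a distance, in inches:
    inches of range times the tangent of the true angle."""
    return distance_yd * 36 * math.tan(math.radians(moa / 60))

def mil_inches(distance_yd, mils=1.0):
    """One MIL subtends 1/1000 of the slant distance."""
    return distance_yd * 36 * mils / 1000

def center_to_center(edge_to_edge_in, bullet_diameter_in):
    """Bullet-diameter correction: outside spread of the two widest
    holes minus one bullet diameter gives center-to-center group size."""
    return edge_to_edge_in - bullet_diameter_in
```

`moa_inches(100)` gives 1.0472, the unrounded value behind the familiar 1.047 in/100 yd figure; a .308 group measuring 1.308 in outside-to-outside is a 1.000 in center-to-center group.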
The current target catalogue covers diagnostic grids for measuring group geometry, scoring-ring targets that bake the bullet-diameter correction into the printed ring values, multi-aim-point targets for load development, progressively scaled circle rows for finding a current precision threshold, scope-tracking targets for verifying turret travel against exact click intervals, and head-to-head matched-pair targets for direct comparison.
Every PDF includes a 1-inch verification bar so the operator can confirm the printer rendered the page at 100% scale, plus a verification hash stamp that encodes the calibration parameters used. The hash is reproducible: regenerating the same target with the same parameters produces the same hash, so a coach or match director can confirm a printed target matches the spec it claims.
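A reproducible stamp of this kind only requires hashing a canonical encoding of the parameters. The site's actual encoding is not published; this sketch assumes a canonical-JSON-over-SHA-256 scheme and an 8-character truncation, all of which are our choices for illustration:

```python
import hashlib
import json

def calibration_hash(params, digest_len=8):
    """Reproducible hash of calibration parameters: identical inputs
    always yield the identical stamp. Sorted keys and fixed separators
    make the JSON canonical, so dict ordering cannot change the digest."""
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:digest_len]
```

Regenerating with `{"caliber": 0.308, "distance_yd": 100}` always prints the same stamp, regardless of the order the parameters were supplied in; changing any parameter changes it.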
The 0-100 shootability score on every range page and every atmospherics embed is a weighted geometric mean of five sub-scores: sustained wind (28%), precipitation rate and condition (30%), temperature with wind chill or humidex (18%), gust spread above sustained wind (12%), and visibility (12%). Precipitation is weighted slightly above sustained wind because for most outdoor shooting and archery practice rain is a binary go/no-go gate: wind degrades a session, rain ends it. A weakest-link rule caps the headline at MARGINAL when any single component is catastrophic, and a severe-weather override forces the headline to POOR for lightning, hail, freezing rain, blizzard, or violent shower observations regardless of the other inputs.
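The core of that computation is a log-space weighted geometric mean with the two rules applied on top. A minimal sketch, assuming the component and observation checks arrive as booleans; the cap values 49 and 29 are our reading of the band boundaries (MARGINAL tops out at 49, POOR starts below 30), not published constants:

```python
import math

# Rubric weights from the text (they sum to 1.0).
WEIGHTS = {"wind": 0.28, "precip": 0.30, "temp": 0.18, "gust": 0.12, "vis": 0.12}

def shootability(subscores, any_catastrophic=False, severe_observation=False):
    """Weighted geometric mean of five 0-100 sub-scores, then the
    weakest-link cap and the severe-weather override."""
    log_sum = sum(w * math.log(max(subscores[k], 1e-9))
                  for k, w in WEIGHTS.items())
    score = round(math.exp(log_sum))
    if severe_observation:
        return min(score, 29)  # forced into the POOR band
    if any_catastrophic:
        return min(score, 49)  # capped at MARGINAL
    return score
```

The geometric mean is the reason one bad component drags the headline down harder than an arithmetic mean would: five sub-scores of 80 yield 80, but dropping wind alone to 20 pulls the result well below the 70.4 an arithmetic mean would report.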
Outcome bands are fixed thresholds: 90 and above is OPTIMAL, 75-89 is GOOD, 50-74 is OK, 30-49 is MARGINAL, below 30 is POOR. The rubric is tuned for centerfire rifle past 300 yards and target archery; pistol, rimfire, and shotgun shooters can read the score one band more leniently. Density altitude is computed from station pressure (not sea-level reduced pressure) and ambient temperature using the ICAO precise barometric formula. Dewpoint uses the Magnus-Tetens approximation. The 24-hour forecast strip runs the same 5-component engine on every forward hour. The TOMORROW summary identifies the single best hour (PEAK) and the longest unbroken stretch of daylight hours scoring GOOD or better (BEST), falling back to OK if no GOOD run exists.
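The band mapping and the BEST-run search are both mechanical. A sketch of the two, with the daylight-hours restriction omitted for brevity (the real engine applies it; we do not model sunrise/sunset here):

```python
def band(score):
    """Fixed outcome bands for the 0-100 shootability score."""
    if score >= 90: return "OPTIMAL"
    if score >= 75: return "GOOD"
    if score >= 50: return "OK"
    if score >= 30: return "MARGINAL"
    return "POOR"

def best_stretch(hourly_scores, floor="GOOD"):
    """Longest unbroken run of hours banding at or above `floor`.
    Returns (start index, run length)."""
    order = ["POOR", "MARGINAL", "OK", "GOOD", "OPTIMAL"]
    ok = [order.index(band(s)) >= order.index(floor) for s in hourly_scores]
    best = cur = start = best_start = 0
    for i, flag in enumerate(ok):
        if flag:
            if cur == 0:
                start = i
            cur += 1
            if cur > best:
                best, best_start = cur, start
        else:
            cur = 0
    return best_start, best
```

The fallback described above is then a second call with `floor="OK"` whenever the GOOD run comes back empty.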
Open-Meteo applies WMO codes 95 (thunderstorm without hail), 96 (thunderstorm with slight hail), and 99 (thunderstorm with heavy hail) to forecast hours as forecast-risk codes — they mean "thunderstorm conditions possible in this grid cell during this period," not "thundering on you right now." On unstable spring afternoons these codes appear in many forecast hours with zero precipitation and single-digit precipitation probability. The severe-weather override suppresses itself for codes 95/96/99 unless the same hour also reports either non-trace precipitation or a precipitation probability of 50% or higher; without that confirmation the score is computed without the override.
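The suppression condition reduces to one predicate per hour. A sketch, assuming the hour's membership in the severe list (lightning, hail, freezing rain, blizzard, violent shower) is already decided upstream and passed in as `is_severe`; the 0.1 mm trace cutoff is our assumption, not a published value:

```python
# WMO thunderstorm codes Open-Meteo attaches to forecast hours as risk codes.
FORECAST_RISK_CODES = {95, 96, 99}

def severe_override_fires(code, is_severe, precip_mm, precip_prob, trace_mm=0.1):
    """Codes 95/96/99 need confirmation (non-trace precipitation, or
    precipitation probability >= 50%) before the override fires; every
    other severe code fires unconditionally."""
    if not is_severe:
        return False
    if code in FORECAST_RISK_CODES:
        return precip_mm > trace_mm or precip_prob >= 50
    return True
```

An unstable-afternoon hour carrying code 95 with dry ground truth (0.0 mm, 10% probability) scores normally; the same code with 60% precipitation probability forces POOR.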
The STORM RISK chip requires three concurrent signals in the same afternoon hour (12:00-20:00 local): convective available potential energy at or above 1500 J/kg, lifted index at or below -3 K, and a model precipitation probability at or above 30%. CAPE and lifted index alone measure instability; the precipitation probability adds the trigger and shear ingredients the model already integrates. The 30% figure is a commonly used cutoff for non-trivial convective trigger probability.
Live atmospherics come from Open-Meteo (open-meteo.com), licensed under CC BY 4.0. Per-range payloads are cached server-side for 15 minutes with single-flight coalescing and stale-while-revalidate, so a range page that gets ten visitors in the same minute makes one upstream call. Embed responses share the same cache and add a 15-minute stale-while-revalidate window at the CDN. Range coordinates and metadata come from a curated seed list of recognised outdoor shooting and archery ranges; submissions go to hello@shotdiagnostic.com. The complete machine-readable list of per-range URLs is at https://shotdiagnostic.com/sitemap-ranges.xml.
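Single-flight coalescing is what turns ten simultaneous visitors into one upstream call: the first request for a key does the fetch, and every concurrent request for the same key waits on it instead of dialing Open-Meteo again. A minimal sketch of the pattern (the site's implementation is not published, and stale-while-revalidate is omitted for brevity):

```python
import threading
import time

class SingleFlightCache:
    """TTL cache with single-flight coalescing: concurrent misses on
    the same key share one call to `fetch`. Sketch only."""

    def __init__(self, fetch, ttl=900):  # 900 s = the 15-minute cache
        self.fetch, self.ttl = fetch, ttl
        self.lock = threading.Lock()
        self.entries = {}    # key -> (expires_at, value)
        self.inflight = {}   # key -> Event set when the fetch finishes

    def get(self, key):
        while True:
            with self.lock:
                entry = self.entries.get(key)
                if entry and entry[0] > time.monotonic():
                    return entry[1]          # fresh hit
                ev = self.inflight.get(key)
                if ev is None:               # we are the fetcher
                    self.inflight[key] = threading.Event()
                    break
            ev.wait()                        # someone else is fetching; re-check
        try:
            value = self.fetch(key)
            with self.lock:
                self.entries[key] = (time.monotonic() + self.ttl, value)
        finally:
            with self.lock:
                self.inflight.pop(key).set()  # release the waiters
        return value
```

Two back-to-back `get` calls for the same range key produce exactly one `fetch`; the second is served from the 15-minute entry.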
The SSDI is experimental. The items, the domain groupings, and the band thresholds are working drafts and have not been put through a formal validation study. Whether a real validation is even possible depends on things this site does not currently have: enough responses per discipline to be statistically meaningful, an opt-in way for respondents to link their answers to an outside reference point (match results, coach ratings, or actual on-the-line observation), and the same respondents retaking the assessment over time. The site is anonymous and does not collect the kind of personal data that would make any of that possible today.
The SSDI is a self-report assessment. It captures what a shooter reports about their own training, equipment, and performance at the moment of the session. It does not observe behaviour on the line, ingest match results, or measure physiology. Self-report data is subject to the usual limitations of the format: recall, self-presentation, and rater calibration drift. Item pools are sized for breadth of coverage across each discipline rather than for fine-grained psychometric inference at the single-domain level; domains with as few as two or three items produce coarser estimates than domains with more items.
The atmospherics shootability score is a planning aid composed of five hourly atmospheric components. It does not model individual shooter skill, equipment ballistics, terrain wind effects below the model's grid resolution, range-specific obstructions, microclimates, or anything safety-critical. Severe-weather decisions belong to the on-site shooter and the range's safety officer, not to a number on a web page. The 24-hour forecast strip inherits the underlying weather model's forecast horizon and resolution.
Calibrated targets are exact to their printed calibration only when printed at 100% scale on a printer that respects PDF page dimensions. The 1-inch verification bar on every PDF is the primary way to confirm the printer rendered the page at the intended scale. Group-size measurement assumes round bullet holes; tumbled, keyholed, or torn impacts will measure inconsistently regardless of the underlying target geometry.