First customers

Cargill Pilot

Two dairy nutrition consultants. Two weeks. Real evidence briefs. This page tracks where we are.

Who and what

ANH Digital Platform is shipping Maestro to Cargill's animal nutrition research group. The first cohort is two consultants who today spend three to five days assembling evidence briefs by hand — pulling from PubMed, the Journal of Dairy Science, internal Cargill feed-trial data, and regional regulatory references. Maestro's job is to collapse that to hours of review on top of continuously maintained drafts.

Why this work, why this customer

Three reasons we picked dairy evidence briefs as the first workload:

  • Continuous data. New trials publish weekly. A chatbot-shaped stack produces one brief and stops; Maestro maintains a brief that keeps updating itself as new trials land.
  • Structured outputs. An evidence brief has a known shape, which gives us a clear success predicate and a clean evaluation surface.
  • High stakes, low ambiguity. The work is consequential (feed additives in production herds) but the truth is checkable against trial data. The output is auditable in a way pure-narrative work is not.
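To make "known shape" and "clear success predicate" concrete, here is a minimal sketch of what an evidence brief's structure and completeness check could look like. All field and type names here are hypothetical illustrations, not Maestro's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    # Hypothetical: where a claim comes from, e.g. "PubMed",
    # "Journal of Dairy Science", or an internal feed-trial ID.
    source: str
    reference: str

@dataclass
class EvidenceBrief:
    question: str
    findings: list[str]
    citations: list[Citation]
    regulatory_notes: list[str] = field(default_factory=list)

def brief_is_complete(brief: EvidenceBrief) -> bool:
    """A known shape yields a checkable success predicate:
    the brief must answer a question, contain findings, and
    carry at least one citation per finding."""
    return (
        bool(brief.question)
        and bool(brief.findings)
        and len(brief.citations) >= len(brief.findings)
    )
```

Because the predicate is mechanical, it doubles as an evaluation surface: defects like an uncited finding are detectable without a human in the loop.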

Milestones

Phase 1 spine on Azure — in progress
Phase 1 spine on AWS — in progress
Cargill tenant defined — queued
Two consultants identified — queued
First evidence brief shipped — queued
Two-week pilot complete — queued

Metrics we will report

Metric                   Current         Target
Time to brief            baseline TBD    50% reduction
Cost per brief           baseline TBD    < $20
Researcher NPS           —               ≥ 40
Defect rate per brief    —               < 2

What we are deliberately not promising

  • No replacement of human judgment. Maestro drafts and updates; the consultant approves. The pilot success criterion is researcher productivity, not autonomy.
  • No general-purpose chatbot. Goal frames are structured. The notebook is the interface. We are intentionally not building a conversational front-end.
  • No multi-tenant data leakage. Cargill's corpus stays in Cargill's tenant. Source authority and field manifests are scoped at the store layer.
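The no-leakage guarantee above can be sketched as a store-layer rule: the tenant filter is applied inside the store, not by the caller, so no query can reach another tenant's corpus. This is an illustrative sketch with hypothetical names, not the actual store implementation:

```python
class ScopedStore:
    """Minimal sketch of store-layer tenant scoping: every read is
    filtered by tenant ID before any other predicate is applied."""

    def __init__(self) -> None:
        # tenant_id -> list of documents (hypothetical in-memory layout)
        self._docs: dict[str, list[dict]] = {}

    def put(self, tenant_id: str, doc: dict) -> None:
        self._docs.setdefault(tenant_id, []).append(doc)

    def query(self, tenant_id: str, source: str) -> list[dict]:
        # Scoping happens here: documents outside tenant_id are
        # unreachable regardless of what the caller asks for.
        return [
            d for d in self._docs.get(tenant_id, [])
            if d.get("source") == source
        ]
```

Source authority and field manifests would hang off the same scope, so an out-of-tenant query returns nothing rather than a filtered subset.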