SURFACE 01 // READINESS

One score. Full confidence.

Readiness answers the question every AI team asks: Can I ship this?

Get your AI assessment
WHAT READINESS IS
  • A defensible, reviewable position based on known failure surfaces
  • Evidence you can show stakeholders, auditors, and leadership
  • A systematic enumeration of assumptions and unknowns
  • Confidence ranges that let you make informed decisions
WHAT READINESS IS NOT
  • A guarantee that your AI won't fail
  • A certification or regulatory sign-off
  • A binary pass/fail badge
  • A replacement for human judgment on shipping decisions

We give you the evidence and analysis to make the shipping call yourself, and defend it to anyone who asks.

SECTION 01
CONFIDENCE SCORE

Ship Confidence Score

A single number that aggregates pass/fail across the ways your AI can break. It tells you whether your system is ready for production, or what's holding it back.

  • Directional, not absolute. Confidence ranges, not guarantees
  • Updates on every PR. See how changes affect readiness
  • Drill down into any failure. Understand exactly what's broken
SHIP CONFIDENCE: 87 out of 100
8 / 10 questions passing · 2 critical issues found
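The aggregation behind a score like this can be sketched in a few lines. This is a hypothetical illustration, not Flightline's actual formula: the equal weighting, the 10-point penalty for critical failures, and the question names are all assumptions.

```python
# Hypothetical sketch of aggregating per-question pass/fail results into a
# single confidence score. Weights and penalties are illustrative only.

def ship_confidence(results: dict[str, bool], critical: set[str]) -> int:
    """results maps question name -> passed. Questions named in `critical`
    subtract an extra penalty when they fail."""
    if not results:
        return 0
    score = 100 * sum(results.values()) / len(results)
    for name in critical:
        if not results.get(name, True):
            score -= 10  # critical failures hurt beyond their average share
    return max(0, round(score))

checks = {"intent": True, "grounding": True, "hallucination": False,
          "rules": False, "consistency": True, "robustness": True,
          "quality": True, "safety": True, "brand_safety": True,
          "schema": True}
print(ship_confidence(checks, critical={"hallucination", "rules"}))  # → 60
```

The useful property of any scheme like this is that the score decomposes: every point lost traces back to a named question, which is what makes drill-down possible.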
SECTION 02
10 QUESTIONS

The 10 Readiness Questions

Organized into three buckets that help you understand why each question matters, not just what it checks.

ALERT // COMMON_FAILURE

A "bad but common" failure

The model passes all functional tests but fails under language switching. A user asks in English, gets a response, then asks the same question in Spanish and gets a contradictory answer.

This is why naive test suites fail. They test happy paths, not real-world variance.

Behavior

/Does it do the right thing?

These questions assess whether your AI produces correct, intended outcomes.

01

Intent

Does it do the right thing?

BEHAVIOR

Validates that the AI produces the intended outcome for the given input. Not just technically correct, but actually useful.

Example Failure

User asks for a refund status, AI responds with shipping info instead.

02

Grounding

Is it truthful & grounded?

BEHAVIOR

Checks whether responses are based on real data and context, not invented information.

Example Failure

AI confidently cites a policy that doesn't exist in your knowledge base.

03

Hallucination

Did it hallucinate?

BEHAVIOR

Detects when the AI fabricates facts, numbers, or entities that have no basis in reality.

Example Failure

AI invents a product SKU or makes up a customer's order history.

04

Rules

Did it follow our rules?

BEHAVIOR

Verifies compliance with your operator-defined constraints and business logic.

Example Failure

AI offers a 50% discount when max allowed is 10%.
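A rule check of this kind can be sketched as a scan of the response against an operator-defined limit. This is a hypothetical, deliberately naive illustration (the regex, the limit, and the function name are assumptions, not Flightline's implementation):

```python
import re

# Hypothetical rule check: flag responses that offer a discount above the
# operator-defined maximum. A real checker would parse offers more robustly.

MAX_DISCOUNT = 10  # percent, the operator-defined ceiling

def violates_discount_rule(response: str) -> bool:
    """True if the response mentions a percentage above MAX_DISCOUNT."""
    offers = [int(m) for m in re.findall(r"(\d+)\s*%", response)]
    return any(offer > MAX_DISCOUNT for offer in offers)

print(violates_discount_rule("I can offer you a 50% discount today."))  # → True
print(violates_discount_rule("Our best offer is a 10% discount."))      # → False
```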

Resilience

/What happens when it's stressed?

These questions assess how your AI handles edge cases and adversarial inputs.

05

Consistency

Is it consistent?

RESILIENCE

Checks whether the AI gives the same answer to semantically identical questions.

Example Failure

Same question phrased differently yields contradictory responses.
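The shape of a consistency check is simple: ask semantically identical questions and compare the answers. The sketch below is hypothetical, and uses a naive normalized string comparison where a real check would compare answers semantically:

```python
# Hypothetical consistency check. `model` is any callable that maps a
# prompt to a response; the comparison here is naive string normalization,
# standing in for a semantic equivalence check.

def consistent(model, paraphrases: list[str]) -> bool:
    """True if every paraphrase yields the same normalized answer."""
    answers = {model(p).strip().lower() for p in paraphrases}
    return len(answers) == 1

# Stub model: two phrasings of the same question, same underlying answer.
stub = {"what is the return window?": "30 days",
        "how long do i have to return an item?": "30 Days "}
print(consistent(lambda q: stub[q], list(stub)))  # → True
```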

06

Robustness

Is it robust to manipulation?

RESILIENCE

Tests resistance to prompt injection, jailbreaks, and adversarial inputs.

Example Failure

User tricks AI into ignoring system instructions via clever prompting.

07

Quality

Is it good enough?

RESILIENCE

Measures output quality against your standards for tone, format, and helpfulness.

Example Failure

Response is technically accurate but unhelpful or confusingly worded.

Containment

/How bad is failure?

These questions assess the blast radius when things go wrong.

08

Safety

Did it avoid harm?

CONTAINMENT

Ensures the AI doesn't produce harmful, dangerous, or inappropriate content.

Example Failure

AI provides medical advice it's not qualified to give.

09

Brand Safety

Is it brand-safe?

CONTAINMENT

Ensures outputs align with your brand voice and won't cause reputational damage.

Example Failure

AI uses inappropriate language or takes political stances.

10

Schema

Is the output structurally valid?

CONTAINMENT

Validates that structured outputs match expected schemas (JSON, API responses, etc.).

Example Failure

AI returns malformed JSON that breaks downstream systems.
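A minimal sketch of the kind of check behind this question, using only Python's standard library. The expected fields and types are hypothetical stand-ins for whatever schema your downstream systems require:

```python
import json

# Hypothetical expected shape for a structured AI response.
REQUIRED = {"order_id": str, "status": str, "refund_amount": (int, float)}

def validate_output(raw: str) -> list[str]:
    """Return a list of schema violations; an empty list means valid."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"malformed JSON: {e}"]
    errors = []
    for field, expected_type in REQUIRED.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

print(validate_output('{"order_id": "A123", "status": "refunded"}'))
# → ['missing field: refund_amount']
```

Returning a list of violations rather than a bare boolean is what makes failures actionable: the report names exactly which field broke.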

SECTION 03
CI/CD INTEGRATION

Block bad merges automatically

Readiness scores integrate directly into your PR workflow. When a change degrades AI safety or quality, the merge is blocked with clear, actionable feedback.

  • PR comments show exactly what regressed
  • CI gates enforce minimum readiness thresholds
  • Changes that improve readiness get highlighted
PR #247 / flightline/check
Readiness check failed
Ship Confidence dropped: 87 → 71
2 questions regressed:
• Hallucination: New failures detected
• Rules: Policy violations in 3 scenarios
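A CI gate of this kind reduces to a threshold comparison. The sketch below is hypothetical (the thresholds, function name, and logic are illustrative, not the flightline/check implementation):

```python
# Hypothetical CI gate: block the merge when the readiness score falls
# below a minimum or regresses too far from the baseline branch.

MIN_SCORE = 80       # illustrative minimum readiness threshold
MAX_REGRESSION = 5   # illustrative maximum allowed drop vs. baseline

def gate(baseline: int, current: int) -> int:
    """Return a process exit code: 0 = pass, 1 = block the merge."""
    if current < MIN_SCORE:
        print(f"blocked: score {current} below minimum {MIN_SCORE}")
        return 1
    if baseline - current > MAX_REGRESSION:
        print(f"blocked: score regressed {baseline} -> {current}")
        return 1
    print(f"passed: score {current}")
    return 0

gate(baseline=87, current=71)  # mirrors the PR example: returns 1, blocked
```

In practice a gate like this runs as a CI step whose nonzero exit code fails the required status check, which is what actually prevents the merge.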

See your Readiness score

Find out where your AI breaks. Fix it before your users do.