Content validation (planned)
Content validation is a planned monitoring.app capability for catching “200 OK but broken” failures — for example missing keywords, changed JSON responses, or broken metadata. It is not part of the current Step 1 product.
Why it matters
Many failures do not show up as downtime. They show up as a broken user flow — and you only hear about it when a client says “something feels off.”
- A deploy ships an error banner but still returns HTTP 200.
- A CMS edit removes the “Request demo” button.
- An API returns valid JSON… but the field your app needs is gone.
- A page is accidentally set to noindex, quietly killing SEO.
What’s planned
The goal is to add response-body, JSON, and metadata checks so teams can detect correctness regressions, not just downtime.
- Keyword checks that catch error pages, missing UI text, and broken states.
- JSON checks that validate fields exist and equal the expected values.
- Metadata checks that catch accidental noindex, wrong titles, or missing OpenGraph tags.

In every case, validation should fail the check even when the server still responds.
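As a rough sketch of the behavior described above, a check could evaluate the response body against `contains`/`not_contains` rules and fail even on a 200. Everything here is hypothetical (the `Expectation` shape and `evaluate` function are illustration only, not a current API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of keyword validation: mirrors the planned
# contains / not_contains rules, but is not part of the current product.

@dataclass
class Expectation:
    status: int = 200
    contains: list = field(default_factory=list)
    not_contains: list = field(default_factory=list)

def evaluate(status: int, body: str, expect: Expectation) -> list:
    """Return a list of failure messages; an empty list means the check passes."""
    failures = []
    if status != expect.status:
        failures.append(f"expected HTTP {expect.status}, got {status}")
    for needle in expect.contains:
        if needle not in body:
            failures.append(f"missing required text: {needle!r}")
    for needle in expect.not_contains:
        if needle in body:
            failures.append(f"found forbidden text: {needle!r}")
    return failures

# A 200 response whose body shows an error banner still fails the check.
print(evaluate(200, "<h1>Something went wrong</h1>",
               Expectation(not_contains=["Something went wrong"])))
```

The key design point is that HTTP status and body validation are evaluated independently, so a “200 OK but broken” page produces a failure.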
Planned examples
These examples show planned validation rules, not a live feature in the current product.
Planned example: confirm critical UI text is present.

```yaml
checks:
  - name: Login page keyword
    url: https://app.example.com/login
    expect:
      status: 200
      contains: "Sign in"
```

Planned example: fail the check when an error banner or maintenance message appears.
```yaml
checks:
  - name: No error banner
    url: https://app.example.com/
    expect:
      status: 200
      not_contains:
        - "Something went wrong"
        - "Internal Server Error"
```

Planned example: validate API correctness when a response is reachable but wrong.
```yaml
checks:
  - name: API health
    url: https://api.example.com/health
    expect:
      status: 200
      json:
        - path: "$.status"
          equals: "ok"
        - path: "$.build.commit"
          exists: true
```

Planned example: catch accidental noindex, wrong titles, or missing metadata.
```yaml
checks:
  - name: Homepage metadata
    url: https://example.com/
    expect:
      status: 200
      title_contains: "Example"
      meta:
        - name: "robots"
          not_contains: "noindex"
```

Who it could help
If shipped, it could help teams:

- Prove what broke and when it broke, reducing “it seems down?” client messages.
- Catch regressions right after deploys, before support tickets spike.
- Validate that key conversion paths still work, beyond basic uptime.
Tell us what you’d want to validate and we’ll use that to shape rollout priority.
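For readers curious how the planned JSON rules (`path`, `equals`, `exists`) might be applied to a response body, here is a minimal sketch. It assumes simple `$.a.b` dotted paths only; the `resolve` and `check` helpers are hypothetical, and a real implementation would likely use a full JSONPath library:

```python
import json

_MISSING = object()  # sentinel: distinguishes "absent" from a null value

def resolve(doc, path: str):
    """Resolve a simple '$.a.b' dotted path; return _MISSING if any key is absent."""
    node = doc
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return _MISSING
        node = node[key]
    return node

def check(body: str, rules: list) -> list:
    """Apply planned-style JSON rules to a response body; return failure messages."""
    failures = []
    doc = json.loads(body)
    for rule in rules:
        value = resolve(doc, rule["path"])
        if rule.get("exists") and value is _MISSING:
            failures.append(f"{rule['path']} is missing")
        if "equals" in rule and value != rule["equals"]:
            failures.append(f"{rule['path']} != {rule['equals']!r}")
    return failures

# A reachable-but-wrong response: status says "degraded" and the commit is gone.
print(check('{"status": "degraded", "build": {}}',
            [{"path": "$.status", "equals": "ok"},
             {"path": "$.build.commit", "exists": True}]))
```

This is the “reachable but wrong” case from the API health example: the endpoint answers with valid JSON and a 200, yet both rules fail.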