How it works
One place to see your AI stack, ensure it meets real standards, and make every build smarter.
Plug into the tools your teams already use
Connect identity and your build stack. Experiments, integrations, and shadow usage surface automatically. No spreadsheets, no chasing links.
See exactly what exists, and how it scores
Seven trust dimensions, one Liveform Rating. Red, Amber, or Green: a precise answer to whether it's safe to share, deploy, or demonstrate.
Fix it now and ship smarter next time
Structured fix guidance for every finding: prompts and Markdown with exact replacement values. A .liveform baseline carries what you learned into the next build.
The problem
Nobody has a complete picture of what exists.
AI experiments, integrations, and shadow usage sprawl across teams, tools, and owners. Nobody sees the full picture: what exists, what's duplicated, or what risk each build carries across access control, compliance, brand alignment, IP leakage, inheritance, permission scope, and production viability.
Liveform gives organisations one place to see everything, ensure every build meets their standards, and make every next build smarter than the last.
The .liveform output
The file that travels with the work.
From your baseline, Liveform generates a structured .liveform config file. It does not constrain the builder. It informs the AI tool. From the first keystroke, the tool already knows what colours to use, what data it cannot touch, what compliance framework applies.
Every next build inherits what you learned. Governance that compounds.
```yaml
# Acme · liveform.config
organisation:
  name: "Acme"
  jurisdiction: IE-EU
  data_controller_id: "IE-1234567X"

brand:
  primary: "#0D0D14"
  secondary: "#1A1919"
  accent: "#003d3b"
  font_primary: "Inter"
  font_mono: "IBM Plex Mono"

compliance:
  frameworks: [GDPR, NIS2]
  auth_required: true
  consent_required: [analytics, third_party_sharing]
  retention_days: 90

data:
  prohibited:
    - customer_pii
    - live_production_db
    - internal_api_keys
    - unreleased_product_data
```
The Full Picture
The OS for AI.
Surface, govern, and improve every AI experiment, integration, and deployment in one place. Discover what exists, score it in seven trust dimensions, get exact fix guidance, and carry a .liveform baseline to every build.
| Integration | Frequency of use | Exposure | Remediation |
|---|---|---|---|
| OpenAI | 312k calls/wk · 214 active seats | AMBER | Map retention to liaison memo; enforce MFA group on 2 workspaces. |
| Microsoft Copilot | 198k calls/wk · org-wide | GREEN | None required · continue DLP phase-2 rollout. |
| Anthropic | 84k calls/wk · 4 projects | AMBER | Narrow one project key scope to documented data classes. |
| Vercel | 156k deploy events/wk | RED | Require auth on preview tier for PII-class repos; align to external-demo standard. |
| Claude.ai | 26k sessions/wk | GREEN | Monitor personal accounts outside Team (shadow list). |
| Internal LLM gateway | 62k calls/wk | AMBER | Complete model card for fine-tuned route per ML governance upload. |
Try it live
Experience Liveform for yourself.
Explore a real enterprise workspace: filter it, open any build, and see how governance scores sit next to exact fix guidance.
Join the beta
Request early access
We're onboarding a limited number of enterprise organisations in private beta. Craftform Bearing or Anchor clients receive Liveform as part of their engagement.
By requesting access you agree to our privacy policy. We'll only use your details to contact you about Liveform.
If signup fails (for example your network blocks our API), email hey@liveform.ai.
Six things that change when you have a complete picture of your AI usage.
Leadership gets an answer they couldn't access before.
A structured view of every AI build: scored, classified, tracked. Exportable reports give CDOs, CTOs, and boards a clear answer.
Builders stop second-guessing and start shipping.
With the .liveform baseline in their build tool, every prompt starts compliant. Builders move faster because the standards are already in place.
Compliance risk is found before anything ships.
GDPR exposure, PII leakage, unapproved data connections: all caught before anything is shared. Early correction is inexpensive.
The gap between experiment and production becomes navigable.
Production viability is scored on every build. The path from experiment to deployment becomes explicit.
Institutional memory survives organisational change.
When people leave, the knowledge stays. Every experiment, integration, score, finding, and fix is in the workspace.
The system gets smarter the more it is used.
As your workspace grows, Liveform surfaces intelligence no team could see alone: converging experiments, duplicate integrations, archived builds now viable.
Frequently asked questions
What is Liveform?
Liveform is one place to see your AI stack. It discovers experiments, integrations, agents, and shadow AI usage from the tools your teams already use.
How does Liveform find AI work in our organisation?
You connect identity and your build stack so experiments, integrations, and shadow usage surface automatically.
What does a Liveform Rating cover?
Ratings assess seven dimensions: access control, compliance exposure, brand consistency, intellectual property risk, inheritance of standards, scope of capability, and production viability. Leadership gets a consistent picture across every build.
Is Liveform generally available?
Liveform is in a private enterprise beta with a limited number of organisations. You can request early access on this page; Craftform Bearing or Anchor clients may receive Liveform as part of their engagement.
Who is behind Liveform?
Liveform is built by Craftform, a boutique design and AI consultancy that works with senior leaders closing the gap between AI ambition and production reality.