Kirimana.
Industries

One platform.
Every regulator's question, answerable.

Kirimana is the open-source, AI-native data contract and automation platform. The capabilities are the same in every industry — owner on every contract, classification before any model sees the data, lineage from business goal to source row, audit on every AI call. What changes is the regulator and the question. Below: how those capabilities meet the realities of four industries we know well.

Cross-industry

Every industry is becoming a data industry — and an AI-governed one.

The pattern is the same in manufacturing, retail, telco, energy, logistics, and SaaS: data sits in many platforms, the AI conversation is moving faster than the governance, and someone has to be able to answer the regulator (or the auditor, or the board) when they ask where a number came from. Kirimana doesn't replace your data platform. It puts a contract on top of it that travels with the data — across teams, across vendors, across years.

What's actually hard
  • A patchwork of data tools — warehouse, lakehouse, an old data lake, vendor-specific governance UI for each — and no single canonical truth about what each dataset means.
  • AI usage is accelerating, but security and compliance can't tell which model saw which classification of data.
  • Schema drift breaks dashboards on Monday morning; someone discovers it from a complaint, not a check.
  • When the company moves platforms (cloud migration, M&A, vendor change), the governance work is redone from scratch.
What Kirimana brings
One canonical contract, regardless of platform
Open Data Contract Standard (ODCS) v3 is the wire format. The same contract runs unchanged on Databricks, Microsoft Fabric, Trino, and DuckDB today, with Postgres and Snowflake on the adapter roadmap — wherever your data lives now and wherever it lives next.
Classification before AI sees the data
Every Large Language Model (LLM) call is gated by data classification. Restricted data never leaves your tenant. Every call is audit-logged: prompt, response, model, cost, caller. No direct Software Development Kit (SDK) use anywhere in the platform; every model call goes through the gateway.
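In code, the gate reduces to an ordered classification ladder and a per-model ceiling. A minimal sketch, not Kirimana's actual API — the model names and policy table are hypothetical, and the four tiers follow the public → internal → confidential → restricted ladder described later on this page:

```python
from enum import IntEnum

class Classification(IntEnum):
    # Ordered ladder: a higher value means more sensitive data.
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Hypothetical policy table: the highest tier each model endpoint is approved for.
MODEL_CEILING = {
    "public-hosted-model": Classification.INTERNAL,
    "tenant-private-model": Classification.RESTRICTED,
}

def gate_call(model: str, data: Classification) -> bool:
    """Allow the call only if the model is approved for this tier.

    Unknown models default to the lowest ceiling (fail closed).
    """
    ceiling = MODEL_CEILING.get(model, Classification.PUBLIC)
    return data <= ceiling
```

A restricted contract can reach the tenant-private endpoint but never the public one; an unlisted model only ever sees public data.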
Catch breakage before production
Pull-Request-time contract linting blocks merges that break a downstream contract — schema drift, missing owner, wrong classification, undocumented Personally Identifiable Information (PII) — before the change reaches Monday morning.
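Conceptually, the PR-time lint is a diff between the proposed contract and the one on the main branch. A minimal sketch of the checks named above, with hypothetical field names (`owner`, `columns`, `pii`), not Kirimana's actual contract schema:

```python
def lint_contract_change(old: dict, new: dict) -> list[str]:
    """Return every reason this contract change should block the merge."""
    errors = []
    if not new.get("owner"):
        errors.append("missing owner")
    old_cols, new_cols = old.get("columns", {}), new.get("columns", {})
    for name, col in old_cols.items():
        if name not in new_cols:
            errors.append(f"schema drift: column '{name}' removed")
        elif new_cols[name].get("type") != col.get("type"):
            errors.append(f"schema drift: column '{name}' changed type")
    for name, col in new_cols.items():
        if col.get("pii") and not col.get("classification"):
            errors.append(f"undocumented PII: column '{name}' lacks a classification")
    return errors
```

An empty list means the merge may proceed; anything else blocks it before the change reaches production.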
Lineage from business goal to source row
Every reporting goal traces forward to the contracts and tables that produce it. When the Chief Financial Officer (CFO) asks where revenue is recomputed this quarter, the answer is in the lineage — not in someone's head.
No vendor lock-in
Apache-2.0 across every edition. Fork it, self-host it, or pay us for support. The contracts you write are yours. The platform you run them on is your choice.
In practice — Cloud migration without redoing governance

A mid-size company moves from a self-managed data warehouse to a managed lakehouse. Without Kirimana, the contracts, classifications, and audit history are tied to the old tooling — every dataset gets re-curated by hand. With Kirimana, the canonical contracts and the audit trail are platform-agnostic; the platform adapter changes, the governance does not.

Banking + financial services

When the regulator asks where a number came from, you have an answer.

Banking data isn't just sensitive — it's reportable. Basel Committee on Banking Supervision Standard 239 (BCBS 239) demands lineage and timeliness on risk data. The European Union's Digital Operational Resilience Act (DORA) demands operational evidence on every change. The European Union Artificial Intelligence Act (EU AI Act) demands classification and audit on every model. Markets in Financial Instruments Directive II (MiFID II) demands traceability on transaction reporting. None of these regulators care which cloud you're on. They care that you can produce the trail. Kirimana gives you the trail by design — not as a quarterly project.

What's actually hard
  • Risk data aggregation across trading, treasury, and retail divisions — a BCBS 239 obligation — is built on lineage that only one engineer remembers.
  • AI-assisted decisions in credit, fraud, and Anti-Money Laundering (AML) need to be explainable to model risk and to the regulator. Today they aren't.
  • Schema drift in a feeder system breaks regulatory reports two weeks later, with no warning.
  • Sensitive customer data leaves the tenant when a developer pastes it into a public AI tool.
What Kirimana brings
BCBS 239 / DORA-grade lineage
Goal-to-data lineage — regulator → report → contract → table → source — is a first-class object. Tag a contract with a regulatory goal once; the trace is queryable, exportable, and reproducible at any point in history.
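The trace can be pictured as a walk over a small lineage graph. A sketch with hypothetical node names, not Kirimana's query interface:

```python
# Hypothetical lineage edges: each node lists what it is produced from.
LINEAGE = {
    "regulator:BCBS-239": ["report:tier1-capital-ratio"],
    "report:tier1-capital-ratio": ["contract:risk.capital"],
    "contract:risk.capital": ["table:risk.capital_v3"],
    "table:risk.capital_v3": ["source:trading.positions", "source:treasury.ledger"],
}

def trace(node: str) -> list[str]:
    """Depth-first walk from a regulatory goal down to the source rows."""
    chain = [node]
    for upstream in LINEAGE.get(node, []):
        chain.extend(trace(upstream))
    return chain
```

Walking from `regulator:BCBS-239` yields the full regulator → report → contract → table → source chain; reversing the edges answers the impact question — what breaks downstream when a source changes.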
EU AI Act compliance, on every prompt
Every AI call is classified, gated, and audit-logged. The audit row holds prompt-hash, response-hash, model, classification of the contract in scope, caller, token counts, and status. The same log produces the EU AI Act Article 12 record-keeping evidence and the DORA operational-resilience trail.
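As an illustration of what such an audit row carries, here is a minimal sketch; the field names mirror the list above but are hypothetical, not Kirimana's schema. Hashing the prompt and response keeps evidence of the call without the log itself becoming sensitive data:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditRow:
    prompt_hash: str
    response_hash: str
    model: str
    contract_classification: str
    caller: str
    tokens_in: int
    tokens_out: int
    status: str

def record_call(prompt: str, response: str, **meta) -> AuditRow:
    # Store hashes, not raw text: the row proves what was sent and
    # received without retaining the content.
    return AuditRow(
        prompt_hash=hashlib.sha256(prompt.encode()).hexdigest(),
        response_hash=hashlib.sha256(response.encode()).hexdigest(),
        **meta,
    )
```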
Restricted data stays in the tenant
Classification on every contract and every column. Restricted data — Customer Identifiable Information, Material Non-Public Information (MNPI), trade secrets — is filtered before any AI gateway call is allowed. There is no path from a classified column to a public model.
Schema-drift alarms before close-of-business
Pull-Request-time linting blocks contract-breaking changes; downstream systems are notified the moment a producing contract changes shape. Risk reports don't break on Monday — the change is caught on Friday.
Hub-and-spoke governance for divisional autonomy
Trading, treasury, retail, and corporate banking each own their domain contracts, while the central platform team owns the global classification policy and the regulatory templates. Domains move at their own speed; central holds the line.
Audit trails that survive Article 17 requests
When a customer exercises General Data Protection Regulation (GDPR) Article 17, Kirimana redacts the audit row instead of deleting it — the regulator still sees that a redaction happened, when, by whom, and under which legal basis. Erasure obligations and audit obligations both met.
In practice — BCBS 239 evidence on demand

An internal audit team is asked to produce, within five business days, the full lineage of the Tier 1 capital ratio number reported last quarter — every input, every transformation, every owner. With Kirimana, the goal-to-data lineage and the audit log together produce that evidence pack from a single command. The auditor reads it; the regulator sees the same trail.

Public sector + government

Sovereignty, transparency, accountability — without locking yourself to one vendor.

Public-sector data programmes operate under a different gravity. Procurement is multi-vendor by mandate. Data residency is sovereign — citizens' data stays in jurisdiction. Transparency obligations cut both ways: the public has a right to know, and the agency has an obligation to redact what the law says must be redacted. Budgets per agency are smaller than the headlines suggest. Open source isn't a preference; it's a procurement strategy. Kirimana fits the public-sector contract because it was designed open from the first commit.

What's actually hard
  • Vendor lock-in conflicts directly with EU sovereign-cloud and procurement-diversification policies.
  • Citizens' data must stay in jurisdiction. The platform must self-host on European Union (EU) infrastructure or on-premises — and prove it.
  • Multiple agencies want to share contract definitions (a 'person', an 'organisation', a 'case file') but each runs a different stack.
  • Transparency requests, freedom-of-information requests, GDPR Article 15 requests — and Article 17 erasure — collide unless the audit trail is built for both.
What Kirimana brings
Apache-2.0 from end to end
No license cost, no per-seat ceiling, no enterprise-tier gate. Fork it. Self-host it. Procure support from us or from a partner. Apache-2.0 across every edition, every adapter, every integration.
Self-hosted on sovereign infrastructure
Deploys on Azure Kubernetes Service (AKS) in EU regions, on Amazon Web Services (AWS) in EU regions, on on-premises Kubernetes, or on a single self-hosted node. The AI gateway is provider-agnostic by design; an air-gapped Ollama-backed LLM provider is on the near-term roadmap so fully offline tenants can run with local AI as well.
Federated contract library across agencies
A national reference frame — what a 'person', a 'case', a 'permit' means — can be published once and reused by every agency that adopts it. Each agency keeps autonomy on its own contracts; the shared semantics are versioned and traceable.
Non-technical governance interface
Domain stewards in a public-sector agency are rarely engineers. Kirimana's governance UI lets a steward review a contract, approve a classification, and trace a lineage — without writing YAML.
Article 17 redaction preserves the trail
GDPR Article 17 erasure obligations and operational-audit obligations contradict each other if you treat erasure as a SQL DELETE. Kirimana redacts the audit row — the trace remains, the personal data does not. Both regulators are satisfied.
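A redaction-instead-of-deletion sketch, with a hypothetical row shape and an illustrative choice of which fields count as personal; the point is that the row survives with its redaction metadata while the personal data does not:

```python
def redact_audit_row(row: dict, actor: str, when: str, legal_basis: str) -> dict:
    """Return a copy with personal data erased but the trail intact."""
    personal_fields = {"caller", "prompt_hash", "response_hash"}  # illustrative
    redacted = {k: "[REDACTED]" if k in personal_fields else v
                for k, v in row.items()}
    # The regulator still sees that a redaction happened, when, by whom,
    # and under which legal basis; the row itself is never deleted.
    redacted["redaction"] = {"by": actor, "at": when, "legal_basis": legal_basis}
    return redacted
```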
Owner mandatory on every contract
Public-sector accountability is statutory. Every Kirimana contract names a human owner; the AI gateway refuses to operate on a contract without one. When the public asks who decided this, there's an answer.
In practice — Multi-agency reference data without a megadeal

Three agencies in the same ministry need a shared definition of 'organisation' that traces to the national business registry. Without Kirimana, this becomes a six-figure procurement for a master-data tool. With Kirimana, one agency publishes the canonical contract to a federated library; the other two adopt it, version-pin it, and continue running their existing platforms. No vendor consolidation required.

Healthcare + life sciences

Patient data, research data, and AI assistance — under one classification gate.

Healthcare data flows in three currents that should not mix: clinical operations (a patient's record, in care), research (de-identified data under ethics approval), and population-level analytics (aggregated, public-health). The boundaries are statutory. The Health Insurance Portability and Accountability Act (HIPAA) in the United States, GDPR Article 9 special-category data in Europe, ethics-board approvals on every research dataset, and the rapidly evolving guidance on Artificial Intelligence in clinical decision support — all converge on the same operational question: can you prove which data, under which consent, fed which algorithm? Kirimana makes that question answerable.

What's actually hard
  • Electronic Health Record (EHR) data exports to a research warehouse without a clear consent boundary; a Data Protection Officer (DPO) finds out through a complaint.
  • AI-assisted diagnostic models are trained on data classifications nobody can reproduce six months later.
  • Clinical trial integrity (Good Clinical Practice / GxP) requires a chain of custody from consent form to published result; today that chain is reconstructed manually for each audit.
  • Hospital information-technology budgets won't support a per-seat governance product on top of the EHR licence.
What Kirimana brings
Classification ladder enforced on every AI call
Today every contract carries one of four classifications (public, internal, confidential, restricted), and the AI gateway refuses to send restricted data to a model that isn't approved for that tier — at the contract level, not in code review. A clinical-specific layer (Protected Health Information / consented research / anonymised aggregate) on top of the generic ladder is on the near-term roadmap.
Consent + ethics-board approvals — coming as first-class metadata
Today the GDPR Article 9 special-category flag and a 'health' PII category are structured fields on every contract. Native fields for consent type, ethics-board approval reference, and consent expiry — with apply pipelines refusing to run past expiry — are the next addition to the canonical contract spec, tracked publicly.
Audit-log every AI call
United States Food and Drug Administration (FDA) guidance on Artificial Intelligence / Machine Learning-enabled medical devices, and the EU AI Act's high-risk classification for clinical decision support, both demand traceability on every inference. Kirimana logs every call — including the contract it was acting on, the consent it relied on, and the model version.
Lineage from electronic health record to publication
EHR → research data warehouse → cohort selection → analysis → published table or figure. Goal-to-data lineage runs the full chain. When a journal asks for the analysis trail, it's an export, not a manuscript.
On-premises deployment today, air-gapped AI on the roadmap
Hospitals that can't move clinical data to public cloud can run Kirimana on-premises on Kubernetes today. The AI gateway is provider-agnostic; a local Ollama-backed LLM — keeping AI inside the hospital network — is on the near-term roadmap. Until then, on-premises deployments either operate without AI assistance or route AI calls through an approved external endpoint via the audit-logged gateway.
Owner mandatory + classification mandatory
Every contract has a named owner and a classification. The platform refuses contracts that don't. The DPO has someone to call; the auditor has the trail.
In practice — Classification-aware joins, on every apply

An analyst writes a query that joins data classified as restricted (containing identifiable clinical attributes) with data classified as confidential research output. Without Kirimana, the join runs and a Data Protection Officer is notified weeks later — if at all. With Kirimana, the AI policy gate and the contract's classification refuse the join at apply-time, the lineage shows the upstream sensitivity, and a second query is written within the boundary. Once the clinical-tier layer and consent-expiry fields ship on the spec, the same enforcement extends to the ethics-approval lifecycle.
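The refusal rule is simple to state: a join inherits the more sensitive classification of its inputs, and the result must stay under the caller's approved ceiling. A hypothetical sketch, not Kirimana's policy engine:

```python
LADDER = ["public", "internal", "confidential", "restricted"]

def join_classification(left: str, right: str) -> str:
    """A join result inherits the more sensitive of its two inputs."""
    return max(left, right, key=LADDER.index)

def gate_join(left: str, right: str, approved_ceiling: str) -> str:
    """Refuse the join at apply-time if the result would exceed the ceiling."""
    result = join_classification(left, right)
    if LADDER.index(result) > LADDER.index(approved_ceiling):
        raise PermissionError(
            f"join yields '{result}' data, above approved ceiling "
            f"'{approved_ceiling}'"
        )
    return result
```

Joining restricted clinical attributes with confidential research output yields a restricted result, so a query approved only up to confidential is refused before it runs.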

Don't see your industry?

The capabilities are universal — owner-on-every-contract, classification-before-AI, lineage from goal to source. The regulators differ; the architecture doesn't. Talk to Kiri about your specific stack and obligations, or request early access and we'll work the mapping with you directly.