AI Trust & Compliance | Octostar

Trustworthy AI for Investigations.
Built to the Global Standard.

AI that affects people's rights must be transparent, explainable, and subject to human oversight. Octostar is built on these principles — and targets full regulatory compliance one year ahead of schedule.

The Global Consensus

This Isn't Just an EU Play. The World Is Converging.

A global consensus is forming around how AI should be built and governed — especially AI used in high-stakes domains like law enforcement, justice, and public safety.

2019

OECD AI Principles

Endorsed by 47 countries including the US, UK, Canada, Japan, and every EU member state. Five core principles: transparency, accountability, robustness, fairness, and human oversight. The OECD's definition of an AI system is now used by the EU, the Council of Europe, the United States, and the United Nations in their own frameworks.

2023

NIST AI Risk Management Framework (US)

Developed by the US National Institute of Standards and Technology in collaboration with the private and public sectors. Voluntary but globally influential, it defines seven characteristics of trustworthy AI and organises practice around four core functions — Govern, Map, Measure, Manage — that organisations worldwide are adopting as a de facto standard.

2024

Council of Europe Framework Convention on AI

The first legally binding international treaty on AI, signed by over 50 countries including the United States, the United Kingdom, Canada, Israel, and the European Union. It requires signatories to ensure AI systems respect human rights, democratic values, and the rule of law.

2024

EU AI Act (Regulation 2024/1689)

The world's most detailed enforceable AI regulation. It classifies AI used in law enforcement investigations as high-risk and imposes specific requirements around transparency, logging, human oversight, risk management, and technical robustness. High-risk enforcement begins December 2027.

The common thread across all four frameworks: AI that touches people's rights needs to be transparent, traceable, robust, and subject to human oversight.

Why This Matters

Higher Standards Enable Higher Stakes

When an investigator uses AI to surface connections across thousands of documents, summarise a complex case file, or flag an anomaly in financial records — the stakes are real. A wrong answer can mislead an investigation. An opaque system can erode trust in the institutions that use it.

Transparent, explainable, auditable AI is what lets your organisation deploy AI in higher-stakes scenarios with confidence. It lets an analyst decide, according to the stakes at hand, whether to act on an AI-generated lead or to double-check its output first.

The EU AI Act defines four risk tiers: Prohibited, High-Risk, Limited Risk, and Minimal Risk. It classifies law enforcement AI as High-Risk, requiring the highest standard of compliance from both vendors and users.

Trust is the prerequisite for autonomy. The more trustworthy the AI, the more your investigators can do with it.

What We Build

How Octostar Delivers Trustworthy AI

Transparent AI Outputs

Every AI-generated response is clearly labelled. Investigators always know when they're looking at AI-assisted analysis versus source data. Free-text AI features carry visible disclaimers. There's never confusion about what was written by a human and what was generated by a machine.

Aligned with: EU AI Act Art. 13 · NIST AI RMF Transparency · OECD Principle 1.3

Full Source Traceability

AI answers link back to the original documents, passages, and evidence they were derived from. When a precise backlink isn't possible, the system identifies which documents it drew from and invites the analyst to verify. No black boxes, no unexplained conclusions.

Aligned with: EU AI Act Art. 13, 14 · NIST Explainability · OECD Principle 1.3
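
To make the idea concrete, source-linked answers can be modelled as a response object that carries its citations. This is a hypothetical sketch only; the `SourceRef` and `AIAnswer` names are invented for illustration and do not reflect Octostar's actual data model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SourceRef:
    """A backlink from an AI answer to the evidence it was derived from."""
    document_id: str
    passage: Optional[str] = None  # None when a precise backlink isn't possible

@dataclass
class AIAnswer:
    text: str
    sources: List[SourceRef] = field(default_factory=list)

    def needs_manual_verification(self) -> bool:
        # True when any cited document lacks a passage-level backlink,
        # in which case the analyst is invited to verify it directly.
        return any(ref.passage is None for ref in self.sources)
```

An answer with only passage-level citations can be surfaced as fully traced, while one citing a document without a precise passage prompts the analyst to verify.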

Comprehensive Audit Logging

Every AI interaction is automatically recorded: what was asked, which documents were retrieved, what the AI responded, and who ran the query. Logs are tamper-resistant, searchable, and retained for the periods your organisation requires.

Aligned with: EU AI Act Art. 12 · NIST Accountability · OECD Principle 1.5
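
One common technique for tamper-resistant logging is hash-chaining: each record includes a hash of the previous one, so editing any past entry invalidates every hash after it. The sketch below is purely illustrative and does not describe Octostar's actual logging implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, *, user, query, documents, response):
    """Append an audit record whose hash chains to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,            # who ran the query
        "query": query,          # what was asked
        "documents": documents,  # which documents were retrieved
        "response": response,    # what the AI responded
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev_hash"] != prev or expected != record["hash"]:
            return False
        prev = record["hash"]
    return True
```

Verification then becomes a routine integrity check rather than a forensic exercise: any altered or deleted record causes `verify_chain` to fail.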

Human Oversight by Design

AI in Octostar recommends — it doesn't decide. AI-generated flags and alerts are visually distinguished from rule-based or manual ones, and always show what triggered them. Investigators can override, dismiss, or escalate any AI output based on their own judgment.

Aligned with: EU AI Act Art. 14 · NIST Human-AI Interaction · OECD Principle 1.2

Technical Robustness

The platform defends against prompt injection and adversarial inputs in ingested documents. If the AI service becomes unavailable, Octostar continues to operate — search, document analysis, and link charts work independently. AI features degrade gracefully, never catastrophically.

Aligned with: EU AI Act Art. 15 · NIST Security & Resilience · OECD Principle 1.4
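
Graceful degradation of this kind is often implemented as a simple fallback path around the AI call. A minimal sketch, with invented stand-ins (`ai_client`, `keyword_index`) rather than actual Octostar APIs:

```python
def summarise_case(case_id, ai_client, keyword_index):
    """Return an AI summary when possible; fall back to plain keyword
    search when the AI service is unreachable, so work can continue."""
    try:
        return {"mode": "ai", "result": ai_client.summarise(case_id)}
    except ConnectionError:
        # AI unavailable: search still works independently of the AI service
        return {"mode": "fallback", "result": keyword_index.search(case_id)}
```

The caller always gets a usable result, and the `mode` field makes it explicit whether the answer was AI-assisted.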

Continuous Risk Management

A formal risk register tracks known risks, foreseeable misuse scenarios, and mitigation measures for every AI feature. This is a living process — informed by real-world deployment feedback and updated throughout the product lifecycle.

Aligned with: EU AI Act Art. 9 · NIST Govern & Manage · OECD Principle 1.5

Structured Incident Reporting

A dedicated channel connects your organisation to our product and engineering teams for AI-related issues. We monitor patterns across deployments and maintain formal escalation procedures for serious incidents.

Aligned with: EU AI Act Art. 72, 73 · OECD AI Incident Reporting Framework (2025)

Customised Compliance & Organisation Training

We provide hands-on training tailored to your organisation's specific scenarios, workflows, and regulatory context. Real examples, practical exercises, and guidance your teams can apply immediately — so compliance becomes operational knowledge, not just policy.

Aligned with: EU AI Act Art. 4 · NIST AI Literacy · OECD Principle 1.2

Training & Partnership

We Work With Your Organisation

Trustworthy AI isn't just about software — it's about the people who use it. Every framework, from NIST to the EU AI Act, emphasises that operators of high-stakes AI systems need the competence, training, and authority to oversee the technology effectively. As your provider, it's our job to make that possible.

Investigator Training Programme

Every Octostar deployment includes a dedicated training programme for investigators and analysts, covering how the AI works, practical rules for AI-assisted investigation, hands-on scenarios with real examples, operational obligations under applicable regulation, and how to report issues and when to escalate. Training is delivered in your language, tailored to your operational context, and includes attestation for compliance records.

Instructions for Use

Every deployment ships with comprehensive documentation covering system capabilities, known limitations, accuracy characteristics, human oversight procedures, and logging mechanisms.

Ongoing Support

Annual refresher training aligned to product updates. A single point of contact for AI-related issues. Collaborative review of your deployment's risk profile as your use of the platform evolves.

The Five Rules of AI-Assisted Investigation

  1. Verify — always check AI outputs against source data
  2. Don't trust numbers — AI can hallucinate statistics
  3. Evaluate context — AI may miss nuance or cultural specifics
  4. Document your process — record why you accepted or rejected AI outputs
  5. Never delegate judgment — the investigator decides, not the AI

Sovereignty & Security

Your Data. Your Infrastructure. Your Control.

Octostar is headquartered in Ireland with R&D in Italy. The platform is designed for on-premise, air-gapped deployment — your investigative data never touches the public internet, never leaves your infrastructure, and is never used to train AI models.

This isn't just good security practice. It's a compliance advantage. When your AI runs on your hardware, inside your network, under your control — the data governance, logging, and oversight requirements become significantly easier to meet, regardless of which jurisdiction's rules apply.

Roadmap

Ahead of Schedule

Today

AI transparency labels, source traceability, audit logging, human oversight controls, investigator training programme, and risk management processes are built into the product and actively refined with every release.

End of 2026

Full compliance with all applicable high-risk requirements under the EU AI Act, including technical documentation, conformity assessment, and registration. One year ahead of the regulatory deadline — and aligned with NIST, OECD, and Council of Europe standards.

December 2027

EU AI Act enforcement date for high-risk systems. Octostar customers will already be operating on a fully compliant platform.

AI Investigations Deserve
the Highest Standard

The OECD defined the principles. NIST turned them into engineering practice. The Council of Europe made them law. The EU AI Act set the most detailed requirements. Octostar builds to all of them — because investigators deserve AI they can trust, and the people subject to investigations deserve AI that's accountable.

Octostar's compliance programme is based on our interpretation of applicable frameworks as of April 2026, including Regulation (EU) 2024/1689, the NIST AI Risk Management Framework (AI RMF 1.0), the OECD Recommendation on AI, and the Council of Europe Framework Convention on AI. This page does not constitute legal advice. Regulatory frameworks are subject to ongoing implementation guidance from competent authorities.