A global consensus is forming around how AI should be built and governed — especially AI used in high-stakes domains like law enforcement, justice, and public safety.
The OECD AI Principles, endorsed by 47 countries including the US, UK, Canada, Japan, and every EU member state, name five essentials: transparency, accountability, robustness, fairness, and human oversight. The OECD's definition of an AI system is now used by the EU, the Council of Europe, the United States, and the United Nations in their own frameworks.
The NIST AI Risk Management Framework, developed by the US National Institute of Standards and Technology in collaboration with the private and public sectors, is voluntary but globally influential. It defines seven characteristics of trustworthy AI and provides a practical framework of four functions (Govern, Map, Measure, Manage) that organisations worldwide are adopting as a de facto standard.
The Council of Europe Framework Convention on AI is the first legally binding international treaty on AI, negotiated by more than 50 countries and signed by, among others, the United States, the United Kingdom, Canada, Israel, and the European Union. It requires signatories to ensure AI systems respect human rights, democratic values, and the rule of law.
The EU AI Act is the world's most detailed enforceable AI regulation. It classifies AI used in law enforcement investigations as high-risk and imposes specific requirements around transparency, logging, human oversight, risk management, and technical robustness. Enforcement for high-risk systems begins in December 2027.
The common thread across all four: AI that touches people's rights needs to be transparent, traceable, robust, and subject to human oversight.
When an investigator uses AI to surface connections across thousands of documents, summarise a complex case file, or flag an anomaly in financial records — the stakes are real. A wrong answer can mislead an investigation. An opaque system can erode trust in the institutions that use it.
Transparent, explainable, auditable AI is what lets your organisation deploy AI in higher-stakes scenarios with confidence. It lets an analyst decide, based on the stakes at hand, whether to act on an AI-generated lead or double-check it first.
Trust is the prerequisite for autonomy. The more trustworthy the AI, the more your investigators can do with it.
Every AI-generated response is clearly labelled. Investigators always know when they're looking at AI-assisted analysis versus source data. Free-text AI features carry visible disclaimers. There's never confusion about what was written by a human and what was generated by a machine.
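At the data level, that separation can be as simple as an origin tag that travels with every piece of content. A minimal sketch of the idea; the names `Origin` and `PanelContent` are illustrative, not Octostar's actual API:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    SOURCE_DATA = "source_data"        # verbatim evidence, never relabelled
    AI_GENERATED = "ai_generated"      # model output, always carries a disclaimer
    HUMAN_AUTHORED = "human_authored"  # analyst-written notes

@dataclass
class PanelContent:
    text: str
    origin: Origin
    disclaimer: str | None = None

def render_header(content: PanelContent) -> str:
    # AI-generated content is visibly labelled before it is shown.
    if content.origin is Origin.AI_GENERATED:
        return f"AI-ASSISTED ANALYSIS: {content.disclaimer or 'verify against sources'}"
    return ""
```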
AI answers link back to the original documents, passages, and evidence they were derived from. When a precise backlink isn't possible, the system identifies which documents it drew from and invites the analyst to verify. No black boxes, no unexplained conclusions.
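One way to represent that, sketched here with hypothetical names rather than Octostar's real schema: each answer carries precise backlinks where the system can produce them, plus the list of documents consulted as the fallback the analyst is invited to verify against.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    document_id: str
    passage: str                        # the span the claim was derived from
    char_range: tuple[int, int] | None  # None when a precise backlink isn't possible

@dataclass
class AIAnswer:
    text: str
    citations: list[Citation]           # precise backlinks where available
    documents_consulted: list[str]       # always populated, so analysts can verify

def needs_manual_verification(answer: AIAnswer) -> bool:
    # Answers with no precise backlink anywhere are surfaced to the analyst
    # with an explicit prompt to check documents_consulted directly.
    return not answer.citations or all(c.char_range is None for c in answer.citations)
```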
Every AI interaction is automatically recorded: what was asked, which documents were retrieved, what the AI responded, and who ran the query. Logs are tamper-resistant, searchable, and retained for the periods your organisation requires.
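Tamper resistance in append-only logs is commonly achieved by hash-chaining entries, so that altering any past record invalidates everything after it. A self-contained sketch of that general technique, not Octostar's actual logging code:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log; each entry embeds the hash of the previous one,
    so any after-the-fact edit breaks the chain and is detectable."""

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, user: str, query: str, retrieved_docs: list[str], response: str):
        entry = {
            "ts": time.time(),
            "user": user,
            "query": query,
            "retrieved_docs": retrieved_docs,
            "response": response,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        # Recompute the chain from the start; any mismatch means tampering.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```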
AI in Octostar recommends — it doesn't decide. AI-generated flags and alerts are visually distinguished from rule-based or manual ones, and always show what triggered them. Investigators can override, dismiss, or escalate any AI output based on their own judgment.
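In data terms, that distinction can travel with the flag itself. A sketch with illustrative names, assuming a simple origin/trigger/action model:

```python
from dataclasses import dataclass
from enum import Enum

class FlagOrigin(Enum):
    AI = "ai"          # rendered with a distinct visual treatment
    RULE = "rule"
    MANUAL = "manual"

class AnalystAction(Enum):
    OVERRIDE = "override"
    DISMISS = "dismiss"
    ESCALATE = "escalate"

@dataclass
class Flag:
    origin: FlagOrigin
    trigger: str                                  # what fired the flag, always shown
    analyst_action: AnalystAction | None = None   # None until a human decides

def apply_action(flag: Flag, action: AnalystAction) -> Flag:
    # The AI only ever proposes; an analyst's action is what finalises a flag.
    flag.analyst_action = action
    return flag
```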
The platform defends against prompt injection and adversarial inputs in ingested documents. If the AI service becomes unavailable, Octostar continues to operate — search, document analysis, and link charts work independently. AI features degrade gracefully, never catastrophically.
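A minimal sketch of that fallback pattern, with stand-in functions for the AI service and the non-AI pipeline (both hypothetical, for illustration only):

```python
import random

class AIServiceUnavailable(Exception):
    pass

def ai_summarise(case_id: str) -> str:
    # Stand-in for a call to the AI backend; assume it can fail or time out.
    if random.random() < 0.5:
        raise AIServiceUnavailable("AI backend unreachable")
    return f"AI summary of case {case_id}"

def metadata_digest(case_id: str) -> str:
    # Non-AI fallback built from indexed fields; does not depend on the AI service.
    return f"Case {case_id}: indexed documents, entities, and links remain available"

def summarise_case(case_id: str) -> str:
    """Degrade gracefully: a failing AI service never takes search,
    document analysis, or link charts down with it."""
    try:
        return ai_summarise(case_id)
    except AIServiceUnavailable:
        return metadata_digest(case_id)
```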
A formal risk register tracks known risks, foreseeable misuse scenarios, and mitigation measures for every AI feature. This is a living process — informed by real-world deployment feedback and updated throughout the product lifecycle.
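A register entry might look like the following sketch; the field names and the example risk are hypothetical, not drawn from Octostar's actual register:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    feature: str                  # the AI feature this risk attaches to
    risk: str                     # known risk or foreseeable misuse scenario
    mitigation: str
    last_reviewed: date
    deployment_feedback: list[str] = field(default_factory=list)

# A living register: entries are revisited as deployments surface new findings.
register = [
    RiskEntry(
        feature="document summarisation",
        risk="summary omits material relevant to the case",
        mitigation="mandatory source backlinks; analyst sign-off on summaries",
        last_reviewed=date(2026, 4, 1),
    ),
]
```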
A dedicated channel connects your organisation to our product and engineering teams for AI-related issues. We monitor patterns across deployments and maintain formal escalation procedures for serious incidents.
We provide hands-on training tailored to your organisation's specific scenarios, workflows, and regulatory context. Real examples, practical exercises, and guidance your teams can apply immediately — so compliance becomes operational knowledge, not just policy.
Trustworthy AI isn't just about software — it's about the people who use it. Every framework, from NIST to the EU AI Act, emphasises that operators of high-stakes AI systems need the competence, training, and authority to oversee the technology effectively. As your provider, it's our job to make that possible.
Every Octostar deployment includes a dedicated training programme for investigators and analysts, covering how the AI works, practical rules for AI-assisted investigation, hands-on scenarios with real examples, operational obligations under applicable regulation, and how to report issues and when to escalate. Training is delivered in your language, tailored to your operational context, and includes attestation for compliance records.
Every deployment ships with comprehensive documentation covering system capabilities, known limitations, accuracy characteristics, human oversight procedures, and logging mechanisms.
Annual refresher training aligned to product updates. A single point of contact for AI-related issues. Collaborative review of your deployment's risk profile as your use of the platform evolves.
Octostar is headquartered in Ireland with R&D in Italy. The platform is designed for on-premise, air-gapped deployment — your investigative data never touches the public internet, never leaves your infrastructure, and is never used to train AI models.
This isn't just good security practice. It's a compliance advantage. When your AI runs on your hardware, inside your network, under your control — the data governance, logging, and oversight requirements become significantly easier to meet, regardless of which jurisdiction's rules apply.
AI transparency labels, source traceability, audit logging, human oversight controls, investigator training programme, and risk management processes are built into the product and actively refined with every release.
The next milestone: full compliance with all applicable high-risk requirements under the EU AI Act, including technical documentation, conformity assessment, and registration, a year ahead of the regulatory deadline and aligned with NIST, OECD, and Council of Europe standards.
When EU AI Act enforcement for high-risk systems begins in December 2027, Octostar customers will already be operating on a fully compliant platform.
The OECD defined the principles. NIST turned them into engineering practice. The Council of Europe made them law. The EU AI Act set the most detailed requirements. Octostar builds to all of them — because investigators deserve AI they can trust, and the people subject to investigations deserve AI that's accountable.
Octostar's compliance programme is based on our interpretation of applicable frameworks as of April 2026, including Regulation (EU) 2024/1689, the NIST AI Risk Management Framework (AI RMF 1.0), the OECD Recommendation on AI, and the Council of Europe Framework Convention on AI. This page does not constitute legal advice. Regulatory frameworks are subject to ongoing implementation guidance from competent authorities.