A Framework by Secucloud

The SAISF Framework.

A practitioner's model for governing, securing, and assuring the use of artificial intelligence inside the modern enterprise — synthesising NIST AI RMF, ISO/IEC 42001, the EU AI Act, and the OWASP LLM Top 10 into one coherent operating system.

Six Domains · Five Maturity Levels · Four Standards

Most organisations are not failing at AI security because the controls don't exist. They are failing because nobody has stitched them together into something a board, an engineer, and an auditor can all read on the same page.

SAISF is that page. It assumes you already have a cloud security programme, an information security management system, and a growing appetite to deploy AI — and gives you the missing connective tissue between them.

It is opinionated by design. Each domain prescribes the questions to ask, the controls to expect, and the maturity signals to look for. It is not a replacement for the underlying standards; it is a practitioner's lens on top of them.

A complete operating model for enterprise AI security.

01
Govern

Authority & Accountability

Establish who decides, who approves, and who is answerable when AI behaves unexpectedly. Without this layer, every other domain is theatre.

  • AI risk appetite & policy framework
  • Approval workflow for new AI use cases
  • Roles: AI Risk Owner, Model Owner, Data Steward
  • Regulatory horizon scanning & legal mapping
  • Board reporting & KRI definitions
02
Discover

Visibility & Inventory

You cannot secure what you cannot see. Most organisations underestimate their AI footprint by a factor of five — sanctioned tools are the tip; shadow Copilots, browser extensions, and embedded vendor features are the iceberg.

  • Sanctioned & shadow AI tool inventory
  • Data flow mapping into and out of models
  • Third-party AI & model supply chain register
  • Use-case classification by risk tier
  • Continuous discovery via CASB & SaaS audit logs
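Continuous discovery can start as simply as joining SaaS audit logs against the sanctioned inventory. The sketch below is illustrative only, not part of SAISF: the event shape (`destination_host`), the hint substrings, and the domain names are all hypothetical, and a real CASB integration would use a curated category feed rather than string matching. The shape of the check, however, is the same.

```python
# Hypothetical inventory of approved AI endpoints (illustrative domains).
SANCTIONED = {"copilot.enterprise.example", "approved-llm.example"}

# Substrings that hint an audit event touched an AI service; a real
# deployment would rely on the CASB's category feed instead.
AI_HINTS = ("openai", "anthropic", "copilot", "gemini")

def find_shadow_ai(audit_events):
    """Return hosts seen in SaaS audit logs that look like AI services
    but are absent from the sanctioned inventory."""
    shadow = set()
    for event in audit_events:
        host = event["destination_host"]
        if any(hint in host for hint in AI_HINTS) and host not in SANCTIONED:
            shadow.add(host)
    return shadow

events = [
    {"destination_host": "api.openai.com"},
    {"destination_host": "copilot.enterprise.example"},
]
print(find_shadow_ai(events))  # → {'api.openai.com'}
```

The sanctioned Copilot deployment is ignored; the unapproved endpoint surfaces for triage and risk-tier classification.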
03
Protect Data

Boundaries & Provenance

The defining security question of the AI era: where does sensitive data go when an employee, an agent, or a retrieval pipeline touches a model — and can you prove it didn't leak, leak back, or leak across tenants?

  • Data classification adapted for prompts & embeddings
  • RAG pipeline access control & chunk-level authorisation
  • Training & fine-tuning data provenance
  • Cross-tenant isolation in vector stores
  • Output handling, redaction, & DLP
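Chunk-level authorisation is easiest to see in code. The sketch below is one illustrative pattern, not SAISF doctrine: the `Chunk` type, its `labels` field, and the entitlement sets are hypothetical. The principle is the one the domain prescribes: filter retrieved chunks against the caller's entitlements before anything is concatenated into the prompt, so the model never sees what the caller may not.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    source_doc: str
    # Access labels attached at ingestion time, e.g. {"finance", "board"}.
    labels: set = field(default_factory=set)

def authorise_chunks(chunks, caller_entitlements):
    """Keep only chunks whose labels are fully covered by the caller's
    entitlements, *before* retrieval results reach the model prompt."""
    return [c for c in chunks if c.labels <= caller_entitlements]

# Hypothetical retrieval hits from a vector store.
hits = [
    Chunk("Q3 revenue figures ...", "board-pack.pdf", {"finance", "board"}),
    Chunk("Travel policy ...", "handbook.pdf", {"internal"}),
]

allowed = authorise_chunks(hits, caller_entitlements={"internal", "finance"})
# Only the handbook chunk survives: the board-pack chunk requires "board".
```

Enforcing this at retrieval time, rather than trusting the model to withhold content, is what separates chunk-level authorisation from output-side redaction.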
04
Secure

Architecture & Identity

Bring AI workloads under the same architectural rigour as the rest of your cloud estate — without pretending an LLM is just another microservice. Identity, secrets, network egress, and model supply chain need AI-aware controls.

  • Identity model for human, service, & agent principals
  • Prompt-injection-aware tool & function-calling design
  • Network egress controls for inference endpoints
  • Secrets, API keys, & rotation for model providers
  • Model registry & signed artefact controls
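One way to make tool and function calling prompt-injection-aware is to enforce a per-principal allowlist outside the model, so that injected instructions cannot widen an agent's authority. The sketch below is a minimal illustration with hypothetical principal names, tool names, and registry; it is not a prescribed SAISF control implementation.

```python
# Hypothetical allowlist: which tools each agent principal may invoke.
# Enforced *outside* the model, so a poisoned prompt cannot change it.
TOOL_ALLOWLIST = {
    "support-agent": {"search_kb", "create_ticket"},
    "finance-agent": {"search_kb", "get_invoice"},
}

def execute_tool_call(principal: str, tool: str, args: dict, registry: dict):
    """Validate a model-proposed tool call against the principal's
    allowlist before dispatching it to the real implementation."""
    if tool not in TOOL_ALLOWLIST.get(principal, set()):
        raise PermissionError(f"{principal} may not call {tool}")
    return registry[tool](**args)

registry = {"search_kb": lambda query: f"results for {query!r}"}

print(execute_tool_call("support-agent", "search_kb",
                        {"query": "refunds"}, registry))  # → results for 'refunds'
```

The design choice worth noting: authorisation keys off the agent principal's identity, never off anything the model says about itself, which is exactly why agent principals need a first-class identity model.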
05
Detect & Respond

Telemetry & Reaction

AI introduces incident classes your SOC has never seen — silent prompt injection, model exfiltration, deepfake-driven fraud. New playbooks, new telemetry, new tabletop muscle memory.

  • Prompt & response logging strategy
  • Anomaly detection for agent behaviour & tool calls
  • AI-specific incident response playbooks
  • Deepfake & voice-clone reaction protocols
  • Tabletop exercise programme for executives
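A prompt-and-response logging strategy has to balance forensic value against retaining sensitive text in the SIEM. One hedged pattern, sketched below with hypothetical field names, is to log a hash of the prompt rather than the prompt itself: incidents can still be correlated across systems without the raw text ever leaving the inference boundary.

```python
import datetime
import hashlib
import json

def log_interaction(principal, prompt, response, tools_called):
    """Emit a minimal structured telemetry record for one model
    interaction. Hashing the prompt supports correlation without
    retaining sensitive text (a deliberate forensics trade-off)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "principal": principal,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
        "tools_called": tools_called,
    }
    return json.dumps(record)

print(log_interaction("support-agent", "Summarise ticket 4411",
                      "The customer reports ...", ["search_kb"]))
```

Records like this are what anomaly detection for agent behaviour consumes: an agent that suddenly calls tools it has never called before shows up in the `tools_called` field long before anyone reads a transcript.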
06
Assure

Testing & Evidence

Treat models like critical software. Red-team them, version them, monitor them in production, and produce evidence that an auditor — or a regulator — can read without translation.

  • LLM red-teaming & prompt injection testing
  • Bias, robustness, & safety evaluations
  • Continuous compliance evidence collection
  • Model card & system card discipline
  • Audit-ready control narrative library

Five levels per domain.

Every domain is assessed on the same five-step ladder. Most enterprises today sit between Level 1 and Level 2 across most domains — and don't realise it.

01
Ad hoc
Activity exists, but is reactive, undocumented, and dependent on individuals. No consistent outcome.
02
Defined
Policies, owners, and procedures exist on paper. Adoption is partial; gaps are known but not systematically closed.
03
Managed
Controls operate consistently across the estate. Coverage is verifiable. Exceptions are tracked and time-bound.
04
Measured
Outcomes are quantified. Metrics drive investment decisions. Control effectiveness is independently testable.
05
Optimised
The programme adapts continuously to new model classes, new threats, and new regulation. Evidence is the default output.
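To make the ladder operational you need an aggregation rule across the six domains. The sketch below assumes one defensible choice, that a programme is no more mature than its weakest domain, because an attacker or an auditor will find that domain first. SAISF does not mandate this rule, and the scores shown are purely illustrative.

```python
# Hypothetical self-assessment: each domain scored on the 1-5 ladder.
scores = {
    "Govern": 2, "Discover": 1, "Protect Data": 2,
    "Secure": 2, "Detect & Respond": 1, "Assure": 1,
}

def overall_level(scores: dict) -> int:
    """Aggregate by the weakest domain (an assumption, not SAISF
    doctrine): uneven programmes fail at their thinnest point."""
    return min(scores.values())

def weakest_domains(scores: dict) -> list:
    """List the domains holding the programme at its current floor."""
    floor = overall_level(scores)
    return [d for d, s in scores.items() if s == floor]

print(overall_level(scores))    # → 1
print(weakest_domains(scores))  # → ['Discover', 'Detect & Respond', 'Assure']
```

The illustrative profile matches the pattern described above: documented in places, but held at Level 1 overall by visibility, detection, and assurance.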
Use the Framework

The framework is the method. The engagements are how we apply it.