Published by Kerry Farrea on 12 February 2025

Building a Secure AI Future with Subrosa - A Practical Guide

If you are reading this, you may be in the process of implementing AI systems in your organization. Perhaps you are concerned about the security implications of Generative AI (GAI), or you might be looking for guidance on making your AI deployments both secure and compliant. Whatever your situation, implementing an AI risk management framework should be a priority for any organization working with AI technologies.

GAI presents significant security risks, including data leaks and compliance violations. Studies show that 71% of employees use shadow AI [1], while 77% of security leaders identify potential data leakage as a major concern [2]. Without proper controls, organizations risk breaches and reputational damage, and GAI's evolving risks make mitigation challenging. This underscores the need for a structured AI risk management framework.

NIST AI Risk Management Framework (NIST AI 600-1)

The NIST AI Risk Management Framework offers structured guidance for incorporating trustworthiness into AI systems. It consists of four functions:

  1. Govern – Establishes policies, roles, and accountability for AI risk management.

  2. Map – Identifies AI system dependencies, contexts, and potential risks.

  3. Measure – Assesses AI system risks through evaluation metrics.

  4. Manage – Implements mitigation strategies and response measures.

Subrosa’s Alignment with the NIST AI Framework

Subrosa addresses GAI risks by securing sensitive data, enforcing governance policies, and ensuring continuous monitoring and access control. It maps to high-priority controls outlined in the NIST AI 600-1 framework, supporting compliance, transparency, and security across AI deployments.

| Function | Subrosa Alignment |
| --- | --- |
| Govern | Subrosa anonymizes sensitive data before AI processing, ensuring compliance and privacy while maintaining AI usability; real-time monitoring detects and secures sensitive data automatically (see sketch below). |
| Map | Subrosa provides AI visibility by tracking and mapping interactions, identifying accessed AI tools, processed data, and information-sharing risks. |
| Measure | Subrosa monitors AI interactions in real time to detect unauthorized use and data exposure risks before they escalate. |
| Manage | Subrosa implements granular access controls, tracks interactions at user and device levels, and ensures rapid incident response through automated workflows. |

Subrosa's security controls cover the Govern, Map, Measure, and Manage functions of the NIST AI RMF as profiled for generative AI in NIST AI 600-1. Together they enable governance, monitoring, and protection of AI system use while addressing security risks around data protection, privacy, and compliance. The detailed mapping that follows demonstrates how Subrosa's capabilities align with the framework's requirements, enabling organizations to implement appropriate controls for secure and compliant AI adoption.
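As an illustration of the pattern behind the Govern row above (redacting sensitive values before a prompt leaves the organization), here is a minimal sketch; the regex patterns, function names, and placeholder scheme are illustrative assumptions, not Subrosa's actual implementation:

```python
import re

# Illustrative only: a minimal pre-processing redaction pass. The patterns
# and placeholder scheme are assumptions for demonstration.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected sensitive values with placeholders and return the
    mapping, which stays inside the organization's boundary."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        # dict.fromkeys() deduplicates repeated matches while keeping order
        for i, match in enumerate(dict.fromkeys(pattern.findall(prompt))):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

if __name__ == "__main__":
    safe_prompt, pii_map = anonymize(
        "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
    )
    print(safe_prompt)  # placeholders instead of raw PII
    print(pii_map)      # mapping retained locally for re-identification
```

A real pipeline would also handle names, addresses, and free-form identifiers, typically with trained entity recognizers rather than regexes alone.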

[Figure: NIST AI framework with Subrosa]

Governance

| Subrosa Feature | Framework | Control | Summary | NIST CSF |
| --- | --- | --- | --- | --- |
| Data transparency and visibility ensures accountability in AI data usage and reduces IP risks | NIST AI RMF GV-1.2-001; AI 600-1 | Establishes transparency policies for tracking data origins and history | Balances proprietary data protection with digital content integrity | Identify, Protect |
| Establishes clear acceptable-use policies and monitoring | NIST AI RMF GV-1.4-002; AI 600-1 | Mitigates misuse and illegal applications of GAI | Establishes transparent acceptable-use policies for GAI that address illegal uses or applications of GAI | Identify, Protect |
| Enhanced AI incident response strengthens mitigation of AI-related data leaks | NIST AI RMF GV-1.5-002; AI 600-1 | Establishes policies for incident reviews and adaptive security measures | Establishes organizational policies and procedures for after-action reviews of GAI incident response and incident disclosures to identify gaps, and updates those processes as required | Respond, Recover |
| Prevents hidden risks from untracked AI models, including shadow AI | NIST AI RMF GV-1.6-002; AI 600-1 | Establishes exemption criteria for AI components embedded in external software | Defines any inventory exemptions in organizational policies for GAI systems embedded into application software | Identify, Protect |
| Reduces misuse of AI-driven interactions with outbound data policies | NIST AI RMF GV-3.2-003; AI 600-1 | Enables strict policies on chatbot interactions and AI decision-making tasks | Defines acceptable-use policies for GAI interfaces, modalities, and human-AI configurations (e.g., chatbots and decision-making tasks), including criteria for the kinds of queries GAI applications should refuse to respond to (see sketch below) | Identify, Protect |
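The GV-3.2-003 row above calls for criteria defining the kinds of queries a GAI application should refuse. As a minimal sketch of such a pre-flight gate (the policy names and keywords are hypothetical examples, not Subrosa's actual rules):

```python
# Illustrative acceptable-use gate in the spirit of GV-3.2-003. The policy
# categories and keywords below are hypothetical, not Subrosa's rules.
DENY_POLICIES = {
    "credentials": ("password", "api key", "private key"),
    "bulk_customer_data": ("export all customers", "full customer list"),
}

def check_query(query: str) -> tuple[bool, str | None]:
    """Return (allowed, violated_policy) for an inbound chatbot query,
    evaluated before any GAI call is made."""
    lowered = query.lower()
    for policy, keywords in DENY_POLICIES.items():
        if any(keyword in lowered for keyword in keywords):
            return False, policy
    return True, None

allowed, policy = check_query("Please export all customers with their emails")
if not allowed:
    print(f"Refused: query matches the '{policy}' acceptable-use policy")
```

In practice such a gate would sit in front of every GAI call, with policies managed centrally rather than hard-coded.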

Map

| Subrosa Feature | Framework | Control | Summary | NIST CSF |
| --- | --- | --- | --- | --- |
| Test and evaluation of data flows to prevent unauthorized data usage and privacy violations | NIST AI RMF MP-2.1-002; AI 600-1 | Ensures compliance with data privacy, intellectual property, and security regulations | Institutes test and evaluation for data and content flows within the GAI system, including original data sources, data transformations, and decision-making criteria | Protect, Detect |
| Reduces risks from unreliable or tampered upstream data sources, with partial validation of data submitters (not yet applicable on the builder side) | NIST AI RMF MP-2.2-001; AI 600-1 | Provides traceability and ensures data integrity for AI-generated content by validating who submitted the data | Identifies and documents how the system relies on upstream data sources, including for content provenance, and whether it serves as an upstream dependency for other systems | Identify, Detect |
| Prevents unintended data leaks and content manipulation risks | NIST AI RMF MP-2.2-002; AI 600-1 | Ensures that external dependencies do not compromise information integrity | Observes and analyzes how the GAI system interacts with external networks, and identifies any potential negative externalities, particularly where content provenance might be compromised | Protect, Detect |
| Provides human-GAI configuration oversight to identify issues in human-AI interactions | NIST AI RMF MP-3.4-005; AI 600-1 | Implements systems to track and refine human-GAI outcomes | Establishes systems to continually monitor and track the outcomes of human-GAI configurations for future refinement and improvement | Detect, Respond |
| Prevents exposure of personally identifiable information (PII) and sensitive data with AI-generated content privacy monitoring | NIST AI RMF MP-4.1-001; AI 600-1 | Ensures AI-generated content adheres to privacy standards and compliance requirements | Conducts periodic monitoring of AI-generated content for privacy risks and addresses any instances of PII or sensitive data exposure | Detect, Protect |
| Creates a unified governance framework for AI systems to align AI security with enterprise governance | NIST AI RMF MP-4.1-003; AI 600-1 | Provides GAI governance integration | Connects new GAI policies, procedures, and processes to existing model, data, software development, and IT governance, and to legal, compliance, and risk management activities | Identify, Protect |
| Detects PII and sensitive data in AI output to prevent leakage of confidential data | NIST AI RMF MP-4.1-009; AI 600-1 | Uses automated tools to scan AI-generated text, images, and multimedia for PII | Leverages approaches to detect the presence of PII or sensitive data in generated output text, image, video, or audio (see sketch below) | Detect, Protect |

Measure

| Subrosa Feature | Framework | Control | Summary | NIST CSF |
| --- | --- | --- | --- | --- |
| Anonymizes data and removes personally identifiable information (PII) to protect privacy in AI content tracking | NIST AI RMF MS-2.2-002; AI 600-1 | Ensures content provenance data management with documentation | Documents how content provenance data is tracked and how it interacts with privacy and security, including anonymizing data to protect the privacy of human subjects, leveraging privacy output filters, and removing any PII to prevent potential harm or misuse (see sketch below) | Identify, Protect |
| Provides privacy-enhancing techniques for AI content, implementing anonymization and differential privacy | NIST AI RMF MS-2.2-004; AI 600-1 | Reduces risks of AI content attribution to individuals | Uses privacy-enhancing technologies to prevent AI-generated content from linking back to individuals | Identify, Protect |
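MS-2.2-002 above asks that content provenance be tracked without exposing individuals. One common privacy-enhancing technique for this is keyed pseudonymization of identifiers in provenance records. A minimal sketch, assuming a secret key held outside the log store (the key handling and record shape are illustrative assumptions):

```python
import hashlib
import hmac

# Illustrative keyed pseudonymization for provenance records, in the spirit
# of MS-2.2-002. A real deployment would keep the key in a secrets manager
# and rotate it; this constant is for demonstration only.
PROVENANCE_KEY = b"example-secret-rotate-me"

def pseudonymize(user_id: str) -> str:
    """Deterministic keyed hash: the same user always maps to the same
    token, but the mapping cannot be reversed without PROVENANCE_KEY."""
    digest = hmac.new(PROVENANCE_KEY, user_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def provenance_record(user_id: str, tool: str, action: str) -> dict[str, str]:
    """Build a provenance entry that is auditable but not directly
    linkable to an individual."""
    return {"user": pseudonymize(user_id), "tool": tool, "action": action}

print(provenance_record("jane.doe@example.com", "chatgpt", "prompt_submitted"))
```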

Manage

| Subrosa Feature | Framework | Control | Summary | NIST CSF |
| --- | --- | --- | --- | --- |
| Provides responsible synthetic data use to prevent outbound data privacy violations | NIST AI RMF MG-2.2-009; AI 600-1 | Uses privacy-enhancing techniques and synthetic data that mirrors real-world statistical properties without revealing PII | Reduces exposure to personal data while maintaining dataset diversity | Protect, Detect |
| Monitors post-deployment risks by tracking outgoing data to AI models, identifying cyber threats and confabulation risks | NIST AI RMF MG-4.1-002; AI 600-1 | Continuous monitoring processes for AI systems post-deployment | Establishes, maintains, and evaluates the effectiveness of organizational processes and procedures for post-deployment monitoring of GAI systems, particularly for potential confabulation, CBRN, or cyber risks (see sketch below) | Detect, Respond |
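MG-4.1-002 above requires continuous post-deployment monitoring of GAI use. A rough sketch of the pattern, in which outbound requests are logged and unsanctioned endpoints or flagged payloads raise alerts (the endpoint list, flag terms, and logger setup are illustrative assumptions, not Subrosa's actual telemetry):

```python
import logging
from datetime import datetime, timezone

# Illustrative post-deployment monitor in the spirit of MG-4.1-002.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ai-monitor")

SANCTIONED_ENDPOINTS = {"api.openai.com", "api.anthropic.com"}
FLAG_TERMS = ("confidential", "internal only", "do not distribute")

def record_outbound(host: str, payload: str) -> None:
    """Log every outbound GAI request; warn on unsanctioned endpoints
    (shadow AI) and on payloads containing flagged terms."""
    stamp = datetime.now(timezone.utc).isoformat()
    if host not in SANCTIONED_ENDPOINTS:
        log.warning("%s unsanctioned (shadow AI) endpoint: %s", stamp, host)
    if any(term in payload.lower() for term in FLAG_TERMS):
        log.warning("%s flagged content in request to %s", stamp, host)
    else:
        log.info("%s outbound GAI request to %s", stamp, host)

record_outbound("api.unknown-llm.example", "Q3 figures are CONFIDENTIAL, internal only")
```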

Footnotes

  1. CIO.com, 2024: https://www.cio.com/article/648969/shadow-ai-will-be-much-worse-than-shadow-it.html

  2. State of Security 2024: The Race to Harness AI, Splunk: https://www.splunk.com/en_us/campaigns/state-of-security.html

Protect Your Data Today

Subrosa uncovers AI usage risks across your organization and protects your business from shadow AI and data leaks.