Seal integrates AI capabilities to enhance productivity while maintaining strict data protection. Your data is never used to train AI models, and all AI-generated content belongs to you.
No data is sent to any AI model until you actively use an AI feature. AI can be disabled entirely at the organisation level.

At a glance

Model training: Your data is never used to train or improve AI models.
Human review: All AI-proposed changes go through changesets, reviewed and approved by a human.
Permissions: The agent inherits the exact same access as the user interacting with it.
Isolation: Each organisation’s data is completely separated, with no cross-tenant access; conversations are private to each user.
Data residency: You choose the region where AI processing occurs.
Model provider: Choose your preferred vendor, or bring your own API keys for end-to-end control.
Encryption: AES-256 at rest, TLS 1.2+ in transit.
Audit trail: Complete logs for all AI interactions.

How data flows

Only the data relevant to your query is sent to the model: your organisation’s documentation and the entity data you’re working on. The agent cannot access data outside your current context. The agent also never applies changes directly; every edit it proposes goes through a changeset, the same review-and-approve workflow as any human edit in Seal, and nothing is applied without explicit human approval.
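The scoping rule above can be sketched in a few lines. This is a conceptual illustration only; the function name `build_prompt_context` and the document shape are hypothetical, not Seal’s actual API.

```python
# Illustrative sketch of request-scoped context assembly: only documents
# tied to the entity the user is currently working on reach the model.

def build_prompt_context(query, current_entity, org_documents):
    """Select only the documents the agent may see for this request."""
    relevant = [
        doc for doc in org_documents
        if doc["entity_id"] == current_entity  # limited to the entity in scope
    ]
    return {"query": query, "documents": relevant}

context = build_prompt_context(
    "Summarise the latest review",
    current_entity="batch-42",
    org_documents=[
        {"entity_id": "batch-42", "text": "Review notes for batch 42"},
        {"entity_id": "batch-99", "text": "Unrelated record"},  # excluded
    ],
)
```

The point of the sketch is that context is assembled per request from the user’s current scope, rather than the model having standing access to the whole tenant.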

What the AI agent is used for

The AI agent (Neil) assists scientists and quality professionals with day-to-day work inside Seal:
  • Drafting content — writing SOPs, protocols, batch records, and other regulated documents
  • Summarising data — extracting key findings from experiments, reviews, and audit trails
  • Answering questions — navigating your organisation’s documentation and processes
  • Filling in records — populating fields and templates based on context
All outputs are suggestions: every change the agent proposes is submitted as a changeset and takes effect only after explicit human review and approval.
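The approval gate described above can be sketched as follows. This is a conceptual model, not Seal’s implementation; the `Changeset` class and its methods are invented for illustration.

```python
# Conceptual changeset gate: an AI-proposed edit is held in a pending
# state and cannot be applied until a human reviewer approves it.

class Changeset:
    def __init__(self, proposed_edit):
        self.proposed_edit = proposed_edit
        self.approved = False
        self.reviewer = None

    def approve(self, reviewer):
        self.approved = True
        self.reviewer = reviewer

    def apply(self, record):
        if not self.approved:
            raise PermissionError("changeset requires human approval")
        record.update(self.proposed_edit)

record = {"status": "draft"}
cs = Changeset({"status": "reviewed"})
try:
    cs.apply(record)              # blocked: no human approval yet
except PermissionError:
    pass
cs.approve(reviewer="alice")
cs.apply(record)                  # applied only after approval
```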

Frequently asked questions

Is my data used to train AI models?
No. Seal only partners with model providers that contractually guarantee your data is never used for training or model improvement. Your data is processed solely to generate responses.
Where is my data stored and where does AI processing happen?
Seal offers configurable data residency: your organisation selects the region where your data is hosted and where AI processing occurs. You can also choose your preferred model provider. If you want end-to-end control, you can provide your own API keys so that all AI requests go directly through your organisation’s account.
Message history (your prompts and the model’s responses) is retained by the model provider for a maximum of 30 days for safety and abuse monitoring, after which it is permanently deleted. Beyond this window, your data is not stored outside your cloud tenant.
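The bring-your-own-keys option can be pictured as a simple credential-selection rule. The config field `byo_api_key` and the return shape below are hypothetical, sketched only to show the routing idea.

```python
# Illustrative credential selection: a customer-supplied key routes AI
# requests through the customer's own provider account end to end.

def resolve_credentials(org_config):
    """Prefer a customer-supplied key over the platform-managed one."""
    if org_config.get("byo_api_key"):
        return {"key": org_config["byo_api_key"], "account": "customer"}
    return {"key": "platform-managed-key", "account": "platform"}

own = resolve_credentials({"byo_api_key": "sk-customer-123"})
managed = resolve_credentials({})
```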
Where does the AI model run?
The language model runs in the model provider’s infrastructure. Seal’s recommended provider is Google Cloud, where the model is hosted in Google’s data centres. If you bring your own API keys or select an alternative provider, the model runs in that provider’s infrastructure instead.
In all cases, the model is stateless: it does not retain your data between requests. Each query is processed independently.
Can the AI agent change my data on its own?
The agent can read data and suggest content, but all changes it proposes are submitted through changesets, the same review workflow as any other edit in Seal. A human must review and approve every change before it takes effect.
The agent inherits the exact same permissions as the user interacting with it. If you have read-only access to an entity or workspace, the agent has read-only access. It cannot escalate privileges or access data outside the user’s scope.
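Permission inheritance can be sketched as a delegation check: the agent has no permissions of its own and simply consults the requesting user’s. The table and function below are invented for illustration.

```python
# Sketch of permission inheritance: the agent's access check delegates
# entirely to the requesting user's permission level per entity.

USER_PERMISSIONS = {
    "alice": {"sop-1": "read"},        # read-only on this entity
}

def agent_can(user, entity, action):
    level = USER_PERMISSIONS.get(user, {}).get(entity)
    if level is None:
        return False                   # no access outside the user's scope
    return action == "read" or level == "write"

assert agent_can("alice", "sop-1", "read")
assert not agent_can("alice", "sop-1", "write")   # cannot escalate
assert not agent_can("alice", "sop-2", "read")    # outside scope
```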
How is my organisation’s data isolated from other customers?
Complete tenant isolation. Each organisation’s data is entirely separated: the AI agent for one customer can never access another customer’s data, documentation, or conversation history. There is no shared context between organisations.
Within an organisation, each AI conversation is isolated to the individual user who initiated it. Conversations are not visible to other users, and each session operates strictly within that user’s permission boundaries.
Can I restrict or disable AI features?
Yes. You can control AI access at multiple levels:
  • Individual users can choose not to use AI features
  • Specific workspaces can be configured as AI-free zones for sensitive data
  • You can enable only Automations Copilot while disabling data processing
  • You can disable AI entirely across your organisation
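The levels above compose naturally as a precedence check. The ordering shown (organisation-wide disable wins, then workspace AI-free zones, then individual preference) and all field names are assumptions made for this sketch, not a documented Seal setting schema.

```python
# Hypothetical precedence for layered AI controls: an organisation-wide
# disable always wins, then workspace AI-free zones, then user opt-out.

def ai_enabled(org, workspace, user):
    if not org.get("ai_enabled", True):
        return False                     # disabled across the organisation
    if workspace.get("ai_free_zone"):
        return False                     # workspace configured as AI-free
    return user.get("use_ai", True)      # individual opt-out

# All three layers can independently switch AI off.
blocked_by_org = ai_enabled({"ai_enabled": False}, {}, {})
blocked_by_ws = ai_enabled({"ai_enabled": True}, {"ai_free_zone": True}, {})
blocked_by_user = ai_enabled({"ai_enabled": True}, {}, {"use_ai": False})
allowed = ai_enabled({"ai_enabled": True}, {}, {})
```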

Audit and compliance

Audit trails: All AI interactions are logged with user, timestamp, and action.
GDPR: Supported via a Data Processing Addendum with model providers.
Seal certification: SOC 2 Type II, viewable at our Security Portal.
Provider certification: Google Cloud holds SOC 2 Type II and ISO 27001.
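An audit record of the kind described above (user, timestamp, action) might look like the following. The schema and field names are hypothetical, shown only to make the logged fields concrete.

```python
# Illustrative audit record for one AI interaction, carrying the three
# fields named above: user, timestamp, and action.

from datetime import datetime, timezone

def audit_entry(user, action):
    return {
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
    }

entry = audit_entry("alice@example.com", "ai.draft_document")
```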

Further reading

Seal’s recommended AI provider is Google Cloud. Similar data protection agreements are in place with all supported providers. For broader platform security, see Data storage and security, Infrastructure, and Handling confidential data. For questions about AI governance or additional security controls, contact support@seal.run.