No data is sent to any AI model until you actively use an AI feature. AI can be disabled entirely at the organisation level.
At a glance
| Topic | Commitment |
| --- | --- |
| Model training | Your data is never used to train or improve AI models |
| Human review | All AI-proposed changes go through changesets, reviewed and approved by a human |
| Permissions | The agent inherits the exact same access as the user interacting with it |
| Isolation | Each organisation’s data is completely separated — no cross-tenant access. Conversations are private to each user. |
| Data residency | You choose the region where AI processing occurs |
| Model provider | Choose your preferred vendor, or bring your own API keys for end-to-end control |
| Encryption | AES-256 at rest, TLS 1.2+ in transit |
| Audit trail | Complete logs for all AI interactions |
How data flows
The AI agent never applies changes directly. Every edit it proposes goes through a changeset — the same review-and-approve workflow as any human edit in Seal. Nothing is applied without explicit human approval.

Only the data relevant to your query is sent to the model — your organisation’s documentation and the entity data you’re working on. The agent cannot access data outside your current context.

What the AI agent is used for
The AI agent (Neil) assists scientists and quality professionals with day-to-day work inside Seal:

- Drafting content — writing SOPs, protocols, batch records, and other regulated documents
- Summarising data — extracting key findings from experiments, reviews, and audit trails
- Answering questions — navigating your organisation’s documentation and processes
- Filling in records — populating fields and templates based on context
Frequently asked questions
Is our data used to train AI models?
No. Seal only partners with model providers that contractually guarantee your data is never used for training or model improvement. Your data is processed solely to generate responses.
Where does our data reside?
Seal offers configurable data residency — your organisation selects the region where your data is hosted and where AI processing occurs.

You can also choose your preferred model provider. If you want end-to-end control, you can provide your own API keys so that all AI requests go directly through your organisation’s account.

Message history (your prompts and the model’s responses) is retained by the model provider for a maximum of 30 days for safety and abuse monitoring, after which it is permanently deleted. Beyond this window, your data is not stored outside your cloud tenant.
Where does the model reside?
The language model runs in the model provider’s infrastructure. Seal’s recommended provider is Google Cloud, where the model is hosted in Google’s data centres. If you bring your own API keys or select an alternative provider, the model runs in that provider’s infrastructure instead.

In all cases, the model is stateless — it does not retain your data between requests. Each query is processed independently.
What can the AI agent do?
The agent can read data and suggest content, but all changes it proposes are submitted through changesets — the same review workflow as any other edit in Seal. A human must review and approve every change before it takes effect.

The agent inherits the exact same permissions as the user interacting with it. If you have read-only access to an entity or workspace, the agent has read-only access. It cannot escalate privileges or access data outside the user’s scope.
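The permission-inheritance and review flow described above can be sketched as follows. This is a minimal illustration only — the types and function names (`User`, `submit_changeset`, `approve`) are hypothetical and do not reflect Seal’s actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    id: str
    can_write: bool  # the user's own permission on the entity

def submit_changeset(user: User, proposed_edit: str) -> dict:
    """The agent proposes edits only as pending changesets, and only
    with the permissions of the user driving the session."""
    if not user.can_write:
        # Read-only user means read-only agent: no privilege escalation.
        raise PermissionError("agent inherits read-only access; cannot propose edits")
    return {"author": user.id, "edit": proposed_edit, "status": "pending_review"}

def approve(changeset: dict, reviewer: User) -> dict:
    # Nothing takes effect until a human reviewer approves the changeset.
    return {**changeset, "status": "approved", "reviewer": reviewer.id}
```

The key property is that a changeset only ever moves from `pending_review` to `approved` through the explicit human `approve` step.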
What isolation exists across customers and sites?
Complete tenant isolation. Each organisation’s data is entirely separated — the AI agent for one customer can never access another customer’s data, documentation, or conversation history. There is no shared context between organisations.

Within an organisation, each AI conversation is isolated to the individual user who initiated it. Conversations are not visible to other users, and each session operates strictly within that user’s permission boundaries.
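The two layers of scoping described above — organisation-level for data, user-level for conversations — can be illustrated with a small sketch. The in-memory store and field names here are assumptions for illustration, not Seal’s data model:

```python
# Hypothetical stand-in for a multi-tenant store.
DOCS = [
    {"org": "org-a", "title": "SOP-001"},
    {"org": "org-b", "title": "SOP-XYZ"},
]

CONVERSATIONS = [
    {"org": "org-a", "user": "alice", "messages": ["draft the SOP"]},
    {"org": "org-a", "user": "bob", "messages": ["summarise the audit"]},
]

def docs_for(org_id: str) -> list:
    # Every document read is filtered by the caller's organisation;
    # there is no code path that returns another tenant's rows.
    return [d for d in DOCS if d["org"] == org_id]

def conversations_for(org_id: str, user_id: str) -> list:
    # Conversations are additionally scoped to the individual user,
    # so colleagues in the same organisation cannot see each other's sessions.
    return [c for c in CONVERSATIONS if c["org"] == org_id and c["user"] == user_id]
```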
Can we control or disable AI features?
Yes. You can control AI access at multiple levels:
- Individual users can choose not to use AI features
- Specific workspaces can be configured as AI-free zones for sensitive data
- You can enable Automations Copilot on its own while keeping AI data processing disabled
- You can disable AI entirely across your organisation
Audit and compliance
| Control | Detail |
| --- | --- |
| Audit trails | All AI interactions are logged with user, timestamp, and action |
| GDPR | Supported via a Data Processing Addendum with each model provider |
| Seal certification | SOC 2 Type II — viewable at our Security Portal |
| Provider certification | Google Cloud: SOC 2 Type II, ISO 27001 |
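The audit-trail row above implies a minimum record shape of user, timestamp, and action. A sketch of such a record, with field names that are assumptions for illustration rather than Seal’s actual log schema:

```python
from datetime import datetime, timezone

def audit_record(user_id: str, action: str) -> dict:
    # One entry per AI interaction: who acted, when (UTC), and what happened.
    return {
        "user": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
    }
```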