Skip to main content

Co-pilot: privacy and security

Lucas Domeij avatar
Written by Lucas Domeij
Updated this week

We have summarised the frequently asked questions about using the Aboard Co-pilot feature.


Co-pilot is a set of AI-powered features in Aboard that helps you with your daily HR tasks. Co-pilot can answer questions about your organization, help you understand workforce data, and assist with common HR queries.

Co-pilot is provided as an Add-on feature, meaning that only Company Admins can activate it.

This ensures an active decision-making process when beginning to use the feature, and allows you to disable it at any time. When activated, admins can decide which Co-pilot capabilities should be available. Please see our Co-pilot overview for the full list of available features.

Now, let's provide you with answers to the questions that brought you to this article.


If we use Aboard Co-pilot, will our personal data be transferred outside Europe?

No.

If you use Aboard Co-pilot, all personal data processing stays within Europe (the European Economic Area (EEA)+ Switzerland).

Aboard Co-pilot processes and stores all data in the EU (Ireland). When you use Co-pilot features, OpenAI processes your data in Europe (EEA + Switzerland) through OpenAI Ireland Limited.

This means that your employee data never leaves Europe when using Aboard Co-pilot.


What steps is Aboard taking to ensure that Co-pilot complies with the requirements of the EU AI Act?

The EU AI Act takes effect in different phases. Aboard Co-pilot is compliant with the requirements of the EU AI Act that are already in force, such as the prohibition against AI with unacceptable risk.

Based on the data input by the Customer into Aboard, Aboard Co-pilot features are designed as assistive tools. They analyze your information and provide suggestions only. Aboard Co-pilot does not make automated decisions about employment, performance, or other HR matters. No automated decisions with legal implications are made when using the AI. Human oversight is required, and you, the user, will remain in control of all decisions.

Aboard develops and provides these AI features in line with our Responsible AI Principles


How does Aboard protect the information shared when using Co-pilot?

Security standards

We have reviewed OpenAI's data protection and data security measures. OpenAI operates according to a structured security program, their security program and controls are audited by an external party according to the SOC 2 standard. See https://trust.openai.com/

We have chosen to only use OpenAI's Enterprise API (meaning the public version of ChatGPT is not being used) to ensure that OpenAI processes all the data they receive from us as a data processor, and does not use it for any of their own purposes (e.g., training their AI models).

We have entered into an agreement with OpenAI regarding our use of their service, including a Data Processing Agreement (DPA) that meets the GDPR's requirements. Please see the DPA here.


Encryption

All communication between Aboard and OpenAI is encrypted with TLS v1.2, and all data is encrypted at rest with AES-256.


Retention

We have implemented Zero Data Retention (ZDR), which means that both the API input and output are not logged by OpenAI (for abuse monitoring purposes). We have ensured that the API input and output for our Aboard Co-pilot features are not retained in the application state by OpenAI.

In practice, this means:

  • Your data is not stored by OpenAI after processing

  • Your data is not used to train AI models

  • API input and output are not logged by OpenAI

  • No data is retained after the request completes

The results received by Aboard from OpenAI will be stored in the Aboard service as part of your conversation history (which is fully controlled by you as the customer). You can control how long this data is retained via your company’s data retention settings.


Architecture

The OpenAI API is stateless, meaning no data or context is retained between requests, which ensures that responses are generated based solely on the current input without influence from prior interactions.

In practice, this means each prompt/call to OpenAI is made in separate HTTP requests. A prompt/call to OpenAI from Aboard will only contain information associated with one customer. OpenAI API does not share any information or state between different requests/calls. I.e. customer data is isolated on request/call level. Each interaction or request made to OpenAI does not retain any memory of previous interactions.


Multi-layer security (defense in depth)

Aboard Co-pilot uses a defense-in-depth approach with multiple security layers:

Layer

Protection

Authentication

Users must be logged in with a valid account

Authorization

Policy-based access control ensures users only access permitted features

Feature flags

Features can be enabled/disabled at the company level

Input guardrails

AI-powered screening blocks malicious or inappropriate requests

Data scoping

All data is automatically filtered to the current company

Read-only access

Co-pilot cannot create, update, or delete any records

Query limits

Timeouts and row limits prevent abuse


Optional feature

Last but very important - Aboard Co-pilot features are offered as opt-in features, meaning that you (as a customer) need to take active actions to use the features and can at any time disable them, all to ensure that no user "accidentally" starts using these features.


If we use Aboard Co-pilot, will our data be used to train other companies' AI?

No.

OpenAI acts as Aboard’s sub-processor and is contractually required to use your data only to provide the Co-pilot service to you — not for their own purposes, such as training their AI models.

Aboard does not use your data to train AI models. Data from one customer is never used for or applied to another customer’s experience.

If you choose to submit feedback or bug reports about Co-pilot, we may use that input to correct and improve the service. Even in those cases, we do not use your data or feedback to train the AI model or technology.


How does Aboard ensure Aboard Co-pilot's results are relevant and models are up-to-date?

For Aboard Co-pilot, we use OpenAI's pre-trained GPT models through their API(For more information about the API see here ).

We don't train the models ourselves - we create prompts to generate the content needed for the Aboard Co-pilot features.

For this reason, it is not Aboard but OpenAI who actually manages the AI models, and trains and updates them. OpenAI has signed the EU Commission's General-Purpose AI Code of Practice, enabling companies to comply with the AI Act legal obligations on safety, transparency, and copyright of general-purpose AI models. To access information from Open AI about their development practices see here and here.

We have created responsible AI principles that we adhere to when providing our Co-pilot feature. You can read more about them here.


How does Aboard ensure Co-pilot doesn't deliver false information?

We formulate our prompts in a way that minimizes the risk of Aboard Co-pilot to deliver incorrect information. The system is instructed to:

  • Only use data from your company's Aboard account

  • Present factual information based on actual records

  • Avoid speculation or assumptions

  • Clearly indicate when information is not available

It's also worth noting that issues with false information will likely decrease over time, with new and updated AI models. As the Large Language Models (LLMs) improve in general, they become better at identifying and reducing false information.

However, as it's not possible to guarantee that the information created in Aboard Co-pilot is always correct and complete, we strongly encourage our customers and their users to verify the accuracy of the feature's output. This, and other legal and quality assurance aspects, is highlighted when a customer activates Aboard Co-pilot in the service.


Your data stays within your company

The Aboard Co-pilot can only access data belonging to your company. It cannot see or query data from other companies using Aboard.

  • All queries are automatically filtered by your company

  • The company filter is applied server-side and cannot be bypassed

  • Cross-company data access is technically impossible


What data can Aboard Co-pilot access?

Co-pilot can access aggregated and non-sensitive employee data within your company:

Data type

Examples

Headcount & demographics

Employee counts, gender distribution, age ranges

Organization structure

Departments, locations, teams, reporting lines

Employment info

Start dates, tenure, employment type

Time off

Absence trends, leave balances

Performance

Review completion rates, average scores

Engagement

eNPS scores (anonymized)

Salary analytics

Aggregated salary data by department, location, etc.


What data is protected from Aboard Co-pilot?

Sensitive and personal data is excluded from all queries:

Protected data

Why it's protected

National ID / SSN

Personally identifiable information

Passport details

Personally identifiable information

Individual salaries

Confidential compensation data (aggregated analytics are available)

Termination notes

Private HR information

Medical information

Health data privacy

Survey comments

Anonymity preservation

eNPS respondent identity

Anonymity preservation


Anonymization

Some data is anonymized to protect employee privacy:

  • eNPS responses: Individual responses cannot be linked to employees

  • Survey responses: Comments and respondent identity are hidden

  • Small groups: Data is not shown when groups are too small to ensure anonymity (minimum 3 responses for breakdowns)


Read-only access

Co-pilot can only read data. It cannot:

  • Create, update, or delete any records

  • Modify employee information

  • Change system settings

  • Make any changes to your Aboard account


What personal data will OpenAI be processing if we use Aboard Co-pilot?

OpenAI will be processing different types of personal data depending on which Co-pilot feature you decide to use.

Please be aware that Aboard Co-pilot, like all of our services, is under continuous development, and the data points shared are likely to change over time. However, please see below table to get an overview of the data shared per feature:

Co-pilot feature

Data processed when enabled

Co-pilot (Admin chat)

User prompts, messages, and attachments; HR information limited by permissions (name, role, department, work location, absence status); company policy content; identifiers (name, email) if included in input

HR Assistant (Portal)

Same as Co-pilot, scoped to the employee's own permissions

Co-pilot Documents

Content from documents you provide (for summarization); prompts and identifiers you include

Co-pilot Expenses

Receipt/document content to populate expense fields (merchant, amount, date, etc.); prompts and identifiers you include

Co-pilot Performance reviews

Performance review content you provide (for summarization); prompts and identifiers you include

Analytics Co-pilot

HR information limited by permissions for analytics queries; prompts and identifiers you include


Can an employee choose whether their personal data will be used in Aboard Co-pilot?

For many of the Co-pilot features, employee personal data is not processed when the feature is used. In the features where employee personal data is used, the following applies:

  • HR Assistant: Shows information the user is already permitted to view. Employees can't opt out of being included in responses about team composition or out-of-office status while being active in the account.

  • Analytics Co-pilot: Uses aggregated and anonymized data. Individual employees cannot be identified in most analytics queries.

  • AI Expense Parsing: Employees choose to upload their own receipts. They can submit expenses manually without using AI parsing.


Access control

The access to Aboard Co-pilot features are role based and only available to authorized users:

Feature

Who can access

HR Assistant (Portal)

Employees with portal access

Analytics Co-pilot

Admins only

AI Expense Parsing

Employees with expense submission permissions


Query limits and safeguards

To prevent abuse and ensure system stability:

  • Queries time out after 30 seconds

  • Results are limited to prevent excessive data retrieval

  • Only read operations are allowed

  • Input length is limited to 500 characters per message


Input guardrails

User messages are screened before being processed, to detect and block:

  • Requests for data the user is not authorized to access

  • Requests for sensitive personal data (SSN, passport numbers, individual salaries)

  • Attempts to modify or delete data

  • Prompt injection or manipulation attempts

  • Requests unrelated to HR analytics

The guardrails are permissive by default — they allow legitimate HR questions and only block clear threats or inappropriate requests.


Do you still have questions?

Please reach out to [email protected] or your dedicated Customer Success Manager.


Related articles


Last updated: February 2026

Did this answer your question?