AI USAGE POLICY

Version: 1.0
Effective from: 2026-04-01
Scope: all users of the Regrally AI tool

1. Purpose

1.1. This policy establishes the rules for lawful, secure, and proportionate use of the Regrally platform (https://www.regrally.com/) in audit and compliance engagements.

1.2. Regrally is used solely as an AI-assisted working tool, not as an autonomous decision-maker. Any use involving personal data must remain aligned with confidentiality, data protection, and professional accountability obligations.

2. Scope

2.1. This policy applies to all users who access and use the Regrally platform in the course of audit or compliance work.

2.2. It covers all activities performed in the platform, including:

  • submitting questionnaire responses and uploading evidence;
  • using AI-assisted analysis and reviewing generated outputs;
  • handling comments, draft findings, and exported results;
  • managing workspaces and project settings.

3. Tool Description

3.1. Regrally is an AI-native audit system that allows users to collect information through questionnaires, upload supporting evidence, analyse submitted material, and generate structured audit-ready findings and recommendations for human review. The platform is designed for compliance, audit, legal, and advisory workflows.

3.2. Uploaded content is stored in encrypted form on Amazon Web Services infrastructure located in the European Union. AI-assisted analysis is performed via an API connection to OpenAI; inference occurs in Sweden using GPT-4.1 with retrieval-augmented generation, and each query uses a new session.

4. Approved Use Cases

4.1. The platform may be used for defined professional tasks, including:

  • collecting and structuring audit evidence;
  • reviewing internal policies, procedures, and documentation;
  • identifying apparent gaps or inconsistencies;
  • generating preliminary observations and draft recommendations for expert review.

4.2. Use must remain within the scope of the relevant engagement, mandate, or internal assignment. Personal, experimental, or unrelated use is not permitted.

5. Core Principles for Use

5.1. Human oversight. All AI-generated findings, summaries, recommendations, and classifications must be reviewed by a qualified person before they are relied upon, shared externally, or incorporated into a final deliverable.

5.2. Supportive use only. The platform may support analysis and drafting, but must not be used as the sole basis for a conclusion that materially affects an identifiable individual.

5.3. Need-to-use. Use of the platform must be limited to users with a legitimate work need and to content that is necessary for the specific project or task.

5.4. Accountability. Users remain accountable for the quality, legality, and professional appropriateness of all work performed with assistance from the platform.

6. Data Input Rules

6.1. Personal data. Users must avoid uploading personal data where this is not necessary for the engagement. Before uploading any content containing personal data, users must consider whether the same purpose can be achieved without it, by redacting personal identifiers, or by using anonymised or pseudonymised material.
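As an illustration of the redaction step described above, the following sketch replaces common personal identifiers with placeholder tokens before material is uploaded. This is not part of the Regrally platform; the patterns shown are assumptions for illustration and are deliberately minimal, not an exhaustive redaction solution.

```python
import re

# Illustrative patterns only; real engagements need broader coverage
# (names, addresses, national ID numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched personal identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact the auditee at jane.doe@example.com or +44 20 7946 0958."))
# → Contact the auditee at [EMAIL] or [PHONE].
```

A sketch like this addresses only direct identifiers; where re-identification risk remains, dedicated anonymisation or pseudonymisation tooling should be used instead.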

6.2. Data minimisation. Only documents, excerpts, answers, and evidence that are relevant to the defined engagement may be uploaded. The scope and volume of uploaded content must be limited to what is strictly necessary.

6.3. Restricted data. Special category personal data and personal data relating to criminal convictions or offences must not be uploaded unless their use is strictly necessary, the legal basis has been documented separately, and appropriate approval has been obtained.

6.4. Children's and vulnerable persons' data. Data concerning children or other vulnerable persons must not be uploaded unless clearly necessary, specifically assessed, and appropriately safeguarded.

6.5. Confidential and privileged material. Users must not upload material in a manner that would breach legal professional privilege, statutory secrecy, confidentiality, or applicable information-classification rules.

7. AI Output Handling

7.1. Verification. Users must verify that AI outputs are supported by the source material and correct any inaccuracy, overstatement, missing context, or unsupported recommendation.

7.2. No hidden automation. No purely automated decision-making may be introduced through workflow design, scoring logic, or operational practice. Any substantive judgement must remain under human control.

7.3. Appropriate reliance. Outputs may be used as draft material or analytical support, but not as a substitute for legal, compliance, audit, or managerial judgement.

8. Security and Access

8.1. Authorised access only. The platform may be accessed only through approved accounts and authorised devices and networks, unless an exception is formally approved.

8.2. Credentials and permissions. Users must keep credentials confidential, use strong authentication methods where required, and must not share access or circumvent role-based permissions.

9. Retention, Export, and Deletion

9.1. Retention principle. Data and outputs must not be retained in the platform longer than necessary for the engagement or applicable retention obligations.

9.2. Deletion. Where content is no longer required, it must be deleted using the platform's deletion functionality.

9.3. Uploaded content may be exported and/or deleted on instruction. The provider deletes uploaded content within 30 calendar days after the end of the contractual relationship; backups are deleted according to the provider's internal backup lifecycle.

10. Data Storage

10.1. Uploaded content is stored securely within the EU/EEA. AI analysis is performed in Sweden (OpenAI), and data is not transferred outside the EU/EEA.

11. Policy Review

11.1. This policy will be reviewed following any material change to the service and, in any event, at least every two years.