AI: Security Mechanisms in GlobalSuite

QUESTIONS REGARDING SECURITY MECHANISMS IMPLEMENTED IN GlobalSuite® AI

  1. What are the security mechanisms that GlobalSuite® has implemented in its Artificial Intelligence to protect information and ensure data integrity?

Currently, the system works only with data that users enter manually at the time of the query or that they have previously selected within GlobalSuite. Data use is limited exclusively to the information that clients themselves provide when using the assistants.

For greater clarity, it is important to note that we use Azure OpenAI as a service, with these deployments hosted in the European region. The following information is provided for this service:

Important

The prompts (inputs) and completions (outputs), embeddings, and training data:

  • ARE NOT available to other customers.

  • ARE NOT available to OpenAI.

  • ARE NOT used to improve OpenAI models.

  • ARE NOT used to train, retrain, or improve the base models of Azure OpenAI Service.

  • ARE NOT used to improve any Microsoft or third-party product or service without your permission or instruction.

  • The fine-tuned Azure OpenAI models are exclusively available for your use.

The Azure OpenAI service is operated by Microsoft as an Azure service; Microsoft hosts the OpenAI models in Microsoft's Azure environment, and the service does not interact with any service operated by OpenAI (e.g., ChatGPT or the OpenAI API).

In the case of the Conversational Assistant, Retrieval-Augmented Generation (RAG) techniques are used, based on internal documentation developed by the product team; at no time does this include personal information or client-related data.
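
As an illustration only, the retrieval-augmented flow described above can be pictured as in the sketch below. It is a minimal, hypothetical example against the Azure OpenAI chat completions API: the endpoint, deployment name, and the retrieve_documentation_chunks helper are placeholders, not the actual GlobalSuite® implementation.

    # Minimal RAG sketch (illustrative only; names, endpoint, and deployment are
    # placeholders, not the actual GlobalSuite implementation).
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<eu-region-resource>.openai.azure.com",  # EU-hosted resource
        api_key="<api-key>",
        api_version="2024-02-01",
    )

    def retrieve_documentation_chunks(question: str) -> str:
        # Hypothetical retriever: would return the most relevant passages from the
        # internal product documentation (no client data is indexed or returned).
        return "…relevant excerpts from internal product documentation…"

    def answer_question(question: str) -> str:
        context = retrieve_documentation_chunks(question)
        response = client.chat.completions.create(
            model="<chat-deployment-name>",  # Azure OpenAI deployment name
            messages=[
                {"role": "system",
                 "content": "Answer using only the internal documentation below.\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content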

  2. How exactly does AI work within the platform, and in which processes is it applied?

The Risk and Control Recommender is currently available and is applied to risk analysis. This functionality works with data entered by the client in anonymized form to obtain the applicable risks and controls.

Additionally, GlobalSuite includes a cross-cutting conversational assistant that answers questions about how to use the solution and about regulatory topics. This functionality is fed solely by internal product documentation.

Another assistant, focused on the Document Manager functionality, allows the user to select a file from the Management System and make queries about it, such as requesting a summary of its content, improvement suggestions, or a comparison against a regulation, among others.

The most recent assistant is the Compliance assistant, focused on querying and improving a regulation previously uploaded to GlobalSuite®. It helps automate the identification of which requirements are applicable, detect compliance gaps, and propose improvement actions to raise the level of adequacy.

  3. Where does GlobalSuite® AI obtain the information or data it uses for its operation and analysis?

It is necessary to distinguish between the use cases of the available functionalities:

For the Conversational Assistant, Retrieval-Augmented Generation (RAG) techniques are used based on internal documentation prepared by the product team.

Regarding the Risk and Control Recommender, the context information is manually entered and sent by the user in a controlled manner. The system suggests a list of risks or controls that the user can decide to add to the analysis. It is important to highlight that these risks are incorporated into the Risk Analysis and not into the pre-existing catalogs, which keeps the two ways of proposing risks, and their respective management, separate.

Likewise, the engine for this Risk Recommender does not store or learn from user interactions. Each request made is processed independently. Thus, there is no feedback or learning from the data entered by users, and the responses provided are not influenced by previous queries.
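
A minimal sketch of what this stateless handling implies is shown below. The function and field names are hypothetical and serve only to illustrate that each request carries its own anonymized context and that nothing is persisted or fed back into the model.

    # Illustrative sketch of stateless, per-request processing (hypothetical names;
    # not the actual engine). Each call uses only the data sent with that request.
    def build_prompt(payload: dict) -> str:
        # Only this request's anonymized input is used; no stored history is consulted.
        return f"Suggest applicable risks and controls for: {payload['scope_description']}"

    def recommend_risks(payload: dict, call_model) -> list[str]:
        prompt = build_prompt(payload)
        suggestions = call_model(prompt)  # a single, independent model call
        return suggestions                # shown to the user for approval; not stored, not used for training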

For the Document Manager assistant, the input information is the document selected by the user, together with the questions entered as text. This information is used only for the query made at that moment; it is not stored in the system and cannot be reused in the future.

In the case of the Compliance assistant, the information used is the regulatory assessment on which queries are made. Specifically, the requirements, their applicability, compliance level, justification, associated controls, and non-conformities are obtained. This information is processed and sent when the assistant is accessed and is not used to train it.
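
For illustration, the context sent with a Compliance assistant query could be pictured as a structure along the following lines. The field names and values are hypothetical; they simply mirror the data categories listed above.

    # Hypothetical shape of the context sent with a Compliance assistant query
    # (field names and values are illustrative; they mirror the categories listed above).
    requirement_context = {
        "requirement": "5.1 Information security policy",
        "applicability": True,
        "compliance_level": "Partially compliant",
        "justification": "Policy approved but not reviewed in the last year.",
        "associated_controls": ["Information security policy", "Policy review procedure"],
        "non_conformities": ["Annual policy review not performed"],
    }
    # Assembled when the assistant is accessed, sent with the query, and then
    # discarded; it is not used to train the model.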

  4. Regarding the AI functionality for proposing controls and risks, are there any limitations on the number of queries that can be made or the scenarios that can be evaluated?

At this time, this functionality is limited to a maximum of 60 query generations per company per month.
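
Conceptually, this limit behaves like a monthly counter per company. The sketch below is only an illustration of that rule, with hypothetical names; it is not the actual enforcement code.

    # Illustrative monthly quota check (hypothetical names; not the actual enforcement code).
    MONTHLY_GENERATION_LIMIT = 60  # query generations per company per month

    def can_generate(generations_used_this_month: int) -> bool:
        return generations_used_this_month < MONTHLY_GENERATION_LIMIT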

  5. Regarding the AI functionality for conversational AI assistants, are there any limitations on the number of queries that can be made or the scenarios that can be evaluated?

The service is limited to 1.5M tokens per month (the unit in which the AI engine measures the amount of information it processes). Tokens are consumed by both questions and answers, which amounts to roughly 450 question-and-answer exchanges. This estimate is not exact, as actual consumption depends on the length of the questions and of the answers.

For the Document Manager assistant, the same consumption limit of 1.5M tokens per month applies, which allows the analysis and consultation of approximately 100 to 150 documents of around 20 pages each.

In the case of the Compliance assistant, the monthly limit of 1.5M tokens is computed in the same way and applies to both inputs (questions and context information) and outputs (the assistant's responses).
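
As a rough back-of-the-envelope check, the monthly budget can be translated into per-interaction averages. The figures below simply restate the estimates given in this section; real consumption varies with the length of questions, documents, and answers.

    # Rough arithmetic behind the estimates above (averages only; real consumption
    # varies with the length of questions, documents, and answers).
    MONTHLY_TOKEN_BUDGET = 1_500_000

    avg_tokens_per_exchange = MONTHLY_TOKEN_BUDGET / 450  # ~3,300 tokens per question + answer
    avg_tokens_per_document = MONTHLY_TOKEN_BUDGET / 125  # ~12,000 tokens per ~20-page document (midpoint of 100-150)
    print(round(avg_tokens_per_exchange), round(avg_tokens_per_document))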

  6. Is the user clearly informed about the use of Artificial Intelligence in GlobalSuite®?

In accordance with the principle of transparency, any GlobalSuite® functionality that uses Artificial Intelligence is communicated to the user through a specific AI usage notice.
In addition, such use is regulated in the terms of service, where its scope and nature are clearly and thoroughly established.