The Use of AI in Corporate Governance
- 1. How companies use AI in corporate governance
- 1.1. HR and personnel management
- 1.2. Financial control and risk management
- 1.3. Strategic planning and corporate analytics
- 2. Legal implications of delegating management functions to algorithms
- 2.1. Responsibility of management bodies
- 2.2. Obligations regarding provability and transparency of decisions
- 2.3. The risk arising from the absence of centralised AI governance within an organisation
- 3. Transparency and explainability of AI decisions
- 3.1. The importance of transparency in contemporary regulation
- 3.2. Practical elements for ensuring explainability
Artificial intelligence (AI) has become an established part of corporate governance. Companies are integrating AI into decision-making, automation, and strategic analytics. At the same time, AI regulation continues to evolve: instruments such as the EU AI Act, whose requirements are entering into force in stages, place growing emphasis on algorithmic transparency, oversight, and accountability in automated decision-making.
As a result, corporate governance faces a dual task: deriving benefit from the opportunities AI offers while building a system of controls that ensures compliance with legal requirements and sustains trust in algorithmic decisions.
1. How companies use AI in corporate governance
1.1. HR and personnel management
AI is increasingly used to automate the search for and assessment of candidates, analyse employee performance, plan personnel development, and forecast workforce risks. Such systems can accelerate processes and make them more objective, but they also create legal challenges.
Key risks:
- the likelihood of discrimination resulting from improperly trained models;
- insufficient transparency of algorithmic decisions;
- the use of personal data without an appropriate legal basis;
- breach of the employer’s obligations when automating employee assessment.
Some jurisdictions are already imposing requirements regarding the transparency of algorithms in employment relationships and the procedure for notifying individuals about the use of automated decision-making in the field of employment.
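One common, simple check for the discrimination risk noted above is the "four-fifths rule": a group's selection rate should not fall below 80% of the highest group's rate. The sketch below is purely illustrative; the group names and figures are hypothetical, and real bias audits involve far more than this single ratio.

```python
# Illustrative sketch: a "four-fifths rule" check for adverse impact in
# automated candidate screening. Group names and figures are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns rate per group."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True per group if its selection rate is at least `threshold`
    times the highest group's rate (the classic four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
print(four_fifths_check(outcomes))  # group_b's rate (0.25) is below 0.8 * 0.40
```

A failing check does not itself prove discrimination, but it is the kind of documented, repeatable control that transparency requirements point toward.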
1.2. Financial control and risk management
Organisations use AI to analyse transactions, identify anomalies, support management accounting, and assess financial risks. Automation makes it possible to identify problems promptly and act on them, but it requires reliable control procedures.
Legal regulation in the area of high-risk algorithms, including the requirements of the EU AI Act, imposes obligations relating to transparency, technical documentation, logging, and oversight of systems used in areas affecting the financial interests of organisations and clients.
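The anomaly-detection function mentioned above can be as simple as flagging transactions that deviate sharply from the norm. The sketch below uses a basic z-score rule on hypothetical amounts; production financial-risk systems use far richer features and models.

```python
# Illustrative sketch: flagging anomalous transaction amounts with a
# simple z-score rule. Data and threshold are hypothetical.
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indices of amounts more than z_threshold population
    standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

amounts = [120, 110, 130, 125, 118, 5000]
print(flag_anomalies(amounts, z_threshold=2.0))  # flags the 5000 outlier
```

The point for governance is not the statistics but the surrounding controls: thresholds, review of flagged items, and logging of outcomes all need documented procedures.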
1.3. Strategic planning and corporate analytics
AI is used to analyse market data, model business scenarios, and generate strategic recommendations. Such tools help management to take decisions on the basis of deeper data analysis.
However, the automation of strategic functions requires increased attention to corporate oversight and compliance with regulatory requirements. Regulators emphasise the need to incorporate the principles of accountability, transparency, and the ethical use of AI into corporate governance processes.
Accordingly, AI is capable of improving strategic management, but only provided that a control system is built that makes it possible to monitor the correctness and substantiation of algorithmic conclusions.
2. Legal implications of delegating management functions to algorithms
2.1. Responsibility of management bodies
Automation involving AI does not relieve persons involved in the management of a company of the duty to exercise oversight and verify the soundness of the decisions taken. Algorithms are regarded as auxiliary tools, while legal responsibility for decisions remains with the persons who take them. Management must ensure that it understands the operating principles of the systems on which it relies.
Legal scholarship in the field of corporate governance notes the need to integrate AI into corporate procedures in such a way that the principles of good governance and due care on the part of management are observed.
2.2. Obligations regarding provability and transparency of decisions
Regulatory requirements for systems capable of having a significant influence on decisions are steadily becoming more stringent.
The AI Act and reforms of national legislation impose on organisations the obligation to:
- document the functioning of AI systems;
- ensure that decisions can be explained;
- retain logs for subsequent analysis.
This affects HR decisions, financial processes, KYC procedures, and other areas where automation may have significant consequences.
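The documentation and logging duties listed above imply that every automated decision should leave a reviewable record. The sketch below shows a minimal decision-log entry; the field names are assumptions for illustration, not a template taken from the AI Act's text.

```python
# Illustrative sketch: a minimal, structured log entry for an automated
# decision. Field names and the example system are hypothetical.
import json
from datetime import datetime, timezone

def log_decision(system_id, model_version, inputs, output, operator):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system produced the decision
        "model_version": model_version,  # exact version, for reproducibility
        "inputs": inputs,                # data the decision was based on
        "output": output,                # the automated decision itself
        "reviewed_by": operator,         # human accountable for oversight
    }
    return json.dumps(entry, sort_keys=True)

record = log_decision("credit-scoring-01", "v2.3",
                      {"income": 52000}, "approve", "j.doe")
print(record)
```

Retaining such entries in an append-only store gives the organisation the audit trail needed to explain a decision after the fact.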
2.3. The risk arising from the absence of centralised AI governance within an organisation
In practice, a situation often arises in which AI governance functions are distributed among several departments — IT, legal, security, and analytics. This may result in the absence of a unified approach and insufficient oversight.
Research into governance approaches emphasises the need for centralised AI governance, including uniform principles, procedures, and allocation of responsibility.
Accordingly, the optimal solution is to create an internal coordination mechanism or a separate structure responsible for issues of ethics, transparency, compliance, and AI audit.
3. Transparency and explainability of AI decisions
3.1. The importance of transparency in contemporary regulation
The level of trust in algorithms depends directly on their explainability. Regulators pay particular attention to the transparency and documentation of decisions, especially in systems that may affect the rights, obligations, or significant interests of individuals and legal entities.

Whatever the technological complexity involved, transparency is becoming a regulatory obligation as the requirements of the AI Act and national legislation tighten.
3.2. Practical elements for ensuring explainability
In order to ensure an appropriate level of explainability, organisations should:
- describe the logic of how models operate;
- document data sources and methods of data processing;
- prepare internal reports on the operation of systems;
- ensure the accuracy and regular updating of documentation.
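The four elements above can be captured in an internal "model card"-style record. The structure and field names below are assumptions for illustration, not a regulatory template.

```python
# Illustrative sketch: a model-card-style internal documentation record
# covering decision logic, data sources, processing, and review date.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelDocumentation:
    name: str
    decision_logic: str       # plain-language description of how the model decides
    data_sources: list        # where training/input data come from
    processing_methods: list  # how the data are cleaned and transformed
    last_reviewed: str        # documentation must be kept up to date

doc = ModelDocumentation(
    name="candidate-screening-model",
    decision_logic="Ranks applicants by weighted skill and experience scores.",
    data_sources=["internal HR records", "application forms"],
    processing_methods=["anonymisation", "feature normalisation"],
    last_reviewed=str(date.today()),
)
print(asdict(doc)["name"])
```

Keeping such records versioned alongside the model itself makes the "regular updating" duty concrete: each model release triggers a documentation review.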
Accordingly, companies can use AI effectively in corporate governance if, at the same time, they ensure the transparency of algorithms, oversight of their operation, and compliance with regulatory requirements, including the rules of the AI Act aimed at increasing the accountability and explainability of automated decisions.
Authors: Daria Gordey, Artem Handriko.