Artificial Intelligence (AI) is now embedded in day-to-day work across organisations. Companies increasingly use generative AI platforms to assist with policy drafting, correspondence, research and the preparation of internal documentation.
While these tools can enhance efficiency, recent developments from the Workplace Relations Commission (WRC) signal a clear warning: AI is not a substitute for legal understanding, and misuse can have significant consequences in employment disputes.
In October 2025, the WRC issued formal guidance on the use of AI in preparing documents for hearings. Combined with the recent case of Oliveira v Ryanair, this guidance emphasises the risks of relying on AI-generated content without proper verification. For employers and HR teams, the message is unambiguous: AI is a tool, not an adviser, and organisations must maintain full accountability for the accuracy of the material they produce.
We highlight key risks and implications for employers and organisations.
1. AI Tools Are Not Legal Advisers
Generative AI tools can support teams by helping draft text, outline processes or clarify general concepts. However, they are not built on, nor do they guarantee access to, the most up-to-date Irish employment and equality law.
The WRC guidance stresses that users remain fully responsible for any text they submit, regardless of whether it was generated by AI. If the information is incomplete, inaccurate, misleading, or entirely fictitious, the submitting party, not the AI provider, bears the consequences.
For employers and HR teams, this means that any AI-assisted drafting must be carefully reviewed, fact-checked, and validated against Irish legislation, Codes of Practice, and organisational policy.
2. The Risk of “AI Hallucinations”: Fabricated Case Law and Misleading Authorities
A major issue highlighted by the WRC is the growing prevalence of AI "hallucinations": outputs that appear credible but contain fabricated case citations, misquoted legislation, or invented facts. These errors can easily find their way into HR grievance responses, investigation reports, disciplinary outcomes, or submissions to third parties if not detected.
The WRC guidance warns that:
- AI may generate references to case law or WRC decisions that do not exist.
- AI may misquote statutory provisions or present non-Irish legal principles as applicable.
- The language used may sound authoritative, making inaccuracies harder to identify.
These risks materialised directly in Oliveira v Ryanair, where the employee relied on AI to prepare his submission. The Adjudication Officer criticised the wasting of time on “citations that were not relevant, misquoted and in many instances, non-existent”. These undermined the complainant’s credibility and contributed to the dismissal of the complaint.
Such errors can expose an employer to reputational and procedural risk. Any reference to case law or legal principles must be independently confirmed.
3. Data Protection and Confidentiality Concerns
Inputting personal data, sensitive information or commercially confidential details into publicly accessible AI tools poses a data protection risk. The WRC guidance reminds parties that these platforms may store, use, or train on the information entered.
Typical high-risk data includes:
- employee names
- disciplinary or grievance details
- medical or occupational health data
- protected disclosures
- payroll, contract or salary information
- sensitive organisational data
Using AI tools without appropriate controls may breach GDPR and internal data governance requirements.
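The control point above can be made concrete in practice: before any text is pasted into a public AI tool, obvious identifiers can be stripped out. The short sketch below is illustrative only; the patterns, placeholder labels and the `redact` helper are assumptions for demonstration, not part of the WRC guidance, and naive pattern-matching of this kind is no substitute for a proper data-protection review.

```python
import re

# Illustrative only: naive patterns for a few common identifiers.
# Real pre-processing needs organisation-specific rules and human review;
# names and free-text details (e.g. "Jane" below) cannot be caught by regex.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PPSN]": re.compile(r"\b\d{7}[A-Z]{1,2}\b"),  # Irish PPS number format
    "[PHONE]": re.compile(r"\b0\d{1,2}[ -]?\d{5,7}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

sample = "Contact Jane at jane.doe@example.ie, PPSN 1234567AB."
print(redact(sample))  # identifiers replaced; the name still needs manual review
```

Even with such a filter in place, the safest policy remains the one the guidance implies: do not enter personal, sensitive or commercially confidential data into publicly accessible tools at all.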
4. Overconfidence in AI-Generated HR or Legal Content
AI systems can generate text that is coherent and persuasive, but this can mask significant inaccuracy. Employers or HR practitioners may be tempted to rely on AI outputs when preparing investigation reports, drafting sanction rationales, or interpreting statutory obligations. This may create:
- flawed analysis of legal tests
- incorrect assumptions about procedural fairness
- unreliable guidance on disciplinary outcomes or statutory entitlements
- weakened organisational defences in the event of a claim
The WRC guidance explains that AI cannot assess the strengths of a case, interpret the nuances of employment equality law, or evaluate credibility.
5. The WRC Will Expect Accountability and Accuracy
Parties appearing before the WRC must understand the submissions they put forward. The guidance makes clear:
- You must be prepared to stand over all content in your submission.
- You may be questioned on any part of it.
- AI-generated material that is inaccurate may damage your credibility.
For employers, this highlights an operational risk. If internal documentation prepared through AI is inaccurate, inconsistent or unsubstantiated, it may be challenged at a hearing. Adjudication Officers are entitled to disregard irrelevant or misleading material and may give adverse weight to submissions that lack clarity or demonstrate poor procedural rigour.
6. Best Practices for HR Teams Using AI
The WRC guidance sets out several practical steps which employers should integrate into internal HR governance:
a. Verify all legal content: Confirm all references to legislation, Codes of Practice, and case law against trusted sources.
b. Never rely on AI to determine legal strategy or outcomes.
c. Avoid entering personal or sensitive data.
d. Ensure you understand all content before relying on it.
e. Consider a transparent disclosure statement if AI is used to assist with drafting.
7. What Employers Should Take from Oliveira v Ryanair
This case serves as a timely reminder for employers:
- A robust, well-documented disciplinary process is critical.
- Employers must be able to demonstrate fair procedure.
- AI-generated inaccuracies can materially undermine a party’s case.
- The WRC will not accept submissions, by either party, that are misleading, poorly constructed or based on fictitious legal references.
Ultimately, employers remain responsible for the accuracy of internal HR documents and any material presented at a WRC hearing.
8. Conclusion: Use AI Carefully and Strategically
AI offers genuine value for employers and HR teams, but only when managed within appropriate boundaries. Organisations should treat AI as a drafting aid, never as a legal adviser or a replacement for professional judgement.
The WRC’s 2025 guidance and the Oliveira v Ryanair decision should serve as a catalyst for employers to implement stronger controls, training and governance around AI usage.
References:
Workplace Relations Commission (WRC) (2025) Guidance on the use of AI tools to prepare material for submission to the WRC. [Online].
Workplace Relations Commission (WRC) (2025) Oliveira v. Ryanair DAC, ADJ-00055225. Available at: https://www.workplacerelations.ie/en/cases/2025/october/adj-00055225.html