Artificial intelligence is no longer the stuff of science fiction; it is an integrated and rapidly expanding feature of the modern workplace. From automated resume screeners to sophisticated performance monitoring software, AI is reshaping how businesses operate and how individuals experience their careers. For both employers and employees in Calgary and across Alberta, this technological shift introduces a host of new opportunities and complex legal challenges. Understanding how AI intersects with established provincial employment legislation is crucial for navigating this new frontier responsibly and in compliance with the law.

This blog post examines the impact of AI on key areas of the employment relationship, exploring the issues from both the employer’s and employee’s perspectives through the lens of Alberta’s primary legal frameworks: the Employment Standards Code, the Personal Information Protection Act (PIPA), and the Alberta Human Rights Act.

The Digital Handshake: AI’s Role in Hiring and Recruitment

The hiring process is often the first point of contact between a potential employee and a company’s AI systems. Many organizations now use AI-powered software to sift through hundreds or even thousands of applications, shortlisting candidates based on predefined criteria.

An Employer’s Perspective

For employers, the appeal is obvious. AI promises efficiency, cost savings, and the ability to identify top-tier candidates by analyzing data points that a human recruiter might miss. The goal is to develop an objective, data-driven process that eliminates unconscious human bias from the initial screening.

However, this presumption of objectivity conceals a significant legal pitfall. If an AI tool is trained on historical company data that reflects past biases, the algorithm will learn and perpetuate those same biases. For example, if a company has historically hired more men for leadership roles, an AI trained on that data may inadvertently penalize qualified female applicants. This could amount to a systemically discriminatory practice, putting the employer in direct violation of the Alberta Human Rights Act. This Act prohibits discrimination based on protected grounds such as gender, age, race, religious beliefs, and physical or mental disability. An employer cannot delegate their legal obligation to provide a discrimination-free hiring process to a machine.
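To make the mechanism concrete, here is a deliberately simplified sketch using entirely hypothetical data and scoring logic (not any real vendor's product). A naive screener that scores candidates by similarity to past hires will mechanically reproduce whatever imbalance exists in the historical record:

```python
# Hypothetical historical leadership hires. "attr" stands in for any
# characteristic (or proxy for one) correlated with a protected ground.
past_hires = [
    {"attr": "A", "years": 10},
    {"attr": "A", "years": 8},
    {"attr": "A", "years": 12},
    {"attr": "B", "years": 11},
]

def score(candidate):
    # Score = fraction of past hires sharing the candidate's attribute,
    # plus a small weight on experience. The attribute term is where the
    # historical bias enters the model.
    share = sum(1 for h in past_hires if h["attr"] == candidate["attr"])
    return share / len(past_hires) + 0.01 * candidate["years"]

# Two candidates with identical experience:
cand_a = {"attr": "A", "years": 10}
cand_b = {"attr": "B", "years": 10}

print(score(cand_a))  # 0.75 + 0.10 = 0.85
print(score(cand_b))  # 0.25 + 0.10 = 0.35
```

Although neither the code nor the data mentions a protected ground by name, the equally experienced candidate from the underrepresented group is ranked far lower, which is exactly the pattern that can ground a human rights complaint.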

An Employee’s Perspective

For job applicants, interacting with AI recruitment systems can be a frustrating and opaque experience. A rejection may come with no feedback, leaving the candidate to wonder if they were assessed fairly or simply filtered out by an algorithm that misunderstood their resume.

The core legal issue for applicants is the right to be assessed on merit, free from discrimination. If a candidate suspects they were screened out based on a protected ground (for instance, an algorithm that flags resume gaps, which could disproportionately affect women who took parental leave, or one that down-ranks older workers based on graduation dates), they may have grounds for a human rights complaint. The challenge, however, lies in proving that the AI was the source of the discrimination, a difficult task without transparency into how the algorithm functions.

The Algorithmic Overseer: Performance Monitoring and AI

Once an employee is hired, AI’s role can shift to monitoring productivity and managing performance. This can range from tracking computer activity and email response times to analyzing customer service call sentiment and even monitoring employee movements within a physical workspace.

An Employer’s Perspective

Employers may implement these tools to optimize workflows, ensure compliance with company policies, and gather objective data for performance reviews. The argument is that data-driven insights can help identify both high-performing employees and those who may need additional support or training.

The primary legal framework governing this practice in Alberta is the Personal Information Protection Act (PIPA). PIPA requires that private-sector organizations only collect, use, or disclose personal information for purposes that a reasonable person would consider appropriate in the circumstances. Furthermore, employers must notify employees about the information being collected and the reasons for doing so.

An employer cannot engage in unfettered surveillance. They must be prepared to justify the nature and extent of the monitoring as reasonable and necessary for legitimate business purposes. For example, is continuous keystroke logging for an office administrator a reasonable measure? A court or privacy commissioner may find that it is not, deeming it an excessive intrusion into employee privacy.

An Employee’s Perspective

For employees, constant AI-driven monitoring can feel invasive and create a high-stress environment. The fear of being judged by an algorithm based solely on quantitative metrics, without consideration for context or qualitative contributions, can erode morale and trust.

Under PIPA, employees have a right to know what personal information their employer is collecting, how it is being used, and to whom it is being disclosed. The collection must be reasonable. If an employee feels the surveillance is excessive and not clearly tied to their job functions, they may have grounds for a complaint under PIPA. In extreme cases, a work environment made intolerable by oppressive and unreasonable electronic monitoring could potentially form the basis for a constructive dismissal claim, where the employee resigns and argues the employer’s actions effectively terminated their employment.

Can a Robot Fire You? AI in Disciplinary and Termination Decisions

One of the most contentious applications of AI in the workplace is its use in making or heavily influencing decisions about discipline and termination. An AI system might automatically flag an employee for termination after they fall below a certain performance threshold for a set period.

An Employer’s Perspective

From an employer’s standpoint, using AI in these decisions may seem like a way to enforce company policy consistently and without personal favouritism. The data appears to speak for itself, providing a clear and objective justification for the decision.

However, this approach is fraught with legal peril. In Alberta, terminating an employee without providing reasonable notice or pay in lieu of notice is a wrongful dismissal, unless the employer has “just cause.” Relying solely on an algorithm’s output to establish just cause is exceptionally risky. AI systems cannot weigh context. For instance, an algorithm would not understand that an employee’s sudden drop in performance was due to a recently diagnosed medical condition requiring accommodation, or a family emergency. A decision made without this human context would likely be deemed a wrongful dismissal. The common law duty to conduct a fair investigation and consider all circumstances before alleging just cause cannot be satisfied by an algorithm. The requirements for providing termination pay or notice under the Alberta Employment Standards Code remain firmly in place, regardless of the data used to inform the decision.

An Employee’s Perspective

Being terminated by an automated system can be a dehumanizing experience. It denies the employee the opportunity to explain, provide context, or seek clarification. If the data relied on by the AI was inaccurate or incomplete, the employee is left with little recourse in the moment.

An employee terminated in this manner would have substantial grounds to launch a wrongful dismissal claim. Their legal position would be that the employer failed to provide adequate notice and that the data-driven reason did not amount to just cause. The employer would bear the burden of proving that the termination was legally sound, a difficult task if their primary evidence is an algorithmic output devoid of human oversight and contextual understanding.

Accommodation and Accessibility: Reconciling AI with Human Needs

The employer’s duty to accommodate employees with protected characteristics under the Alberta Human Rights Act is a cornerstone of employment law. This duty extends to the point of undue hardship and must be taken into account when implementing any AI system.

AI can be a powerful tool for enhancing accessibility, offering solutions such as voice-to-text software for employees with mobility impairments or scheduling tools that help manage modified work hours. However, it can also create barriers. A rigid, AI-driven performance management system that does not account for an employee who needs modified duties due to a disability could be inherently discriminatory. An employer cannot claim that their AI system is unable to accommodate an employee; the legal duty rests with the employer, not their software. The need for human intervention to ensure legal obligations regarding accommodation are met is paramount.

Proactive Strategies for the AI-Driven Workplace

The integration of AI into the workplace is not a passing trend. As this technology becomes increasingly sophisticated, its impact on the employment relationship will continue to grow.

The Future of AI for Employers

The path forward requires a proactive and legally informed approach. It is not enough to simply purchase and implement a new piece of software. It is essential to conduct due diligence to ensure AI tools are not built on biased data. Clear, transparent policies must be developed to inform employees about how AI is being used to monitor them or make decisions affecting their employment, consistent with PIPA. Most importantly, human oversight must remain a critical part of the process. AI should be treated as a tool to assist human decision-makers, rather than replace them, especially in sensitive areas such as hiring, discipline, and termination.

The Future of AI for Employees

Awareness is key. Understand your rights to privacy under PIPA and to be free from discrimination under the Alberta Human Rights Act. Read company policies carefully and do not hesitate to ask questions about how technology is being used to manage your work. If you believe a decision affecting you was made unfairly by an automated system, it is essential to understand your legal options.

Getz Collins and Associates: Providing Trusted Employment Law Services in Calgary

If you have questions about how AI-driven tools are affecting your workplace rights, hiring practices, privacy, or termination risks, the employment lawyers at Getz Collins and Associates can help. We advise both employees and employers on navigating the legal complexities of artificial intelligence under the Employment Standards Code, PIPA, and the Alberta Human Rights Act. Contact our team online or call (587) 391-5600 to book a confidential consultation on your employment law issue today.