Overcoming Algorithmic AI Bias: A Guide for Global Executives and HR

Posted by Marbenz Antonio on April 18, 2023


The trend of monitoring HR tools and applications for AI bias is gaining momentum globally, driven by international and local data privacy laws and by guidance from the US Equal Employment Opportunity Commission (EEOC). As part of this trend, the New York City Council has passed regulations that require organizations to perform annual bias audits on the automated employment decision tools used by their HR departments.

Enforcement of the new regulations, passed in December 2021, will require organizations that use algorithmic HR tools to perform an annual bias audit. Organizations that fail to comply face fines ranging from USD 500 to USD 1,500 per violation. In anticipation of this transition, some organizations are establishing a yearly evaluation, testing, and review procedure. Here is a recommendation for how that procedure might be implemented.

Step one: Evaluate

For organizations to have their hiring and promotion systems assessed, it is essential to take an active approach and educate stakeholders on the significance of the procedure. A cross-functional evaluation team of HR, Data, IT, and Legal professionals can be instrumental in navigating the constantly evolving regulatory framework around AI. This team should be integrated into the organization’s business processes, and its role should be to assess the entire process from sourcing to hiring: how the organization sources, screens, and recruits both internal and external candidates.

The evaluation team must scrutinize and record each system, decision point, and vendor according to the population it serves, including hourly workers, salaried employees, different pay groups, and countries. While some third-party vendor information may be confidential, the evaluation team should still examine those processes and establish contractual safeguards with vendors. Any proprietary AI should be transparent, and the team should use the review to promote diversity, equity, and inclusion in the hiring process.
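One way to make the inventory concrete is a structured record per system and decision point. The sketch below is illustrative only: the field names, vendor name, and flagging rule are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HRSystemRecord:
    """Illustrative inventory entry for one automated HR tool."""
    system: str                  # e.g. resume screener, assessment platform
    vendor: str
    decision_point: str          # sourcing, screening, promotion, ...
    populations: list = field(default_factory=list)  # hourly, salaried, country, ...
    uses_ai: bool = False
    last_audit: Optional[str] = None  # ISO date of the last bias audit, if any

# Hypothetical inventory with a single entry
inventory = [
    HRSystemRecord("resume-screening", "ExampleVendor", "screening",
                   populations=["salaried", "US"], uses_ai=True),
]

# Flag any algorithmic tool that has never been audited
needs_audit = [r.system for r in inventory if r.uses_ai and r.last_audit is None]
print(needs_audit)  # → ['resume-screening']
```

Keeping the inventory in a structured form like this makes it straightforward for the evaluation team to query which tools, vendors, and populations are due for review each cycle.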

Step two: Impact testing

As governments worldwide enact regulations on the use of AI and automation, organizations need to assess and revise their processes to ensure compliance with the new mandates. This entails meticulous scrutiny and testing of processes that involve algorithmic AI and automation, taking into account the specific regulations that apply in each state, city, or locality. Given the varying rules across jurisdictions, it is crucial for organizations to stay well informed and adhere to the requirements in order to mitigate potential legal or ethical ramifications.
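Impact testing typically centers on comparing selection rates across demographic groups. A minimal sketch, assuming simple (group, selected) applicant records and using the widely cited four-fifths rule as a screening threshold; the exact categories and thresholds an organization must report depend on the jurisdiction, so treat this as illustrative rather than a compliance calculation:

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Compute per-group selection rates and impact ratios
    relative to the highest-selected group.

    candidates: iterable of (group, selected) pairs, where
    selected is True if the tool advanced the candidate.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: (rates[g], rates[g] / top) for g in rates}

# Illustrative data: each record is (group, advanced-by-tool?)
records = [("A", True)] * 40 + [("A", False)] * 60 \
        + [("B", True)] * 25 + [("B", False)] * 75

for group, (rate, ratio) in sorted(impact_ratios(records).items()):
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f} impact_ratio={ratio:.2f} ({flag})")
# prints:
# group A: rate=0.40 impact_ratio=1.00 (ok)
# group B: rate=0.25 impact_ratio=0.62 (review)
```

Running a calculation like this regularly, per decision point and per population from the evaluation inventory, is what turns impact testing from a one-off exercise into a repeatable process.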

Step three: AI bias review

Once the evaluation and impact testing have been concluded, the organization can commence the bias audit, which may be mandated by law and should be performed by an impartial algorithmic institute or third-party auditor. It is crucial to select an auditor with expertise in HR or talent who can be trusted to provide explainable AI and who holds RAII certification and DAA digital accreditation. Our organization is well equipped to help companies become data-driven and achieve compliance. If you require any assistance, please don’t hesitate to contact us.

The Role of Data and AI Governance

Having a suitable technology blend can be critical to an effective data and AI governance strategy, with a contemporary data architecture such as data fabric being a vital element. Policy orchestration within a data fabric architecture can simplify intricate AI audit processes. By incorporating the AI audit and its associated processes into the governance policies of your data architecture, your organization can gain insight into the areas that require ongoing scrutiny.

What will happen next?

IBM Consulting has been assisting clients in establishing an evaluation process for bias and related areas. The most challenging aspect is setting up the initial evaluation and taking stock of every technology and vendor that the organization engages for automation or AI. Implementing a data fabric architecture can streamline this process for our HR clients: it offers visibility into policy orchestration, automation, AI management, and the monitoring of user personas and machine learning models.

Organizations must recognize that this audit is not a one-time or stand-alone event. It is not only about the regulations enacted by a single city or state. These laws are part of a sustained trend where governments are intervening to mitigate bias, establish ethical AI use, safeguard private data, and reduce harm resulting from mishandled data. Therefore, organizations must allocate funds for compliance costs and form a cross-disciplinary evaluation team to develop a regular audit process.

Here at CourseMonster, we know how hard it may be to find the right time and funds for training. We provide effective training programs that enable you to select the training option that best meets the demands of your company.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com
