Organizations Need to Recognize Potential AI Risks and Develop Effective Strategies to Manage Them

Organizations that want to use AI ethically, and with as little risk as possible, must first acknowledge the risks associated with implementing it.

Organizations in varied market segments have always had to manage the risks associated with new technologies and solutions to support and expand their businesses. They must do the same when implementing AI.

AI presents risks similar to those of deploying any impactful new technology in the enterprise, including inadequate strategic alignment with business goals, a lack of the technical and operational skills needed to support initiatives, and a failure to get buy-in throughout the organizational ranks.

For such challenges, senior management should lean on the best practices that have guided the effective adoption of other technologies. AI experts and management consultants advise senior leadership to identify areas where AI can help meet organizational objectives, to develop strategies that ensure they have the expertise to support AI initiatives, and to create strong change management policies to smooth and speed enterprise adoption.

But executives are finding that AI technologies in the enterprise can also present unique risks that need to be acknowledged and addressed directly.

Here are some areas of risk that can arise as organizations implement and use AI technologies in the enterprise.

A Lack of Employee Trust Can Shut Down AI Adoption

Most important, the organization must communicate directly with its employee base about the upcoming implementation of new AI technologies, because not all workers are ready to embrace AI. Professional services firm KPMG, in partnership with the University of Queensland in Australia, found that 61% of respondents to its "Trust in Artificial Intelligence: Global Insights 2023" report are either uncertain about or unwilling to trust AI. Without first building that trust, an AI implementation can be unproductive and problematic, according to experts.

For example, what would happen if workers on a factory floor don't trust an AI solution that determines a machine must be shut down for maintenance? Even if the AI system is nearly always accurate, if users don't trust it, that AI is a failure.

AI Can Have Unintentional Biases

At its most elementary level, AI ingests large volumes of data and, using algorithms, learns to perform from the patterns it identifies in that data. But when the data is biased or problematic, AI produces flawed results.

Problematic algorithms, such as those that reflect the biases of the developers, engineers and scientists who build them, can also lead AI systems to produce biased and incorrect results.
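A small sketch can make the mechanism concrete. The dataset and rates below are invented for illustration, not drawn from any real system: a model that learns directly from skewed historical decisions simply reproduces that skew.

```python
# Hypothetical illustration of how biased data leads to biased results.
# A toy "hiring" history in which past approvals are skewed by group, so a
# naive model that reproduces historical frequencies inherits the skew.

# (group, approved) pairs -- assumed, deliberately skewed historical data.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def approval_rate(group):
    """Approval frequency a frequency-based model would learn for a group."""
    decisions = [approved for g, approved in history if g == group]
    return sum(decisions) / len(decisions)

print(approval_rate("A"))  # 0.8
print(approval_rate("B"))  # 0.3
```

Nothing in the "model" is malicious; the 80% vs. 30% gap comes entirely from the training data, which is why testing and validating data quality matters.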

Biases, Errors Greatly Magnified By Volume of AI Transactions

Human workers, of course, have biases and make mistakes, but the consequences of their errors are limited by the volume of work they can do before the errors are caught, which is often not very much. The consequences of biases or hidden errors in operational AI systems, however, can be exponentially larger and more problematic.

As experts generally agree, a human might make dozens of mistakes in a day, but a bot handling millions of transactions a day magnifies any single error by millions.
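The arithmetic behind this magnification is simple. The volumes and error rate below are assumptions chosen purely for illustration:

```python
# Hypothetical figures: the same small error rate applied at human scale
# and at bot scale. None of these numbers come from a real system.

human_daily_tasks = 200
bot_daily_transactions = 5_000_000
error_rate = 0.001  # an assumed 0.1% error rate for both

human_errors_per_day = human_daily_tasks * error_rate     # roughly one error every five days
bot_errors_per_day = bot_daily_transactions * error_rate  # thousands of errors every day

print(human_errors_per_day)  # 0.2
print(bot_errors_per_day)    # 5000.0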

AI Might Be Delusional

It is worth noting that most AI systems are stochastic, or probabilistic. This means machine learning algorithms, deep learning, predictive analytics and other technologies work together to analyze data and produce the most probable response in a given scenario.

That is in contrast to deterministic AI environments, in which an algorithm's behavior can be predicted from its input.

The majority of real-world AI environments are stochastic or probabilistic, and they're not 100% accurate. "They return their best guess at what you're prompting," explained Will Wong, principal research director at Info-Tech Research Group.
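The distinction can be sketched with a toy "model," which is purely an assumption for illustration: it scores three candidate answers and either always returns the top one (deterministic) or samples in proportion to the scores (stochastic), as probabilistic systems effectively do.

```python
# A minimal sketch of deterministic vs. stochastic answer selection.
# The candidate answers and their scores are invented for illustration.
import random

candidates = {"answer A": 0.7, "answer B": 0.2, "answer C": 0.1}

def deterministic_pick(scores):
    # The same input always yields the same output.
    return max(scores, key=scores.get)

def stochastic_pick(scores):
    # Samples in proportion to the scores; repeated calls can differ.
    options = list(scores)
    weights = list(scores.values())
    return random.choices(options, weights=weights, k=1)[0]

print(deterministic_pick(candidates))  # always "answer A"
print(stochastic_pick(candidates))     # usually "answer A", sometimes B or C
```

The stochastic picker returns the most probable answer most of the time, which is exactly why its occasional wrong guesses are easy to mistake for reliable output.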

In fact, inaccurate results are common enough — particularly with more and more people using ChatGPT — that there’s a term for the problem: AI hallucinations.

“So, just like you can’t believe everything on the internet, you can’t believe everything you hear from a chatbot; you have to vet it,” Wong advised.

Vetting and confirming the accuracy of AI-delivered results is vital.

AI Can Create Unexplainable Results, Thereby Damaging Trust

Explainability, or the ability to determine and articulate how and why an AI system reached its decisions or predictions, is another term frequently used when talking about AI. Although explainability is critical to validating results and building trust in AI overall, it's not always possible, particularly when dealing with sophisticated AI systems that are continuously learning as they operate.
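For contrast, consider the kind of system that is explainable. In a simple linear model (the feature names and weights below are invented for illustration), each feature's contribution to a decision can be read off directly, which is exactly what deep, continuously learning systems make difficult.

```python
# A hypothetical linear scoring model whose decisions are fully explainable:
# every feature's contribution is just weight * value. All numbers are invented.

weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

# Per-feature contributions explain exactly why the score is what it is.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(contributions)    # {'income': 2.0, 'debt': -1.6, 'years_employed': 1.5}
print(round(score, 2))  # 1.9
```

Here "why was this applicant scored 1.9?" has a complete answer; a large neural model offers no such itemized breakdown, which is the gap explainability research tries to close.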

For example, Wong said, AI experts often don’t know how AI systems reached those faulty conclusions labeled as hallucinations.

Such situations can stop the adoption of AI, despite the benefits it can bring to many organizations.

In a September 2022 article, "Why businesses need explainable AI — and how to deliver it," global management consulting firm McKinsey & Company noted that "Customers, regulators, and the public at large all need to feel confident that the AI models rendering consequential decisions are doing so in an accurate and fair way. Likewise, even the most cutting-edge AI systems will gather dust if intended users don't understand the basis for the recommendations being supplied."

AI Can Behave Unethically, Illegally

Some uses of AI might result in ethical dilemmas for their users, said Jordan Rae Kelly, senior managing director and head of cybersecurity for the Americas at FTI Consulting.

“There is a potential ethical impact to how you use AI that your internal or external stakeholders might have a problem with,” she said. Workers, for instance, might find the use of an AI-based monitoring system both an invasion of privacy and corporate overreach, Kelly added.

Others have raised similar concerns. A 2022 White House report also highlighted how AI systems can operate in potentially unethical ways, citing a case in which "STEM career ads that were explicitly meant to be gender neutral were disproportionately displayed by an algorithm to potential male applicants because the cost of advertising to younger female applicants is higher and the algorithm optimized cost-efficiency."

Employee Use of AI Can Evade or Escape Enterprise Control

The April 2023 “KPMG Generative AI Survey” polled 225 executives and found that 68% of respondents haven’t appointed a central person or team to organize a response to the emergence of the technology, noting that “for the time being, the IT function is leading the effort.”

KPMG also found that 60% of those surveyed believe they’re one to two years away from implementing their first generative AI solution, 72% said generative AI plays a critical role in building and maintaining stakeholder trust, and 45% think it might have a negative effect on their organization’s trust if the correct risk management tools aren’t implemented.

But while executives consider which generative AI solutions and safeguards to implement in the coming years, many workers are already using such tools. A recent survey from Fishbowl, a social network for professionals, found that 43% of the 11,793 respondents had used AI tools for work tasks, and almost 70% of them did so without their boss's knowledge.

Info-Tech Research Group’s Wong said enterprise leaders are developing a range of policies to govern enterprise use of AI tools, including ChatGPT. However, he said companies that prohibited its use are finding that such restrictions aren’t popular or even feasible to enforce. As a result, some are reworking their policies to allow use of such tools in certain cases and with nonproprietary and nonrestricted data.

Enterprise Use Could Run Afoul of Proposed Laws and Expected Regulations

Governments around the world are looking at whether they should put laws in place to regulate the use of AI and what those laws should be. Legal and AI experts said they expect governments to start passing new rules in the coming years.

Organizations might then need to adjust their AI roadmaps, curtail their planned implementations or even eliminate some of their AI uses if they run afoul of any forthcoming legislation, Kelly said.

Executives could find that challenging, she added, as AI is often embedded in the technologies and services they purchase from vendors. This means enterprise leaders will have to review their internally developed AI initiatives and the AI in the products and services bought from others to ensure they’re not breaking any laws.

Hackers Can Use AI to Create More Sophisticated Attacks

Bad actors are using AI to make their attacks more sophisticated and effective, and to improve the likelihood that those attacks penetrate their victims' defenses.

“AI can speed up the effectiveness of the bad guys,” Kelly said.

Experienced hackers aren’t the only ones leveraging AI. Wong said AI — and generative AI in particular — lets inexperienced would-be hackers develop malicious code with relative ease and speed.

“You can have a dialogue with ChatGPT to find out how to be a hacker,” Wong said. “You can just ask ChatGPT to write the code for you. You just have to know how to ask the right questions.”

Poor Decisions Around AI Use Could Damage Reputations

After the February 2023 shooting at a private Nashville school, Vanderbilt University’s Peabody Office of Equity, Diversity and Inclusion responded to the tragic event with an email that included, at its end, a note saying the message had been written using ChatGPT. Students and others quickly criticized the technology’s use in such circumstances, leading the university to apologize for “poor judgement.”

The incident highlights the risk that organizations face when using AI: How they opt to use the technology could affect how their employees, customers, partners and the public view them.

Organizations that use AI in ways that some believe are biased, invasive, manipulative or unethical might face backlash and reputational harm. "It could change the perception of their brand in a way they don't want it to," Kelly added.


The risks associated with the use of AI can't be eliminated, but they can be managed.

Organizations must first recognize and understand these risks, according to multiple experts in AI and executive leadership. From there, they need to implement policies to help minimize the likelihood of such risks negatively affecting their organizations. Those policies should ensure the use of high-quality data for training and require testing and validation to root out unintended biases.

Policies should also mandate ongoing monitoring to keep biases from creeping into systems, which learn as they work, and to identify any unexpected consequences that arise through use.

And although organizational leaders might not be able to predict every consideration, experts said enterprises should have frameworks to ensure their AI systems include the policies and controls needed to produce ethical, transparent, fair and unbiased results, with employees monitoring these systems to confirm the results meet the organization's established governance standards.

Company Contact Information:
Thomas J. Canova
Co-Founder, CMO
Modevity, LLC
