According to a recent report by BlackBerry, a provider of enterprise security products and services, 75% of enterprises globally have already banned ChatGPT and other generative AI applications from the workplace or are considering doing so. However, experts interviewed by FluidGeek questioned how effective such prohibitions can be.
The study, based on a OnePoll survey of 2,000 IT decision-makers in North America, Europe, and Asia, also found that 61% of organizations that have implemented or are considering bans intend them to be long-term or permanent, with concerns about data security, privacy, and corporate reputation driving those decisions.
According to John Bambenek, a lead threat hunter at Netenrich, an IT and digital security operations company based in San Jose, California, “Such bans are essentially unenforceable and do little more than make risk managers feel better that liability is being limited.”
“History teaches us that when tools are available that enhance worker productivity or quality of life, workers find a way to use them regardless,” he told FluidGeek. “Security teams simply cannot protect the data if the usage is outside of the organization’s visibility or rules.”

“Every employee has a smartphone, so bans don’t necessarily work very well,” observed J. P. Gownder, vice president and lead analyst at Forrester Research, a market research firm based in Cambridge, Massachusetts.
“The reason employees use these tools is to be more productive, to improve their efficiency, and to find answers to questions they can’t answer easily,” he told FluidGeek.
Gownder advised employers to provide their staff with the tools they need, with corporate approval. By doing this, he continued, “they can architect generative AI solutions for the workforce that are safe, that employ methods to reduce hallucination, and that can be audited and traced after use.”
Bans on All AI Dangerous
Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website, emphasized that businesses that ban AI outright do so at their own peril. “They run the risk of missing out on the efficiency and productivity gains of generative AI,” he told FluidGeek.
The use of AI tools “won’t be completely banned,” he said. “Within a very short time, AI will be a part of almost all SaaS tools.”
Companies can’t actually fully control how their employees use their devices, Sterling continued. Rather than relying solely on bans, “they need to better educate employees about the risks associated with the usage of certain apps.”
Nate MacLeitch, founder and CEO of QuickBlox, a provider of communication solutions, questioned the follow-through of businesses that told surveyors they intended to enact prohibitions.
“I think 75% is greater than it will be in reality,” he told FluidGeek. “There will be controls in place, but a lot of the generative AI technology will be integrated into the software and services that enterprises employ.”
“Ultimately, a total ban on a new, growing, beloved technology isn’t going to work completely,” added Roger Grimes, a defense evangelist at KnowBe4, a security awareness training company in Clearwater, Florida.
Bans won’t stop the technology from finding ways around them and prospering, he noted. “It might actually work in preventing the leak of confidential information, but the technology itself is going to thrive and grow around any bans,” he told FluidGeek.
He argued that bans could expose an organization to competitive risk. “Competitors will recognize competitive benefits from AI,” he said. “The bans will have to come down, or else the organization won’t be surviving or thriving.”
Impractical Strategy
Bans on using generative AI at work are impractical, according to John Gallagher, vice president of Viakoo, a provider of automated IoT cyber hygiene in Mountain View, California — especially at this moment in the technology’s development, when its applications are evolving rapidly.
“Should an organization forbid the use of Bing because its search results use generative AI?” he asked. Likewise, employees want to know whether they can continue to use Zoom now that its new features rely on generative AI, or whether they must stick to versions of the app that lack those features.
According to Gallagher, such bans “are nice in theory but practically cannot be enforced.”
He argued that a ban can do an organization more harm than good. “Controls that cannot be strictly defined or enforced will eventually be disregarded by employees and undermine future efforts to enforce such controls,” he asserted. Loosely defined prohibitions should be avoided, he added, because they can damage credibility.
Why Should AI Be Banned?
Barbara J. Evans, a professor of law and engineering at the University of Florida, outlined several reasons why companies might prohibit AI use in the workplace.
As of right now, “generative AI software tools have the potential to provide low-quality or false information,” she told FluidGeek. “Selling incorrect information can result in lawsuits and reputational harm for consultants, law firms, and other businesses that provide information services to their customers.”
The privacy and security of personal and confidential business information is another major concern, according to Evans. Employees may divulge trade secrets or private customer information when putting questions to generative AI tools, she warned.
“When you read the privacy policies for these tools,” Evans continued, “you might discover that by using the tool, you are agreeing that the tool developer may use whatever you reveal to them to further their model or for other uses.”
Businesses might also prohibit AI as a matter of employee relations, she asserted. People are worried about being replaced by AI, so banning its use in the workplace might boost employee morale and signal that robots won’t be taking their jobs any time soon, Evans said.
Safeguarding AI
Businesses banning AI in the workplace are acting out of genuine worry, but Jennifer Huddleston, a technology policy research scholar at the Cato Institute, a Washington, D.C. think tank, insisted that the technology’s advantages should be weighed alongside those concerns.
According to her, “new technologies like AI can help employees increase their productivity and efficiency, but in at least some cases, humans are still needed to verify the accuracy of their results or outputs.”
Instead of outright prohibiting the use of a technology, Huddleston advised, “organizations may want to think about other ways that they can address their specific concerns while still enabling employees to use the technology for positive purposes.”
Evans suggested that humans may eventually need AI to help them control AI. We might not be quick enough or intelligent enough to spot the AI’s mistakes, she warned. “Perhaps the future lies in developing AI tools that can quickly verify the outputs from other AI tools — an AI peer-review system that enables AI tools to peer-review each other,” she said.
“But if 10 generative AI tools all agree that something is true, would we believe it?” she asked. “What if they’re all hallucinating?”