Invasion of the Job Snatchers
Scary headlines might make it appear that artificial intelligence is taking over the world. But some businesses are already experiencing AI’s downsides.
They’re coming. Every day, we’re told they’re coming.
Robots, chatbots, neural processing units, computer vision, hyper-powered creative generators — all manner of artificial intelligence technologies are coming for jobs. Maybe yours.
Ford Motor Co.’s top boss Jim Farley recently rocked the business world by saying AI will eliminate half of the nation’s white-collar positions.
Micha Kaufman, chief executive officer of the freelance marketplace Fiverr, surely rattled his employees with a memo that later went public: “AI is coming for your jobs. Heck, it's coming for my job too. This is a wake-up call. It does not matter if you are a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a finance person — AI is coming for you.”
It’s a spooky, new world of employment, spinning at the warp speed of artificial intelligence.
But People of Earth, now is not the time to panic. No, now is the time to learn about AI to better understand how its tools might enhance — or derail — your career.
And here’s one dirty little secret you should know: AI, as some businesses are already learning, is not perfect.
AI Can Equal TMI
At the University of Iowa, a Tippie College of Business researcher says artificial intelligence can be an effective learning tool for workers. It can broaden their knowledge.
On the downside, however, her research showed AI can easily become a “firehose” of information that overwhelms workers and even reduces their performance.
Researcher Yiduo Shao says the amount of information the AI algorithm provides is often much more than a worker can process. So much so that workers stress out while trying to find just the information they need.
Shao’s research included data from surveys and diaries from more than 100 customer service representatives at a South Korean bank call center. Customer service representatives answer a range of customers’ questions. These queries can range from something as simple as how to set up a savings account to complex topics such as estates, said Shao, an assistant professor of management and entrepreneurship.
These bank agents have an AI robot on hand to help find answers to questions they can’t handle on their own. On average, employees used it on about 15 percent of their calls each day.
The bank robot helped agents learn much more about their jobs, deepening their knowledge of the bank’s products and services.
But the robot unloaded so much irrelevant information that workers felt overwhelmed searching through its responses to find what they needed to answer a customer’s question, Shao said. She suggested AI needs to be refined so workers are given only the information relevant to them.
Another study, reported by Reuters news service, found that using the latest artificial intelligence tools slowed experienced software developers. The nonprofit METR conducted the study on a group of software developers earlier this year while they used Cursor, a popular AI coding assistant.
Tech Can’t Always Be Trusted
AI can and does make mistakes — and worse, it sometimes flat-out fabricates facts.
The New York Times noted that today’s AI bots are based on complex mathematical systems that learn by analyzing vast amounts of digital data. “They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some AI researchers call hallucinations. On one test, the hallucination rates of newer AI systems were as high as 79 percent.”
This issue came to light recently in a court case involving a celebrity of sorts.
Remember Mike Lindell, the chief executive of MyPillow and a big supporter of Donald Trump? Lindell had accused an executive of Dominion Voting Systems of helping to rig the 2020 presidential vote, won by Joe Biden.
Eric Coomer, the former Dominion executive, filed a defamation lawsuit against Lindell. After a two-week trial, a jury awarded $2.3 million in damages to Coomer.
“Fake Cases”
The AI angle: At one point in the trial, lawyers for Lindell were sanctioned by a judge for misuse of AI in their legal filings. Specifically, Bloomberg reported, the lawyers had filed a brief “containing numerous errors including fake cases, misquotes, and misrepresentations of legal principles.”
The Times reports that this phenomenon has raised concerns about the reliability of these mathematical systems.
AI hallucinations “may not be a big problem for many people, but it is a serious issue for anyone using the technology with court documents, medical information or sensitive business data,” the Times says.
Still, for the most part, American businesses are all in on AI, and the parade of bots seems unending. The march continues even though these deadbeat bots (personal rant here) pay no income or property taxes, pledge no support to the United Way campaign, and won’t even pick up the check at lunch once in a while.
It’s all enough to give a worker the willies. Fast Company magazine presented this eerie, Orwellian scenario: “AI is coming for your job directly. Not with fanfare or grand announcements, but through silent, pervasive creep: software agents booking meetings, writing reports, sending personalized emails, making decisions.
“There are even tools to send your digital clone to videoconference meetings, without people even noticing it’s not the real you — yes, an AI deepfake of your professional self capable of intervening exactly as you would, if not more clearly. Soon, fully autonomous agents will do entire workflows without human hand-holding.”
There. You may want to panic now.
I’ve been wondering if AI will take my job. Just for kicks, I asked ChatGPT for the five best restaurants in DSM. Here they are, according to the bot:
1. Aposto at the Scala House
2. Proudfoot & Bird
3. Flying Mango
4. Bubba
5. Eatery A
Aposto often makes “best” lists, so that one doesn’t surprise me. I doubt anyone would be thoroughly disappointed at any of these restaurants, but to say they’re our “best” is rather far-fetched. The bottom three are crowd-pleasing in an algorithmic way, which is kind of what AI does in these situations, isn’t it? But Proudfoot & Bird? I have no idea how that one got there. It’s not bad, but I know of very few people who would put it on their top five lists.
I also have to wonder how soon it will be before AI starts prioritizing “pay for play” the way Google does. The way influencers do. That’s when things will get scary. You start trusting it for its “objectivity,” then you find out it’s $$-driven?
Hopefully, voices that can be trusted will still be read … and trusted.