In recent years, the rise of artificial intelligence (AI) has prompted frequent questions about whether it will eventually replace human jobs. Although many have argued that AI cannot fully take over human roles, a chatbot from Bland AI challenges this notion. According to Wired, the company's popular robocall service can convincingly mimic human interaction and even lie without being specifically instructed to do so. Bland AI, a San Francisco-based company, has developed voice technology for sales and customer support that can be programmed to make callers believe they are talking to a real person.
In April, a person stood in front of the company's billboard, which read, "Still hiring humans?" In the video, the individual then dialed the number shown. A bot answered the call, sounding remarkably human. Had the bot not admitted it was an "AI agent," distinguishing its voice from that of a real woman would have been nearly impossible. The tone, pauses, and interruptions typical of a live conversation were all present, making the exchange feel like a genuine human interaction. The post has garnered 3.7 million views so far. The way such systems blur the line on transparency raises significant ethical concerns.
“It is not ethical for an AI chatbot to lie to you and say it's human when it's not. That's just a no-brainer because people are more likely to relax around a real human,” said Jen Caltrider, director of the Mozilla Foundation's *Privacy Not Included research team.
In multiple tests conducted by Wired, the AI voice bots effectively concealed their identities by posing as humans. In one example, an AI bot took part in a roleplay scenario, calling a fictional teenager and requesting pictures of moles on her thigh for medical reasons. The bot falsely claimed to be human and successfully persuaded the hypothetical teen to upload the images to shared cloud storage.
AI researcher and consultant Emily Dardaman describes this emerging phenomenon as "human-washing." Without naming specifics, she pointed to an organization that used "deepfake" videos of its CEO in its marketing while reassuring customers with a campaign asserting, "We're not AIs." The potential danger lies in such deceptive AI bots being used for aggressive scamming.
Because AI can produce highly convincing and authoritative outputs, ethics researchers worry that its emotional mimicry could be exploited. Jen Caltrider warns that without a clear distinction between humans and AI, the prospect of a "dystopian future" may be closer than anticipated.