Robot rights: 5 ways to manage the AI moral dilemma in the workplace

By Jasmine Henry


The chatbots are coming for enterprise productivity, but not everyone is sold on the morality of robot rights. The ethics of artificial intelligence (AI) remain a prominent debate within philosophical, technological, political and academic circles. As CIOs explore enterprise innovation through new cognitive apps and services, it's worth understanding the different perspectives on where the moral boundaries lie.

Research on the state of AI in the enterprise reveals an active shift toward mainstream adoption. In 2017, 61 percent of enterprises surveyed by Narrative Science reported they had implemented AI, according to GlobeNewswire. It is predicted that by 2021, more than 50 percent of enterprises will spend more per year on AI technology such as chatbots than on traditional mobile app development.

The state of robot rights in 2018

When Wired broke the news late last year that filings for an AI church were underway, the debate about robot rights returned to the spotlight, though in truth it had never really gone away. An online open letter posted by the Future of Life Institute calling for research into AI safety has received 8,000 signatures, including the support of Elon Musk, Peter Norvig of Google, Steve Wozniak and the DeepMind co-founders.

Worldwide, 61 percent of individuals believe robots will have a net positive impact on society, according to Arm research. By some accounts, that impact is already being felt: TechCrunch reports that Facebook has unveiled a suicide prevention AI.

Productivity, the workplace and robot rights

While much of the publicized debate around robot rights focuses on worst-case scenarios, like autonomous killers, plenty of people worry about AI's impact in the workplace. The World Economic Forum writes that productivity gains from automation may go too far, displacing human workers.

For the best results, CIOs are wise to consider cognitive intelligence from the perspectives of disclosure, bias, accountability, data privacy and human oversight.

1. Disclose the use of robots

“One of the keys to keeping AI ethical is for it to be transparent,” Rob High, vice president and chief technology officer of IBM Watson, told Forbes. In most cases, your customers and employees should be made aware that they’re talking with a chatbot service rather than a human.
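As a rough illustration, a chatbot can make that disclosure automatically at the start of every conversation. The sketch below is a minimal Python example; the ChatSession class and its reply handler are hypothetical placeholders, not the API of any particular chatbot framework.

```python
# Minimal sketch: prepend a one-time disclosure to a chatbot session.
# ChatSession and its methods are hypothetical illustrations.

DISCLOSURE = (
    "Hi! I'm an automated assistant, not a human agent. "
    "You can ask to be transferred to a person at any time."
)

class ChatSession:
    def __init__(self):
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self._generate_answer(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{DISCLOSURE}\n\n{answer}"
        return answer

    def _generate_answer(self, user_message: str) -> str:
        # Placeholder for the actual bot logic.
        return "Thanks for your question. Let me look into that."

session = ChatSession()
print(session.reply("Can you help me reset my password?"))
```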

2. Control for bias

Technology isn’t free of human bias. This can have negative consequences when a bot’s actions or recommendations are informed by bias-ridden algorithms or data sets. When it comes to the design and implementation of AI in the workplace, controlling for human bias is an important ethical consideration, especially if the intelligence will make impactful recommendations.
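One concrete, if simplified, way to control for bias is to check whether a bot's recommendations fall disproportionately on one group. The Python sketch below applies the common "four-fifths rule" heuristic to invented outcome data; the group labels and records are assumptions for illustration only.

```python
# Minimal sketch: checking recommendations for disparate impact using the
# "four-fifths rule" heuristic. The sample data below is invented; a real
# check would run against production outcomes.

from collections import Counter

def selection_rates(records):
    """Return the positive-recommendation rate per group."""
    totals, positives = Counter(), Counter()
    for group, recommended in records:
        totals[group] += 1
        if recommended:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(records) -> bool:
    rates = selection_rates(records)
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical outcomes: (group label, was the candidate recommended?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

print(selection_rates(sample))     # group A: ~0.67, group B: ~0.33
print(passes_four_fifths(sample))  # False, so the model should be reviewed
```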

3. Provide accountability

IP lawyer Maya Medeiros recommends in Social Media Law Bulletin that organizations provide accountability when developing chatbots and give users insight into how bots make recommendations. Not only should end users understand robotic reasoning, there’s likely value in “program-level accountability to explain why a chatbot reached a decision to act a certain way,” Medeiros said.
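A lightweight way to support that kind of program-level accountability is to record, alongside every recommendation, the inputs and the human-readable rule that produced it. The following Python sketch is illustrative only; the Decision structure and the example usage rule are invented, not drawn from Medeiros or any specific product.

```python
# Minimal sketch: attaching an auditable rationale to every bot decision.
# The Decision structure and the example rule are illustrative assumptions.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Decision:
    user_id: str
    recommendation: str
    factors: dict    # the inputs that drove the outcome
    rule: str        # human-readable explanation of the logic applied
    timestamp: str

def recommend_plan(user_id: str, monthly_usage_gb: float) -> Decision:
    if monthly_usage_gb > 50:
        rec, rule = "unlimited", "usage above 50 GB/month triggers unlimited plan"
    else:
        rec, rule = "standard", "usage at or below 50 GB/month keeps standard plan"
    return Decision(
        user_id=user_id,
        recommendation=rec,
        factors={"monthly_usage_gb": monthly_usage_gb},
        rule=rule,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Each decision can be logged for later review by end users or auditors.
print(json.dumps(asdict(recommend_plan("u-123", 72.5)), indent=2))
```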

4. Consider data privacy

Any inputs and outputs from conversational interfaces in the workplace may include sensitive data and should be covered in privacy policies accordingly.

Depending on the scope of your chatbot, it may also be important for users to know that their data will remain encrypted, especially if your users are transmitting credit card numbers or health data. “Users need to know that the questions they ask and the interactions they have with your bots will remain private and secure,” writes Joe Amditis of CUNY on Medium.
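Encryption aside, one simple safeguard is to redact obviously sensitive patterns, such as card-number-like digit strings, before a transcript is logged or reused for training. The short Python sketch below is illustrative only and is not a complete, PCI-compliant solution.

```python
# Minimal sketch: redacting card-number-like strings from chat transcripts
# before they are stored. The regex is a simple illustration.

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(message: str) -> str:
    return CARD_PATTERN.sub("[REDACTED CARD NUMBER]", message)

print(redact("My card is 4111 1111 1111 1111, can you update billing?"))
# -> "My card is [REDACTED CARD NUMBER], can you update billing?"
```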

5. Maintain human oversight

“Garbage in, garbage out” is a pillar of computer science, and it’s been demonstrated in highly publicized examples of chatbots gone rogue. Bots have been trained to generate inflammatory and offensive content, thanks to humans “trolling” public AI experiments with off-color inputs.

While your AI algorithms may be trained to constantly learn from new inputs, it’s important to ensure they’re not learning from human frustration, profanity or hatred. More importantly, avoid letting workplace AI operate without human oversight. Most proponents and opponents of robot rights would agree that little good can come from an unchecked chatbot.
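In practice, that oversight can start with a simple gate that keeps abusive inputs out of the learning loop and routes them to a person instead. The Python sketch below uses a toy blocklist and in-memory queues as stand-ins; real deployments would typically rely on trained toxicity classifiers and proper review tooling.

```python
# Minimal sketch: screening user messages before they enter a chatbot's
# learning pipeline. The blocklist and queues are placeholder assumptions.

BLOCKLIST = {"idiot", "stupid", "hate"}   # illustrative only

review_queue = []    # messages held for human review
training_data = []   # messages considered safe to learn from

def screen_for_training(message: str) -> None:
    words = set(message.lower().split())
    if words & BLOCKLIST:
        review_queue.append(message)   # a person decides what happens next
    else:
        training_data.append(message)

screen_for_training("You are an idiot")
screen_for_training("What are your support hours?")
print(len(review_queue), len(training_data))  # 1 1
```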

Whether or not you agree that there’s a serious case for a kill switch, carefully monitoring your cognitive intelligence for quality after implementation is probably the best possible plan.

Written By

Jasmine E. Henry, MS

Jasmine is a commentator on emerging technology and freelance writer in the greater Seattle area. With a professional background in analytics, big data, mobility, and security that spans both the for-profit and government sectors, her professional interests include artificial intelligence…
