A practical list of embodied AI safety concerns: Additional eAI ethical issues (Part 5)
In this final subsection we briefly survey some other ethical issues that seem likely to be relevant for embodied AI (eAI) systems.
Saving lives vs. costing lives and supervised operation. An eAI system that operates in a Supervised Mode (a human supervising the AI) will face challenges over whether it is seen as saving lives or costing lives. If it intervenes when a human operator has made a mistake, it can reasonably be said to be saving lives. However, if it fails to intervene when a human operator has made a mistake, whether it is seen as costing lives will depend on whether observers had a reasonable expectation that the eAI functionality should have intervened, and whether the human operator had grown complacent due to a reasonable expectation that it would intervene. Even if the eAI system legitimately saves many lives, it will be tricky to handle criticism for the times it fails to do so, or for the times it mistakenly activates a risk mitigation feature that results in indirect harm (e.g., phantom braking). The more the eAI functionality is perceived to automate a task, the more designers should expect criticism for imperfect risk mitigation, even if the Concept of Operations is described as assisting a human operator.
Misleading users about capabilities. Many new product categories, and AI/ML technology in particular, thrive in large part on a pervasive hype narrative. However, that same hype can mislead people into over-trusting eAI capabilities, leading to harm. It is important to balance the hype needed to raise funding and build market interest against the likely harm from unintentional misuse by over-eager customers who take the hype to be a realistic description of product capabilities. A claim of being “safer than a human” isn’t really true if the mishap data depends on a human intervening to correct AI mistakes.
Moving fast and breaking things. Tech innovators love to say they want to move fast and break things, but when the things getting broken are people (i.e., harm is done), that is a problem. In eAI systems this tends to show up as deploying based on limited training data that is likely to be missing rare, high-severity events. While the only practical way to evolve a sophisticated eAI capability might be to acquire training data from early adopters, there needs to be a way to ensure those adopters (and bystanders) are not put at unreasonable risk by the deployment of immature eAI capabilities.
Compromise of copyright and privacy. The thirst for training data incentivizes designers to cut corners on, or outright violate, copyright and privacy protections.1 Beyond depriving creators of income through lost sales,2 this trend might also tend to cut off the supply of training data needed for continued advancement of AI/ML technology. This issue is especially problematic for LLMs.
Resource consumption. While only indirectly related to everyday safety concerns, training and operating AI/ML systems at scale can have significant environmental impact in terms of electrical energy and water consumption.3 Some eAI systems might rely on a data center for ML computations, while others must perform them locally. Regardless of where the computation happens, the energy to compute has to come from somewhere. Whether the energy costs are justified will depend on the energy sources and the value provided by the eAI functionality.
Weaponization of BS. It is important to recognize the deep, lasting societal harm that might be done by the weaponization of generative AI and LLM-created BS. eAI systems create the potential to spread those effects beyond computer-like interfaces (desktop computers, laptops, tablets, smartphones) into other interfaces that permeate everyday life. Few people think about the potential of their vacuum cleaner to spy on them,4 or the data being collected by their smart TV.5 One should not underestimate the harm that can be done by using AI/ML systems to manipulate people’s perception of what is true, alter history with plausible fabrications, infer people’s thoughts from externally visible cues, and scale up surveillance of an entire society.6
This post is a draft preview of a section of my new book that will be published in 2025.
Link to the start of this multi-part series.
The area of copyright and AI training is far from settled. For one view, see Gervais et al., 2024: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4963711
Content creators have undertaken lawsuits, such as one by the Authors Guild. See Brittain, 2023: https://www.reuters.com/legal/john-grisham-other-top-us-authors-sue-openai-over-copyrights-2023-09-20/
See Crawford, 2024: https://www.nature.com/articles/d41586-024-00478-x
An autonomous robotic vacuum cleaner knows a lot about your house, resulting in potentially saleable data. See Astor, 2017: https://www.nytimes.com/2017/07/25/technology/roomba-irobot-data-privacy.html
“In the end the Party would announce that two and two made five, and you would have to believe it.” George Orwell, Nineteen Eighty-Four, published in 1949. This novel included a theme of constant surveillance using televisions.
See: https://en.wikipedia.org/wiki/Nineteen_Eighty-Four