ETHICS
Editor’s note: The case study in this article is fictitious and is intended to highlight ethical issues in the practice of industrial hygiene. Any resemblance to real people or organizations is coincidental. The opinions expressed are those of the authors and do not necessarily reflect the opinions of AIHA, The Synergist, the Joint Industrial Hygiene Ethics Education Committee, or its members.
Thoughts on “AI in OEHS”
Alan Leibowitz’s article “AI in OEHS: Ethics of the Future” in the September Synergist identifies ethical concerns with the use of artificial intelligence by occupational and environmental health and safety professionals. These concerns include data privacy, the possibility of bias in AI outputs, and the need for accountability, transparency, and human oversight when AI systems are incorporated into OEHS practice.
As the article explains, AI tools powered by large language models (LLMs) can respond to natural-language questions on a wide range of subjects, but these responses cannot be accepted at face value. These systems do not inherently possess “knowledge” or “facts” in the way humans understand them. An LLM-based tool is optimized to provide the most likely response based on its training data, not the most correct one.
CASE STUDY
A fictional case study presents the story of Tobor, a young industrial hygienist at a large environmental consulting firm. Assigned the tedious task of collecting historical corporate data for a new client, Tobor turns to AI to help prepare his report. He prompts the system to summarize the data, with particular attention to applicable regulations and exposure hazards that require action. After a brief review, he submits the report to management, who base their recommendations to the client on it. Later, however, discussions with the client reveal that some of the standards identified in the report do not exist or were misapplied to the issue being addressed.
Questions for discussion. How could Tobor have better incorporated AI into his task? What obligations did he have to let the team know he was using AI to complete his assigned task? How could he have more effectively validated the data? Does AI have a place in OEHS practice?
A READER RESPONDS
Let me start by saying that I am a huge fan of technology and am highly confident in its ability not only to make work easier but also to make the data I collect more digestible for all levels of an organization.
Having said that, I would not rely on AI to author a report or summarize a dataset. We should all be experienced and knowledgeable enough to do that. Although I haven’t done so yet, I might paste my report into one of the AI tools to compare my writing style with what it can produce. If the AI product has sentences, phrases, or paragraphs that more succinctly illustrate my point, I would consider using those or editing what I have based on what the AI generated. In this scenario, I still control the message. The story I am telling will still be told by me.
The utility I see in AI is in doing what I am not trained to do but am knowledgeable enough to validate. For example, I could paste my dataset into AI to produce the Python code or Excel formulas needed to create a specific type of chart. Of course, shame on me if I don’t make sure that the message in the chart is consistent with the message in the body of my report.
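To make that concrete, here is a hypothetical sketch of the kind of Python charting code an AI tool might return for such a request. The file name and column labels (task, exposure_ppm, oel_ppm) are invented for this illustration; the point is that I can read, run, and verify every line before the chart goes anywhere near a report.

    # Hypothetical AI-generated charting code. The dataset name and
    # column labels are invented for this example and would need to
    # match the real data before use.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("exposure_data.csv")  # assumed input file

    fig, ax = plt.subplots(figsize=(8, 5))
    # Bars show measured exposures per task; the dashed line marks
    # the applicable occupational exposure limit for comparison.
    ax.bar(df["task"], df["exposure_ppm"], label="Measured exposure")
    ax.plot(df["task"], df["oel_ppm"], color="red", linestyle="--",
            label="Occupational exposure limit")
    ax.set_xlabel("Task")
    ax.set_ylabel("Concentration (ppm)")
    ax.set_title("Measured Exposures vs. Occupational Exposure Limits")
    ax.legend()
    fig.tight_layout()
    plt.show()

If the bars tell a different story than the body of my report does, the fix happens before submission, not after.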
I am comfortable using AI to help me do my work. I am not comfortable using AI to do my work for me.
Great discussion—one that brings to light some potentially serious professional and ethical issues.
Rick Newman, CIH
RESOURCE
The Synergist: “AI in OEHS: Ethics of the Future” (September 2023).