ROB AGNEW, MS, CIH, CSP, REM, is an assistant professor in the Fire Protection and Safety Engineering Technology Program at Oklahoma State University in Stillwater, Oklahoma. MARK KATCHEN, CIH, FAIHA, is the managing principal of The Phylmar Group Inc. in Los Angeles, California. ALAN LEIBOWITZ, CIH, CSP, FAIHA, is the president of EHS Systems Solutions LLC, chair of the Joint Industrial Hygiene Ethics Education Committee, past chair of the Board for Global EHS Credentialing, and a past Board member of AIHA.
Send feedback to The Synergist.
AI in Academia: How Will It Change OEHS Practice?
Editor’s note: The opinions expressed in this article are those of the authors and do not necessarily reflect the opinions of AIHA, The Synergist, the JIHEEC, or its members.
Artificial intelligence (AI), or more properly generative AI, has become a hot topic on college campuses. These “large language models” are capable of sifting through enormous quantities of online data and providing summaries in an easily digestible format. What student, or professional for that matter, would not be interested in such a powerful tool? However, as Uncle Ben from the Spider-Man comics reminds us, “With great power comes great responsibility.”
A favorite assignment of college professors is the term paper. Naturally, many students are drawn to the perceived ease of having a digital assistant author their paper for them. Many universities even offer courses that teach students how to write effective prompts, enabling AI systems to return more useful results. These “prompt engineering” courses range from six-week certificate courses to full-semester, credit-earning classes. Should we eschew this technology as “taking the easy way out” or embrace it as the latest in a series of technological aids to writing research papers?
Technology and Ethics
Seasoned OEHS professionals certainly remember using a card catalog at the library to find a book or article. When digital indexing of articles became available, was this technology shunned as being the easy way out? How do we feel about the wide availability of full articles on scholarly search engines? For decades, many students and professionals have used spell checkers and grammar checkers. These systems have steadily grown in sophistication. Is generative AI just the next step in the steady march of technology?
Many people, students included, would suggest that AI is no different than card catalogues, digital indexes, scholarly search engines, and other helpful tools. Others believe that generative AI and large language models are different in that they do more than deliver data; they perform a synthesis. Does this difference give rise to an ethical dilemma? As with most ethical questions, context matters.
For context, some scenarios may be illustrative. Consider a busy OEHS professional asked to discuss at an upcoming meeting the potential impacts of PFAS on their business. PFAS is a complex, rapidly developing subject. Would it be out of bounds for the OEHS professional to prepare for the presentation by using a large language model to summarize current literature on the health and environmental impacts of PFAS? What if a college professor gave similar instructions for an upcoming in-class discussion?
A more challenging scenario may be an OEHS consultant who prompts a generative AI tool to write a 250-word summary on the health effects of asbestos exposure to include in a report for a client. We can easily imagine a college professor giving similar instructions for a homework assignment. Does the use of AI for self-learning differ ethically from using AI to produce a work product?
Presently, many universities specify three options for controlling the use of generative AI in specific courses:
1. Do not allow the use of generative AI.
2. Allow students to request permission to use generative AI tools in some circumstances; attribution would be required.
3. Allow full use of generative AI tools with attribution.
Option 1 may be appropriate for an introductory course where the professor feels the students need to chew on the material and develop an organic understanding of the information. Option 2 may be the best choice for a graduate-level course where rapid assimilation of new data may be necessary to supplement the solid foundation developed during undergraduate studies. Some might see option 3 as an abdication of responsibility. But there may be another view, one that OEHS professionals might embrace.
From Author to Editor?
Option 3, the full use of generative AI, may require a shift in our understanding of the author's role. Presently, the author of a term paper or industrial hygiene survey report is thought to be the generator of the knowledge to be transferred. What if we instead embrace the role of editor?
Any professor with graduate students knows this role well. A student develops a manuscript, and the adviser edits it in preparation for submission to a journal. Is this process much different from a CIH who reviews and edits reports written by junior IHs before they are delivered to a client? Could generative AI be a “junior IH” on a team of IH professionals?
This notion, while intriguing, does raise some ethical questions. Is there a duty to disclose this non-living “author” to the client or for posterity?
We can learn some foundational information regarding ethics by examining guidance on the citation of AI published in April 2023 on the American Psychological Association blog. Since large language models do not produce a stable text that others can retrieve, the product of the search prompt is more akin to an interview. Therefore, according to the blog post, it’s not appropriate to cite the output as one would a book, article, or website. Nor does AI-generated output qualify as personal communication since no person is involved in transmitting it. APA therefore suggests citing the author of the algorithm—that is, the company that created the large language model. For example, text generated by ChatGPT would be credited to OpenAI.
While these guidelines specify that the creator of the algorithm should be cited as the author of AI-generated text, APA is explicit that AI can’t be named as an author on scholarly publications. Authors who publish in APA journals are required to explain any AI usage in their paper’s methods section and to upload the full text of AI output as supplementary material. APA also distinguishes between AI use and technologies such as spell checkers and grammar checkers, which do not need to be disclosed.
So far, the field of referencing and attribution has not developed separate guidance for situations where an individual edits the output of an AI tool. While the examples in the APA blog post concern material taken verbatim from the algorithm’s output and placed in quotation marks, the guidance would presumably also apply to summaries and paraphrases.
Legal Matters
Once an author has decided to use AI in the generation of a report, some legal questions need to be considered. Since the algorithms pull from many sources, they create a derivative work. Does this have copyright implications? Some generative AI programs can be instructed to provide citations of material, but their maturity and reliability are still in question.
The next legal question is how the OEHS professional will address questions about the use of AI during depositions. Is editorial review of the AI output sufficient to withstand scrutiny? This may lead to future ethical guidance that some sections of a report may be appropriate for AI assistance while others are not. A summary of the health effects of a given chemical or physical agent may be within the algorithm's wheelhouse, whereas recommendations for exposure control strategies may not.
AI is no longer coming; it’s here. As students leave college and enter the workforce, they will bring AI with them. As with any new technology, new ethical challenges are emerging. OEHS professionals need to begin having discussions within their organizations about how to use AI as an efficient tool and how to provide guidance on its ethical use.
For Discussion
Should OEHS professionals consider using AI tools as “junior IHs”?

American Psychological Association: “APA Journals Policy on Generative AI: Additional Guidance” (November 2023).
American Psychological Association: “APA Publishing Policies” (August 2023).
APA Style Blog: “How to Cite ChatGPT” (April 2023).