Keeping Pace with the AI Revolution
Considerations for OHS Professionals
BY JAY A. VIETAS
Editor’s note: The mention of specific companies, products, or services in this article does not constitute endorsement by AIHA or The Synergist.
If you are waiting for the artificial intelligence (AI) revolution to begin, you might be surprised to hear it is already underway. AI and autonomous systems are part of everyday life, and the pace of change is only expected to increase. According to Stanford’s 2021 AI Index Report, 101,000 AI-related patents were issued in 2019, up from 78,000 in 2018. In 2020, AI startup companies received more than $40 billion in private investment across the globe, including almost $24 billion in the United States, an increase of 9.3 percent since 2019. Furthermore, AI is becoming more affordable due to increases in computer speed and storage space. For example, the cost to train an image recognition system (software that identifies and categorizes images) decreased from $1,100 in 2015 to $7.43 in 2020.
AI can improve the efficiency of tasks performed by programming and recognition tools, many of which are integrated seamlessly into everyday life. Can’t decide what movie or show to watch? AI technology can suggest one for you. Don’t have the time to vacuum the living room? An AI-controlled robotic vacuum can take care of that. The same technology displays advertisements catering to your interests as you use social media, browse the web, or use smartphone apps. While some people welcome and appreciate these applications of technology, others might find targeted advertising and the intrusion of advanced technologies into their lives unnerving and unwelcome.
So, why should occupational health and safety (OHS) professionals be aware of this technology? NIOSH and other safety and health organizations recognize that AI is increasingly becoming part of many operations in the workplace. Accordingly, OHS research is beginning to focus on how AI may influence work practices and impact the safety, health, and well-being of workers.
AI AND THE WORKPLACE
Some companies are already using AI to assist with personnel screening and hiring processes; optimize energy management through the control of heating, ventilation, and air conditioning systems; and determine qualified candidates for loan applications. Companies such as Lyft and Uber use AI to dictate rideshare driver allocations and routes, while AI cloud company DataRobot provides services that can predict when equipment should receive preventive maintenance. Solutions offered by Intenseye, an AI-powered environmental health and safety company, can monitor the location and activities of employees to identify those who may be working alone, working in remote areas, or performing unsafe activities. A software application called AUDREY, in development by NASA researchers, is intended to collect data and use machine learning technology to improve firefighters’ situational awareness and guide them to respond to fires more effectively and safely. In a June 2021 EHS Today article, Ryan Quiring, the CEO of workplace safety management software provider SafetyTek, discusses how AI can be used to optimize the management of corporate safety programs. (More information about these uses of AI can be found in the resources listed below.)
AI technology can make processes more efficient, organize work patterns more effectively, and identify hazards. For employees with the right training, AI technology can intervene to make the workplace safer. AI can even identify patterns that enable OHS professionals to focus on specific worker practices that may be more likely to result in injury or illness.
There are many reasons for OHS professionals to be optimistic about AI. However, since AI systems will impact workers and work processes either directly or indirectly, it is important to acknowledge that there is always a risk that any system may not operate as intended. When relying on AI, failure may originate from the data fed into the system or from the system programming itself. Data errors, data corruption, and deficiencies or gaps in data, whether due to negligence or malice, may lead to serious unanticipated errors; indeed, attacking algorithms by feeding faulty data into them is a topic of interest in cybersecurity. If faulty data are not the issue, it is possible that the programming of a system may result in unexpected outcomes. To ensure that AI systems or processes perform as intended, it is necessary to maintain code and hardware, train algorithms to ensure they generate good results, and perform regular testing.
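One practical defense against data-driven failures is a validation gate in front of the system that quarantines suspect records before an algorithm ever sees them. The Python sketch below is a minimal, hypothetical illustration; the field names and plausible ranges are assumptions for this example, not part of any particular product.

```python
# Minimal sketch of a data-quality gate for records fed to an AI system.
# Field names and valid ranges are hypothetical; adapt them to your own pipeline.

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one input record."""
    problems = []
    required_fields = ["sensor_id", "timestamp", "temperature_c"]

    # Flag missing or null fields.
    for field in required_fields:
        if record.get(field) is None:
            problems.append(f"missing field: {field}")

    # Flag values outside a plausible physical range.
    temp = record.get("temperature_c")
    if temp is not None and not (-40.0 <= temp <= 85.0):
        problems.append(f"temperature out of plausible range: {temp}")

    return problems

records = [
    {"sensor_id": "A1", "timestamp": "2021-06-01T08:00", "temperature_c": 21.5},
    {"sensor_id": "A2", "timestamp": "2021-06-01T08:00", "temperature_c": 412.0},
    {"sensor_id": "A3", "timestamp": None, "temperature_c": 20.1},
]

for r in records:
    issues = validate_record(r)
    if issues:
        # Quarantine bad records for review rather than training or acting on them.
        print(r["sensor_id"], "rejected:", issues)
```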
The potential unintended consequences resulting from faulty system programming depend on the process in which an AI system is employed and may impact physical, chemical, biological, or even psychological exposures. Although not all AI-related systems are able to cause these types of exposures or harm, all systems will impact workers in some form or fashion.
AI tools might function as designed, but their design may not adequately consider the people or particular groups of people working with the technology. For instance, data or programming may not appropriately represent gender, age, race, or cultural groups (the well-documented bias of face-recognition algorithms toward lighter-skinned individuals is one example of this issue). Additionally, workers’ skills or training may not match a system’s requirements, or assumptions about workers’ responses made while designing a system may be incorrect. Alarms that interrupt normal processes, as well as system behaviors that are frustrating to deal with or that limit workers’ ability to take mental or physical breaks, increase the risk that workers will respond in ways the system designers didn’t anticipate. This is especially true for monitoring systems that are applied inequitably across the workforce.
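One concrete check for the representation problems described above is to compare a system’s accuracy across worker groups on labeled evaluation data. The sketch below assumes such labeled data exist; the group names and outcomes are hypothetical.

```python
from collections import defaultdict

# Hypothetical evaluation records: (worker group, whether the prediction was correct).
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1, False as 0

for group in sorted(totals):
    accuracy = correct[group] / totals[group]
    print(f"{group}: accuracy {accuracy:.0%} over {totals[group]} samples")

# A large accuracy gap between groups is a signal to investigate
# whether the training data adequately represents every group.
```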
Concerns related to security, privacy, confidentiality, and data management policies and procedures may make workers feel uncomfortable, as well as create additional cybersecurity-related risks. Furthermore, some workers may perceive AI systems in general as threatening to their current or future employment opportunities.
HOW DO OHS PROFESSIONALS FIT IN WITH AI?
NIOSH and other organizations in the OHS community are in the initial stages of answering questions about OHS and AI, such as:
• What is the role of OHS professionals with respect to the operations and impact of AI systems on workers?
• How should OHS professionals evaluate and, when necessary, control AI systems?
• What are the best practices and tools that OHS professionals should use when dealing with AI?
• What are the skills necessary to prepare future OHS professionals to best address concerns and exploit opportunities associated with AI systems?
A good place for OHS professionals to start is by applying the broad principles of process assessment: hazard identification, evaluation, and control. Look for AI systems involved with work in your organization and determine their likelihood of causing harm. Determine the people who may be affected and the work processes supported by these systems. One approach may be to evaluate AI systems that are involved with critical or potentially dangerous processes. Another approach is to begin by evaluating systems that interface with the most workers most frequently. The third and perhaps most effective approach takes into account the types of AI-supported processes within your workplace and the number of personnel affected by them.
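The third approach can be made concrete with a simple priority score that weighs the criticality of the supported process against the number of workers affected. The weighting below is purely illustrative; any real scheme would need to be calibrated to your organization.

```python
# Illustrative prioritization of AI systems for OHS evaluation.
# "criticality" is scored 1 (low consequence) to 5 (potentially dangerous process);
# system names and values are hypothetical.
systems = [
    {"name": "HVAC optimizer",      "criticality": 2, "workers_affected": 300},
    {"name": "Lone-worker monitor", "criticality": 4, "workers_affected": 40},
    {"name": "Forklift routing",    "criticality": 5, "workers_affected": 25},
]

def priority(system: dict) -> int:
    # Hypothetical weighting: criticality dominates; worker count breaks ties.
    return system["criticality"] * 100 + system["workers_affected"]

# Evaluate the highest-scoring systems first.
for s in sorted(systems, key=priority, reverse=True):
    print(f"{s['name']}: priority score {priority(s)}")
```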
For each AI system you evaluate, attempt to understand how workers interface with the system. This can be accomplished by speaking with workers who use the system, observing them in action, and discussing the system’s implementation with the programmers or data scientists behind it. You may need to include everyone involved with the AI system. Determine the demands that the system places on workers and whether these are reasonable or likely to be hazardous. Consider the following questions (a simple way to record the answers is sketched after this list):
• Can workers reasonably be expected to safely perform their tasks for the needed duration of time in the environment where the AI system operates?
• Are workers being asked to respond to alarms that interrupt other work processes, potentially inducing stress and mental fatigue?
• Does the system impede workers’ ability to maintain appropriate situational awareness?
• Does the system enable workers to take breaks, including unscheduled bathroom breaks?
• Does relying on the system always being enabled and functioning properly create a false sense of security?
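Questions like these can be recorded as a structured checklist so that evaluations are documented consistently from one system to the next. A minimal sketch, with the questions above abbreviated:

```python
# Minimal checklist sketch; items are abbreviated from the question list above.
QUESTIONS = [
    "Workers can safely perform tasks for the needed duration",
    "Alarms do not induce undue stress or mental fatigue",
    "Situational awareness is maintained",
    "Breaks, including unscheduled ones, are possible",
    "No false sense of security from relying on the system",
]

def review(system_name: str, answers: list[bool]) -> None:
    """Print each checklist item that failed for a given AI system."""
    for question, ok in zip(QUESTIONS, answers):
        if not ok:
            print(f"{system_name}: concern - {question}")

# Hypothetical review results for one system.
review("Lone-worker monitor", [True, False, True, False, True])
```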
It may be worthwhile to review injury and illness logs to determine if health or safety outcomes have changed since AI systems have been implemented. A review of worker turnover rates might provide insight into the effects of AI systems on workers as well.
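One simple way to act on those logs is a before-and-after comparison of recordable incident rates, normalized per 200,000 hours worked (the OSHA convention for a rate per 100 full-time workers). The counts below are hypothetical.

```python
def incident_rate(incidents: int, hours_worked: float) -> float:
    """OSHA-style recordable incident rate per 100 full-time workers."""
    return incidents * 200_000 / hours_worked

# Hypothetical counts before and after an AI system was introduced.
before = incident_rate(incidents=12, hours_worked=400_000)  # 6.0
after = incident_rate(incidents=7, hours_worked=380_000)    # ~3.7

print(f"rate before: {before:.1f}, rate after: {after:.1f}")
# A change in either direction warrants follow-up investigation;
# a rate shift alone does not establish that the AI system caused it.
```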
Once you’ve identified potential hazards, the next step is to perform a risk assessment. While additional research is needed on the best methods or approaches to evaluate AI systems, traditional approaches are likely to remain useful. Risk assessment is a function of understanding the likelihood of unexpected outcomes and evaluating the consequences or severity of those outcomes. For some hazards, you may find it appropriate to work with data scientists or computer programmers to understand where there may be gaps in an AI system’s data, or you might review model training data to develop estimates for the likelihood of particular outcomes. Risk assessments should be used to determine the effects of AI use and AI failure. Consider using analytical tools such as failure modes and effects analysis (FMEA) or “what if” hazard analysis to identify where failures may cause situations that adversely impact workers’ health, safety, or well-being.
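In a classic FMEA, each failure mode is scored for severity, likelihood of occurrence, and detectability, and the product of the three scores (the risk priority number, or RPN) ranks where to act first. A minimal sketch with hypothetical failure modes for an AI-based worker-monitoring system:

```python
# Minimal FMEA sketch: severity, occurrence, and detection each scored 1-10.
# Failure modes and scores are hypothetical examples.
failure_modes = [
    ("Camera feed drops; monitoring silently stops", 9, 3, 7),
    ("Model misses worker in low light",             8, 5, 6),
    ("False alarms interrupt critical tasks",        5, 7, 2),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: higher values should be addressed sooner."""
    return severity * occurrence * detection

# Rank failure modes from highest to lowest RPN.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for description, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}: {description}")
```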
If controls are needed, OHS professionals should work with the teams tasked with overseeing the AI systems’ development and implementation. In particular, OHS professionals should approach solutions or interventions using the hierarchy of controls and work closely with data scientists or computer programmers to understand how data are used to “feed” or “train” algorithms. While it helps to understand how computer programs work, you do not need to be a computer scientist to address OHS issues associated with AI systems. Explain to the teams of programmers or data scientists where an AI system may need to be modified or adjusted to address situations or circumstances that cause worker safety and health concerns. Once the adjustments are implemented, be sure to observe and evaluate the systems to ensure that the modifications produce the desired outcomes.
All AI systems should be controllable by humans and designed to address situations in which systems or processes may need to be disabled, overridden, recalibrated, or reprogrammed by personnel due to recognized faults or malfunctions.
ADDITIONAL CONSIDERATIONS
OHS professionals should establish and maintain positive relationships with the people who work with or are affected by AI systems. Typically, OHS program success is a function of good relationships with workers, managers, and maintenance personnel. This will likely continue to be true. Implementing AI systems that maximize positive impacts on the workforce and minimize health and safety concerns will require dialogue with computer programmers, data scientists, and other information technology personnel. For some OHS programs, this will be easy to implement; for others, it will require educating a whole new group of professionals on the art and science of health and safety.
It’s also important for OHS professionals to recognize the potential for groups of workers to be impacted by the same AI system in different ways. Addressing concerns related to health and safety equity may require OHS professionals to develop novel methods of measuring and controlling risk in the workplace and will certainly require coordination between OHS operations and computer programmers.
As there is still much to learn about implementing effective workplace AI systems, creating an environment in which failure or near failure is treated as an opportunity to learn and improve will pay dividends in the long run. When appropriate, OHS professionals can use tools such as root cause analysis to understand why a system did not operate as intended. OHS professionals should adjust policies and procedures based on these events and celebrate these adjustments as opportunities for professional and system improvement. Likewise, OHS professionals should identify success stories and best practices to establish good policies and procedures while remaining open-minded in case new facts emerge that suggest alternative techniques.
We are living and working in an era of unprecedented change, buoyed by technologies that seem to anticipate a more promising and productive economy. AI and autonomous systems will play pivotal roles in defining the workplace of the future, in which there is hope for improved worker health, safety, and well-being. In the meantime, NIOSH and other organizations are committed to identifying and sharing best practices and seeking opportunities to understand and address OHS concerns associated with AI and autonomous systems. For more information on this subject, please see the NIOSH Science Blog posts on the topic of artificial intelligence.
JAY A. VIETAS, PhD, CIH, CSP, is chief of the Emerging Technologies Branch in the NIOSH Division of Science Integration.
Disclaimer: The findings and conclusions in this report are those of the author and do not necessarily represent the official position of the National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention.
Send feedback to The Synergist.
RESOURCES
The Alan Turing Institute: “Understanding Bias in Facial Recognition Technologies: An Explainer” (PDF, 2020).
ASQ Quality Press: Failure Mode and Effect Analysis: FMEA From Theory to Execution, 2nd ed. (2003).
Cambridge University Press: Human Error (1990).
CareerCert: “How AI and Technology Are Revolutionizing Firefighting.”
DataRobot: “Predictive Maintenance.”
EHS Today: “Smarter Than You Think: AI’s Impact on Workplace Safety” (June 2021).
Intenseye.
Novatio Solutions: “Uber and Lyft Are Taking Artificial Intelligence Along for the Ride.”
Professional Safety Journal: “The Power of What If: Assessing and Understanding Risk” (2020).
Stanford Institute for Human-Centered Artificial Intelligence: “2021 AI Index Report.”