When it comes to smart buildings, one of the most significant emerging technologies is artificial intelligence, or AI, which is often discussed alongside the related field of machine learning.

According to Siemens Building Technology, “Artificial intelligence is the ability for computers to learn things that they were not explicitly taught. Of course, by ‘taught,’ we mean programmed. AI is the result of machine learning. Artificial intelligence and machine learning are related but not synonymous.”

Some examples of how smart buildings can use AI include more efficient energy management, systems optimization, improved experiences for building occupants and, of course, security applications. In a recent article on AI in SDM Magazine, Senior Editor Rodney Bosch wrote, “With use cases that cut across the security ecosystem, AI is providing the ability to capture incoming data from edge devices and leverage that information to uncover newfound operational efficiencies, solidify standard operating procedures for security departments, and deliver greatly increased risk management awareness to organizations of all sizes.”

But as is often the case with emerging technology, AI brings its own challenges and concerns.

To help organizations mitigate the risks associated with AI, the National Institute of Standards and Technology (NIST) has created a new framework for building trust in AI.

According to the executive summary, “Artificial intelligence (AI) technologies have significant potential to transform society and people’s lives — from commerce and health to transportation and cybersecurity to the environment and our planet. AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world. AI technologies, however, also pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet. Like risks for other types of technology, AI risks can emerge in a variety of ways and can be characterized as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact.” 

AI systems, for example, may be trained on data that can change significantly and unexpectedly over time, and both the systems and the contexts in which they are deployed can be complex, making it difficult to detect and respond to failures when they occur. “AI risk management can drive responsible uses and practices by prompting organizations and their internal teams … to think more critically about context and potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust,” the executive summary states.
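As a rough illustration of why shifting data is hard to manage in practice, the sketch below shows one simple way a building operator might watch a model input for drift away from the conditions it was trained on. This is a hypothetical example, not part of the NIST framework; the feature (occupancy counts), the window sizes and the threshold are all assumptions.

```python
from statistics import mean, stdev

def drift_alert(baseline, recent, threshold=3.0):
    """Flag when the recent mean of an input feature drifts far from the
    baseline the model was trained on (a simple z-score-style check)."""
    base_mean, base_std = mean(baseline), stdev(baseline)
    if base_std == 0:
        return bool(recent) and mean(recent) != base_mean
    shift = abs(mean(recent) - base_mean) / base_std
    return shift > threshold

# Hypothetical example: occupancy counts feeding an energy-management model.
training_window = [42, 45, 44, 47, 43, 46, 44, 45]
live_window = [70, 68, 73, 71, 69, 72, 70, 74]  # usage pattern has changed
if drift_alert(training_window, live_window):
    print("Input data has drifted from training conditions; review the model.")
```

A check like this does not explain why the data changed; it only surfaces the kind of silent shift the framework warns can undermine an AI system after deployment.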

The Artificial Intelligence Risk Management Framework (AI RMF 1.0) is designed to offer organizations a resource for designing, developing, deploying or using AI systems. NIST produced the framework at the direction of Congress and in close collaboration with the private and public sectors. It is intended to be a “living document” that can adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from their potential harms.

The AI RMF is divided into two parts: the first covers how organizations should frame the risks associated with AI and outlines the characteristics of trustworthy AI systems; the second describes four functions (govern, map, measure and manage) to help organizations address the potential risks of using AI in practice.
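To make the four functions a little more concrete, here is a minimal, hypothetical sketch of how a building operator might track its AI RMF activities. Only the function names (govern, map, measure, manage) come from the framework; the task list, field names and status values below are illustrative assumptions, not part of the NIST specification.

```python
from dataclasses import dataclass, field

# The four functions named in the AI RMF: Govern, Map, Measure, Manage.
FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class AIRiskRegister:
    # Hypothetical task tracker keyed by AI RMF function.
    tasks: dict = field(default_factory=lambda: {f: [] for f in FUNCTIONS})

    def add_task(self, function: str, description: str, done: bool = False):
        if function not in FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {function}")
        self.tasks[function].append({"description": description, "done": done})

    def open_items(self):
        """Return outstanding tasks grouped by AI RMF function."""
        return {f: [t["description"] for t in items if not t["done"]]
                for f, items in self.tasks.items()}

# Hypothetical usage for a smart-building deployment.
register = AIRiskRegister()
register.add_task("govern", "Assign accountability for the occupancy-prediction model")
register.add_task("map", "Document where camera analytics data is collected and stored")
register.add_task("measure", "Track false-alarm rates for the AI video security system")
register.add_task("manage", "Define a rollback plan if the energy-optimization AI misbehaves")
print(register.open_items())
```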

Read more about the framework on the NIST website.