Sunday, May 10, 2020

Designing Artificial Intelligence (Human-Machine Interaction)

Taken from the MIT Sloan Management Review article "Designing AI Systems With Human-Machine Teams," March 18, 2020.

The greatest potential from artificial intelligence will come from tapping into the opportunities for mutual learning between people and machines.


Artificial intelligence (AI) promises to augment human capabilities and reshape companies, yet many organizations try to implement AI without having a clear understanding of how the technology will interface with people.

Assessing the Context of AI Application

Bringing together the formal rationality of AI and the substantive rationality of humans can help companies meet their goals and optimize the chances of success. To do so, managers need to assess the decision-making context on two dimensions: (1) the openness of the decision-making process and (2) the level of risk. These dimensions help determine the teaming options for implementing AI systems and maximizing further learning.
Openness of the decision-making process. A closed decision-making process implies that all the relevant variables have been considered and that there are predefined rules for framing decisions. An open process, in contrast, anticipates that there will be problems that aren’t well defined and that some variables may not be known in advance.
Closed and open decision-making require different approaches with regard to AI. Closed applications have well-established, structured performance indicators and work with a set of fixed variables. Open system decisions require additional information, often from multiple sources.
Assessments as to whether the process should be open or closed may vary. Consider language translation: translations based on preset rules of grammar and meaning are closed. In undefined situations, the process might be assessed as open, and AI systems such as natural language processing will access contextual information and learn how certain experts handle specific situations.
Level of risk. The severity of a risk depends on two elements: the magnitude of the potential harm and the likelihood of its occurrence. An acute risk might be tolerated if the chance of the event occurring is small. Conversely, if the chance is high, the risk may be unacceptable, even if the specific danger is small.
Knowing the risk level can help you decide whether you’ll be comfortable making decisions entirely based on algorithms or whether you’ll want additional resources like human experts on hand to help you handle unexpected situations.

What Role Should People Play?

Combinations of human awareness and AI system design can take different forms, making different configurations possible.
When the contextual factors are well defined, algorithms can “learn” by interacting with the environment through supervised machine learning. In these instances, the need for human involvement is low; humans act not as active decision makers but as foremen.


By combining humans and machines in AI systems, organizations can draw on four main teaming capabilities:
Interoperability. To facilitate the interaction, systems should be able to share the right piece of information and analysis whenever it’s required. An AI system should also be able to specify the precise role that a human needs to play in the interaction.
Authority balance. In any interaction, it’s essential to know which party has final control and when. In low-risk situations, the ability to control the outcome might be enough, but in high-risk situations, the process might require a more immediate response. The system could also revise how authority is assigned in order to prevent actions that could endanger people or assets.
Transparency. Given the need for reinforcement loops, transparent decision-making processes are key to building trust. The human needs to know which variables, rules, and performance parameters the algorithm uses. At the same time, the machine should know which decisions the human is authorized to make in order to integrate them into the learning loops.
Mutual learning. Machines learn from various sources, including the external environment, repetitive patterns, and the expected versus actual outcomes of decisions. However, they can also develop insights from human experience and intuition. This learning takes two forms: when humans make decisions that the machine analyzes and when human experts train the machines with their intuition. Just as machines learn from humans, humans can acquire insights from algorithms. These two-way learning loops increase the overall scope and performance of the AI system.
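The two-way learning loop described above can be sketched minimally. Everything in this sketch — the stub model, the confidence threshold, and the function names — is a hypothetical illustration, not something specified in the article.

```python
# Minimal sketch of a mutual-learning loop. The model, threshold, and
# case format are hypothetical assumptions for illustration only.

def machine_predict(case):
    """Stub model: returns (decision, confidence) from a precomputed score."""
    score = case.get("score", 0.5)
    decision = "approve" if score >= 0.5 else "reject"
    confidence = abs(score - 0.5) * 2  # 0 = uncertain, 1 = certain
    return decision, confidence

def human_review(case, machine_decision):
    """Stub expert: overrides the machine when domain knowledge disagrees."""
    return case.get("expert_label", machine_decision)

def mutual_learning_loop(cases, confidence_threshold=0.6):
    training_examples = []  # human decisions the machine later learns from
    decisions = []
    for case in cases:
        decision, confidence = machine_predict(case)
        if confidence < confidence_threshold:
            # Loop 1: the human decides, and the machine records the
            # (case, decision) pair as a new training example.
            decision = human_review(case, decision)
            training_examples.append((case, decision))
        # Loop 2: the human sees each machine decision, which is how
        # people in turn acquire insights from the algorithm.
        decisions.append(decision)
    return decisions, training_examples
```

The point of the sketch is the feedback structure: low-confidence cases route to the human, and the human's answers flow back into the machine's training set, widening the system's scope over time.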

Configurations of Teaming Capabilities

Humans and machines can work together to make decisions in four different ways.


Machine-based AI systems. In settings where machine-based designs are central and no surprises are expected, machines can perform tasks independently, with humans playing only supervisory roles and making changes only when necessary. Since potential mistakes are visible and do not pose major risks, interoperability is needed for audit purposes only, and transparency is not required.
Sequential machine-human AI systems. In other settings, machines are capable of performing many of their required tasks independently. But humans need to do more than monitor the outcomes — they need to be prepared to step in to deal with unplanned contingencies. This requires humans to have situational awareness and to be ready to identify events that extend beyond the capacity of the machine and intervene. To know when such interventions are required, the AI system needs to have a level of transparency.
Cyclic machine-human AI systems. In settings where the processes are open and low-risk, organizations have wide latitude for shifting decision-making authority from machine to human and vice versa. Even though a high degree of transparency may be needed, as long as the AI system is operating smoothly, the human agents’ task is to monitor the outcomes without intervening in the activity. Their role is that of a coach: to train the AI system by providing new parameters and generally improving the performance.
Human-based AI systems. Decision processes that are both open and high-risk call for human-based AI systems, with the final authority in the hands of humans. Although the AI systems may have enough stored and processed data to make educated guesses, the risk of something bad happening can’t be overlooked. Therefore, experts must maintain high situational awareness. It’s critical, moreover, that the various decision rationales be sufficiently clear and transparent to advance the learning of both humans and machines.
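The four configurations above can be read as a 2x2 over the two contextual dimensions from the earlier section. The open/low-risk and open/high-risk pairings are stated in the text; assigning the first two configurations to closed processes is an inference from their descriptions, so treat this mapping as a sketch.

```python
def select_configuration(process_open: bool, high_risk: bool) -> str:
    """Map the two contextual dimensions to a teaming configuration.

    The cyclic (open, low-risk) and human-based (open, high-risk) pairings
    come from the text; pairing the machine-based and sequential
    configurations with closed processes is an inference.
    """
    if not process_open and not high_risk:
        return "machine-based"             # no surprises; humans only supervise
    if not process_open and high_risk:
        return "sequential machine-human"  # humans step in on contingencies
    if process_open and not high_risk:
        return "cyclic machine-human"      # authority shifts back and forth
    return "human-based"                   # final authority stays with humans
```

Framed this way, the assessment exercise from the first section feeds directly into the choice of configuration: answer the two questions about the decision context, then pick the quadrant.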

Successful AI implementations should draw on a variety of configurations that can be adapted to the scenario at hand, depending on the environment and human factors.
