The Frame Problem
In brief
The frame problem is the challenge of identifying and modelling the relevant features and context of a situation, and of getting an agent to act on those without having to consider all the irrelevant facts as well.
More in Detail
The frame problem originated with McCarthy and Hayes [1] back in the sixties, but it has been appropriated many times over. The frame problem, as McCarthy and Hayes described it, was mostly about representationalism. They wondered how we could describe an update function that does not require a multitude of unnecessary statements about unaffected facts. It was, and is, a pressing question. We could have an agent capable of acting on certain parts of the world, say, painting pieces of paper [2], but when we give it other actions, these actions may interact. How does the machine know that moving said paper won't also change its colour? Within representationalism, the answer seemed to be adding a multitude of so-called frame axioms: statements asserting what each action does not change, most of which only matter in serious edge cases.
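To make the representational issue concrete, here is a schematic sketch in the style of the situation calculus (the notation is illustrative, not McCarthy and Hayes' exact formulation). An effect axiom states what an action changes:

$$
\mathit{colour}(x,\, \mathit{do}(\mathit{paint}(x, c),\, s)) = c
$$

while frame axioms must state, pair by pair, what each action does not change, for instance that moving does not affect colour and painting does not affect position:

$$
\mathit{colour}(x, s) = c \;\rightarrow\; \mathit{colour}(x,\, \mathit{do}(\mathit{move}(x, l),\, s)) = c
$$

$$
\mathit{pos}(x, s) = l \;\rightarrow\; \mathit{pos}(x,\, \mathit{do}(\mathit{paint}(x, c),\, s)) = l
$$

With $n$ actions and $m$ properties, a naive axiomatisation needs on the order of $n \times m$ such frame axioms: exactly the multitude of unnecessary statements the update function was supposed to avoid.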
The frame problem, as philosophers appropriated it, was about generalized action. How does an agent keep a faithful representation of the world after it has acted [3]? Such an agent would need an update function that does not require going over all the superfluous statements [4]. Fodor [5] posited it as Hamlet's problem: how does an agent know when to stop thinking?
The frame problem these days is sometimes regarded as the general relevance problem, no longer limited to representationalism but extending into connectionism as well [6]. In this case, the frame problem that connectionism inherits amounts to the following: how does an agent know which data are relevant to the situation? The problem for connectionist approaches is not getting input to exert influence on the system, but getting it to exert the correct influence.
This is where one can see problems for accountability on the horizon. In situations where agents have to act, we require that they make the correct inferences given a certain context, have the correct update function, and produce the correct influence. All of these require that the agent understands the context at hand and can draw the inferences needed to arrive at the relevant action.
However, that is easier said than done. The task of the agent's designer is to find a way to incorporate all these relevant inferences into the agent. Failing to do so introduces a gap between the agent's model, its capacity for action, and the world, resulting in an agent that acts while missing (relevant) inferences. This can end up harming people or diverging from human intentions. The obvious examples are those of harmful classifications: Google Photos, for example, notoriously labelled photographs of Black people as gorillas [7].
The question the frame problem raises for accountability is not how to solve the frame problem, as that seems to require solving a variety of tractability questions and understanding the relation between the agent and the world well enough that relevance can be aptly captured. Rather, designers should be aware of their own limitations and of the limitations of the model housed within the agent. This means that accountability has a social aspect: explaining the necessary limits of the system, or boxing in the possible actions of the agent such that these limits are acceptable in their scope.
Bibliography
[1] John McCarthy and Patrick J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In Readings in Artificial Intelligence, pages 431–450. Elsevier, 1981.

[2] Murray Shanahan. The Frame Problem. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Spring 2016 edition, 2016.

[3] Daniel C. Dennett. Cognitive wheels: the frame problem of AI. In Minds, Machines and Evolution, pages 129–151, 1984.

[4] Drew McDermott. AI, logic, and the frame problem. In The Frame Problem in Artificial Intelligence, pages 105–118. Elsevier, 1987.

[5] Jerry A. Fodor. Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In Jay L. Garfield, editor, Modularity in Knowledge Representation and Natural-Language Understanding, chapter 1. The MIT Press, Cambridge, 1987.

[6] Richard Samuels. Classical computationalism and the many problems of cognitive relevance. Studies in History and Philosophy of Science Part A, 41(3):280–293, 2010.

[7] Ludovic Righetti, Raj Madhavan, and Raja Chatila. Unintended consequences of biased robotic and artificial intelligence systems [ethical, legal, and societal issues]. IEEE Robotics & Automation Magazine, 26(3):11–13, 2019.
This entry was written by Sietze Kuilman and Luciano C Siebert.