By Dr Jason M. Pittman, Sc.D.
April 6, 2018
Previously, we discussed synthetic intelligence as a serious field of inquiry and, separately, what constitutes intelligence. Both are foundational topics with applications across growing avenues of scholarship such as computer science, computational neuroscience, cybersecurity, and, of course, artificial intelligence. Later, we’ll use this foundation to begin constructing an understanding of not only how a synthetic intelligence can emerge but also how we can interact with such an intelligence in a safe and trusted manner. However, the foundation is not yet complete.
Intelligence alone, synthetic or otherwise, does not provide the means to act. In part, such a claim summarizes a core issue with artificial intelligence-related research: the expression of intelligence is imitated without a sense of how the intelligence perceives itself. Indeed, I would suggest that without agency, intelligence lacks the facilities to interact with reality. In some ways, that tells us what agency is: an intent to interact with the world as an expression of intelligence. Agency, however, exists in an odd, paradoxical space defined almost exclusively by assumptions.
Perceiving agency is not without issue. We act as if our actions are the result of intentions. As well, we intimate to ourselves (and others!) that such intentions originate within our intelligence, freely and without predetermination. Likewise, we are aware of the intention to act, although we may not be fully aware of when such intention emerges within our consciousness. Let’s assume this to be universally true, particularly the awareness clause.
With that assumption in place, we can examine three questions. Did you intend to read this sentence? Did you intend to do so at exactly the moment that you did? Lastly, when did you become aware that you had the intention to do so?
Such questions cut to the heart of the agency paradox and the potential for engaging with a synthetic intelligence. Consider for a moment that your answers are “yes,” “yes,” and “when I read the sentence.” These are perfectly reasonable responses that, I suspect, accurately portray how most of us would answer. In fact, we perceive agency in others as a signal that their behavior is intelligent. There seems to be an innate assumption that your behavior is intelligent because I perceive my behavior to be intelligent. Thus, I assume you have agency because I assume my intent to act has agency.
Here’s the problem: agency can be illusory. In other words, what we perceive to be agency may not be agency at all. The illusion of agency in an external context has been well researched. Likewise, the illusion of internal agency has been demonstrated in simulation and in practice. Thus, we need to think about how agency in a synthetic intelligence might be possible and how we could potentially detect or measure it. One way to make the detection question concrete is sketched below.
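To be clear, no standard instrument for measuring agency exists; what follows is a minimal, entirely hypothetical sketch in Python, loosely inspired by the timing question above (when does the intention emerge relative to the act?). The `AgentUnderTest` interface and its methods are my own illustrative assumptions, not a real API.

```python
import random
import time


class AgentUnderTest:
    """A hypothetical agent interface; nothing here is a real API.

    The agent is asked to report the moment it claims an intention
    emerged, then to act on it. Both events are timestamped.
    """

    def form_intention(self) -> float:
        # The agent "deliberates" for a random delay, then reports
        # the time at which it claims the intention emerged.
        time.sleep(random.uniform(0.01, 0.05))
        return time.monotonic()

    def act(self) -> float:
        # The agent performs the action; we timestamp the act itself.
        time.sleep(random.uniform(0.01, 0.05))
        return time.monotonic()


def probe(agent: AgentUnderTest) -> float:
    """Return the gap between reported intention and observed action."""
    reported_intention = agent.form_intention()
    observed_action = agent.act()
    return observed_action - reported_intention


if __name__ == "__main__":
    gaps = [probe(AgentUnderTest()) for _ in range(5)]
    print([f"{gap:.3f}s" for gap in gaps])
```

Even if an agent passes such a probe consistently, a plausible ordering (intention strictly before act) is necessary but not sufficient evidence of agency: the report could be generated after the fact, which is exactly the illusion discussed above.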
The most obvious instrument would be the renowned Turing test. However, I have concluded that the Turing test is insufficient to properly detect intelligence and agency in a synthetic intelligence; the sketch below previews why. Check back in two weeks for my full explanation as to why we won’t be able to trust the Turing test!
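As that preview, here is a bare-bones sketch of the imitation game’s structure, again in Python with illustrative names of my own (`imitation_game`, the judge, the candidate callables). The point is structural: the judge’s verdict is a function of replies alone, so intention never enters the protocol.

```python
import random


def imitation_game(judge, candidate_a, candidate_b, rounds: int = 5) -> str:
    """A minimal sketch of the classic imitation game.

    judge(round_index, answer_a, answer_b) returns a guess, 'A' or 'B';
    each candidate is a callable mapping a question to a reply. The
    judge sees only the replies, never the process that produced them.
    """
    votes = {"A": 0, "B": 0}
    for i in range(rounds):
        answer_a = candidate_a(f"question {i}")
        answer_b = candidate_b(f"question {i}")
        votes[judge(i, answer_a, answer_b)] += 1
    return max(votes, key=votes.get)


if __name__ == "__main__":
    human = lambda q: f"my answer to {q}"
    machine = lambda q: f"my answer to {q}"  # a perfect imitation
    coin_flip_judge = lambda i, a, b: random.choice("AB")
    print(imitation_game(coin_flip_judge, human, machine))
```

Because both candidates can emit identical strings, the judge is reduced to guessing; whatever such a test measures, it is behavior, and agency can only ever be inferred from it, never measured.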