[written by guest author: Ken Knowlton (computer graphics pioneer and member of the Bell Labs Research team from 1964 to 1982) – see our recent post about Ken, Kenneth C. Knowlton’s “Terrible Thoughts” about AI Automation]
By programmers’ brilliance, and machines’ speed and memory size, AI (Artificial Intelligence) is leaping ahead of human ability. Game-playing programs now beat the best humans – in Chess, even in Go. They may (or may not) be the best strategists for matters financial, military, political and/or environmental. We need clearer thinking, and feeling, in this thicket. We may be putting the cart before the horse, dealing with numeric values of matters that are defined weakly, if at all; nevertheless we compute extravagantly.
AI is a set of methods for dealing with complex situations – for maximizing values that stand for the well-being of individuals, groups, and/or societies. We:
(1) model an environment-of-concern,
(2) predict futures of the system under various presumptions, and
(3) choose (or have chosen for us) the best course of action.
We presume that resulting predictions – of accidents, health, longevity, incomes, possessions, and similar quantifiable matters – lead to good choices and actions.
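The three steps above can be caricatured in a few lines of code. Everything here – the toy world model, the candidate actions, the single “well-being” number – is an invented stand-in for exactly the kind of weakly defined quantity the essay warns about:

```python
# A minimal sketch of the model / predict / choose loop.
# All names and numbers are illustrative assumptions, not any real system.

def predict_future(state, action):
    """Step 2 (toy world model): an action shifts one 'well-being' number."""
    return state + action

def choose(state, candidate_actions):
    """Step 3: pick the action whose predicted future scores highest."""
    return max(candidate_actions, key=lambda a: predict_future(state, a))

current_state = 10        # step 1: the environment-of-concern, reduced to one number
actions = [-2, 0, 3]      # presumed available courses of action
best = choose(current_state, actions)
print(best)               # the 'best' course of action under this toy model
```

The sketch makes the essay’s point almost by accident: the loop runs happily whatever the well-being number actually stands for, or whether it stands for anything at all.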
The problem is this: without AI, we normally choose, not by how we think about the future, but by how we implicitly expect to feel about it. Where to live, what trade or profession to prepare for? What religious or philosophical belief system to follow? We imagine various possibilities and decide what seems/feels best.
From birth, and ever after, we experience pain, pleasure, hunger, love, tiredness, uneasiness, etc. We care. After a while, exactly those words, and many others, come to express states of being, along with numerous modifiers for nuances. How could automatic processes deal in such terms? They are not conscious, not empathetic. They do not experience what we experience. Consider, for example, states that I might feel myself to be in, or that others might see me as, in response to a perplexing situation:
abnormal, absent, absurd, accessible, accomplished, accountable, accurate, accused, active, adequate, admired, adorable, affected, afflicted, afraid, aggressive, agreeable, alarmed, alive, alone, amazed, ambiguous, amused, amusing, anchored, angry, annoyed, annoying, anonymous, antipathetic, anxious, apologetic, appreciated, appropriate, approved, arbitrary, argumentative, artificial, artistic, ashamed, assaulted, assured, astonished, attentive, authentic, authoritative, authoritarian, autonomous, average, awake, aware, awkward. (Only “a’s” here; would you like the b’s, c’s … ?)
Words like these describe my experience – who and what I am (or am seen to be); this is how and where I live. These issues are the bases of my “intelligent” (at least human) response. AI systems cannot be, or experience, such states. At best, AI presents one or more futures in sufficiently rich terms that I might imagine how they would feel. (I should not ignore them: AI’s predictions may be more realistic in quantitative terms, and they may handle intertwined complexities better than I could manage.)
(AI predictions will, of course, be available to groups, regions, countries, etc., but the general assertion remains, except that it’s difficult to say how a group or country might “feel” about – i.e. react to the thought of – something.)
Situations vary. Sometimes there’s no time for human contemplation – when AI systems become essential for immediate analysis and automatic response. One example: a system “thinks” it has detected incoming missiles from a potentially hostile region. What should we have arranged for our AI systems to do, automatically and instantly?
More generally: As soon as our AI system “thinks” that a nuclear war is imminent, shouldn’t it act first, preemptively and decisively? Or with more look-ahead, imagine this: as soon as my system thinks that the opposing system thinks that my system thinks (etc.) … TAKE THE INITIATIVE!
How do we feel about the unfeelingness of AI? Uneasy, with such unpredictable sorcerers’ apprentices!