Max, I will spare you the typing of your next logical move. It is this:
"Aw, shucks, that AI ethical stuff done been done with Asimov's Laws of Robotics."
Hardly.
Allow me to patiently explain, grasshopper. (I'm thinking of changing my profile photo from Nietzsche to Master Po, ha ha.)
The fatal problem for the Laws of Robotics is that they presuppose that any particular, one-off, ad hoc moral response can be subsumed beneath 3 (actually 4) universal rules. Like all deontological, universal rules, they fail on particular, real-life instances. But human morality is not deduced from godlike abstractions. Humans do not acquire their moral sense in that deductive way. They acquire it as I have suggested: By the accumulation of thousands of casuistic cases, which form a moral sense that is intuitive, incapable of abstraction into fixed general rules.
There is a good Wikipedia article (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) that discusses the deontological failures of the "Laws of Robotics." It cites the ridiculous struggles of many sci-fi writers to overcome the Four Commandments brought down from Mount Asenion by Asimov-Moses. Just for completeness, I give them here:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
• Asimov later added a fourth, the "Zeroth Law," which states that a robot may not harm humanity or, by inaction, allow humanity to come to harm (opening up a can of worms even greater than the foregoing).
Notice the beautiful, logically nested sequence of "laws" – just like pretty Matryoshka dolls, so satisfying, so seductive for any would-be thinker (and "would-be" truly captures most dimwit sci-fi writers). But even the dullest knife in the drawer can see that problems inevitably arise from the robots' attempt to evaluate "harm."
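To make the seduction concrete, here is a rough Python sketch – entirely my own illustration, nothing Asimov or anyone else specified – of the Laws as a nested priority check. The nesting itself costs one line of code; the function that would have to evaluate "harm" is the part nobody knows how to write.

```python
# Hypothetical sketch only: the three Laws as a lexicographic priority over
# candidate actions. The tidy nesting is trivial; estimate_harm() is the
# whole problem, and it is left here as an admission of defeat.

def estimate_harm(action: str, situation: str) -> float:
    """Expected harm to human beings, in [0, 1]. Harm to whom? Physical or
    psychological? Over what time horizon? The Laws presuppose an answer
    they never supply."""
    raise NotImplementedError("the Laws quietly assume this function exists")

def disobedience(action: str, orders: list[str]) -> float:
    """Second Law: degree to which the action ignores standing human orders."""
    return 0.0 if action in orders else 1.0

def self_risk(action: str) -> float:
    """Third Law: risk the action poses to the robot's own existence."""
    return 0.0  # placeholder; easy compared with the First Law

def choose_action(actions: list[str], orders: list[str], situation: str) -> str:
    # First Law outranks Second, which outranks Third: pretty Matryoshka dolls.
    return min(actions, key=lambda a: (estimate_harm(a, situation),
                                       disobedience(a, orders),
                                       self_risk(a)))
```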
Deontology always fails, especially Asimov's ballyhooed Laws of Robotics.
OH, I see. You're talking about the alignment problem – not the problem of consciousness. That's cool! I wouldn't mind writing something about that at some point.

No, that's not my meaning. I'm not interested in the narrow correction of having a robot "doing as I intend, and not mechanically doing as I say." I'm talking about the general problem of "morally aware" AI, which is the basis of conscious identity.
Max, Max, you always reach for complexirification (damn, if only we were in German, I could make up such words with nary an eyebrow raised!).
My solution is very simple, having nothing to do with behaviorism or any other "-ism." I mean CASUISTRY, not of course in the sense of "captious reasoning," but in the sense of "moral set-pieces," if I can put it that way. (My "CAUISTICAL" was a misspelling – it needs an "S.")
My meaning is the one given originally by the Jesuits, when they evaluated the effect on the soul of confronting a particular moral choice. For example (I don't know why this comes to mind, but anyway...), in Camus' novel _La Chute_ (The Fall), Clamence, on the way home from a night of fornication, crosses a bridge and passes a woman there. A moment later, he hears a splash and a cry, but he does nothing. What would be the "proper" moral response to such an event? THAT is what I mean by a CASUISTIC event. (Another, more famous, example from the Jesuits is the casuistic question "How many angels can dance on the head of a pin?", which addresses whether a purely spiritual being has dimension.)
Thus a CASUISTIC DATABASE would have very simple entries like the one I provided for an infant presented with a steaming spoonful of food, in which the moral datum records how steam answers the value question "Is this wholesome or harmful to my life?"
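For concreteness, here is what a single entry might look like in code – a sketch under field names I have invented for illustration, nothing more:

```python
# Hypothetical sketch of one entry in the proposed CASUISTIC DATABASE; the
# class and field names are my own guesses at the structure being described.

from dataclasses import dataclass

@dataclass
class CasuisticDatum:
    situation: str        # the concrete, particular event
    salient_cue: str      # what the agent perceives
    value_question: str   # always some form of "wholesome or harmful to my life?"
    moral_datum: str      # the value judgment the event deposits

infant_and_spoon = CasuisticDatum(
    situation="infant presented with a steaming spoonful of food",
    salient_cue="visible steam rising from the spoon",
    value_question="Is this wholesome or harmful to my life?",
    moral_datum="steam signals heat, and heat here signals possible harm",
)
```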
The problem for an AI robot is that it CANNOT have values, since "value" always implies the context of a living being that is contingently alive, that can possibly die. Robots cannot die, and thus cannot have values. Human values are accumulated over a lifetime of what I call "casuistic events," that is, encounters where that contingency is made evident – in some cases only mildly, as when a tongue is burned on hot food; in other cases starkly, as when one is confronted with the question of whether to rescue a woman who has fallen into the water.
I suppose I have to solve yet another "unsolvable" problem... yawn!
So here is your solution to "consciousness in AI."
Here is how AI currently comes by its "informational smarts": Prompted by a human input, it simply anticipates the next syntactically and statistically likely phrase, based on an enormous mass of reputedly factual data on the Internet. I say "reputedly." AI does not really have a Leftist bias (e.g., in providing "woke" images of the Founding Fathers) – it's just that the overwhelming mass of Internet data leans that way.
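To be clear about what I mean by "anticipates the next phrase," here is a toy sketch. Real systems use neural networks with billions of parameters, but the statistical heart is just "given what came before, emit the likeliest continuation." The tiny corpus below is invented purely for illustration:

```python
# Toy bigram model: a crude stand-in for how a language model continues text.
from collections import Counter

corpus = ("the founding fathers wrote the constitution "
          "and the founding fathers signed it").split()

# Count which word tends to follow which in the "Internet" data.
following: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def next_word(prompt_word: str) -> str:
    # Emit the statistically most likely continuation; no judgment, no values,
    # just whatever the mass of data leans toward.
    return following[prompt_word].most_common(1)[0][0]

print(next_word("founding"))  # -> "fathers"
```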
So, to the question at hand: AI consciousness.
You provide the insightful quote from Daniel Gilbert, who says, "feelings don't just matter – they are what mattering means." You also quote Jonathan Haidt's insight that AI lacks "intuition" – more precisely, the ability to make "rapid, effortless moral judgments."
All of these human qualities are slopped under the woolly term "feelings." This lack of precision just won't do, especially from so-called "scientists." Here is the precise heart of the matter: All of these qualities are rooted in CASUISTICAL EVENTS. For example, a baby is confronted with a steaming spoonful of food; it is very hot; the baby spits it out and cries. This event is important to the human less for registering the fact that steam can indicate heat, and more for the moral datum that indicates a value – that of possible human harm. This and every CAUISTICAL EVENT is a datum that enters the human "database" under the label: What Is Of Value.
So the answer to the problem of AI consciousness is not to ponder the inscrutable labyrinth of defining consciousness. The answer is not to build for AI yet another database of facts, which it already has, imperfectly, on the Internet; it is to create a vast database of CASUISTICAL EVENTS that the AI robot consults to make value judgments. There would likely be many such CASUISTIC DATABASES, with the moral responses in each shaped by a specific culture. That is, a Japanese AI robot, consulting its database, might respond to a given moral event more deferentially or politely than an American AI robot, whose database holds a very different datum for that identical event.
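A sketch of that culture-specific consultation, again with invented entries and a deliberately naive lookup – the point is only that the robot consults accumulated cases rather than deducing from universal laws:

```python
# Hypothetical culture-specific CASUISTIC DATABASES; the entries are invented
# for illustration and could be CasuisticDatum records as sketched earlier.

japanese_db = {
    "a stranger has fallen into the water": "intervene deferentially and politely",
}
american_db = {
    "a stranger has fallen into the water": "intervene directly and assertively",
}

def moral_response(event: str, database: dict[str, str]) -> str:
    # No deduction from universal rules: consult the accumulated cases.
    return database.get(event, "no precedent on file; defer to a human")

print(moral_response("a stranger has fallen into the water", japanese_db))
print(moral_response("a stranger has fallen into the water", american_db))
```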
Ho-hum, OK all done with this one. Can you give me something a bit more challenging next time?
Well, I can't say you solved the problem, but I can say I must have set the problem out poorly in this context. I do always appreciate when you engage, Dear T! I shall do better to frame the issue differently in the future.
First, you put scare quotes around "unsolvable" as if that's a term I used. I didn't, of course, because the problem could be solved in the future. It just hasn't been.
I have no idea what either a CASUISTICAL event or a CAUISTICAL event is, though I can only assume this has something to do with causality or casuistry, and I'm guessing it's the latter. That is, perhaps you mean casuistry in the sense of attempting to derive a more general abstraction from a particular instance, say, in the psychology of valuation/disvaluation.
Your trying to solve a different problem reveals a rather 1930s–1960s quasi-behaviorist way of thinking. For example, you refer to a "database" with yet more quotation marks (presumably to prompt the reader to catch an analogy), because you want us to think of the brain as being rather like wetware and the *labeling* of values as being rather like software metatags, or something, probably because without such props you'd be utterly locked into black-box behaviorism. We at least graduate to something approximating 1950s reductionism.
To clarify: What I am talking about is not intentionality (a species of *consciousness* that deals with volition, causation, and oneself standing in a certain relationship to external objects). It is, instead, phenomenal consciousness I'm interested in. After all, everything you describe could probably be programmed but not *felt* or *experienced*, as in the "what it's like" qualities of phenomenal consciousness. This is how we're *appeared to* in our experience, which has close correlates in much of your quasi-behaviorist talk.
So all that's perfectly fine as an early step into cognitive science, but we're asking a more deeply philosophical question than one rooted in past or future science. A future science might help us solve the problem of phenomenal consciousness, but as of now, it cannot tell us how to help AI have the experience of an orgasm. (Again, not the functional aspect of neuronal firings and bulbospongiosus muscles contracting – the EXPERIENCE.)
Fascinating! But it may be deeper than I can go, like swimming underwater without scuba gear.
Thanks guys for the volley, enjoyed all of it!