"I remain Switzerland, and realist. But dispositionally, I default to optimism."

Just so. I think that you state the only worthwhile path.

My point is that there is a normative driver in all "reasoning." There is no reasoning in a void; we are not thinking spiders. My fear is that AI seems to be the last bunker for your previously cited "B. Enlightenment Modernism" and its universalist absolutism, from which the state derives legitimacy as its presumed enforcer on broadly construed policy questions.

For the moment, I would be pleased if everyone recognized that there is a normative qualifier attached to every ChatGPT request, whether it is stated explicitly, or foolishly accepted by default. In other words, we must say, "TurkGPT, what is the truth value of [policy X]," ADDING "from the perspective of [Y]?"

where Y is "Thomas Jefferson" or "anarcho-capitalism" -- or "Sergey Nechaev" for that matter. For if you do not add the qualifier, some black box process will do it for you, duping you into the belief that you have the One Final Truth.
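The qualifier discipline described above can even be made mechanical. Here is a minimal sketch -- the function name `with_perspective` and everything else in it are hypothetical, not any real API -- of what it means to refuse an unstated default:

```python
# Hypothetical sketch: force every prompt to carry an explicit normative qualifier.
# Nothing here is a real chatbot API; it only illustrates the discipline above.

def with_perspective(question, perspective=None):
    """Return the question with its perspective stated, never silently omitted."""
    if perspective is None:
        # Refuse to hide the default: label it as a black-box choice.
        perspective = "an UNSTATED black-box default"
    return f"{question} -- from the perspective of {perspective}?"

print(with_perspective("What is the truth value of [policy X]", "Thomas Jefferson"))
# The dangerous case: no qualifier given, so the default is at least made visible.
print(with_perspective("What is the truth value of [policy X]"))
```

The point of the sketch is only that the qualifier slot always exists; the choice is whether you fill it or let the black box fill it for you.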

For the future, it seems that there are two paths open -- and I await evidence on which one is likely to be taken. Either AI turns humans into the Lotus Eaters of the Odyssey or the Eloi of H. G. Wells's _The Time Machine_, or AI releases humans from drudgery, allowing the flourishing of an Isaac Newton or a Dante or a J.S. Bach.

David Deutsch's take on AI is in my opinion the best. We are not worried about AGI because AI cannot be unpredictable, defiant, or do anything against its code.

We are worried about authorities using it against us, and also using it as a boogeyman for events in order to gain more control of the internet.

Agreed that Nietzsche's "madness" was syphilitic, and not the result of "thinking thoughts too great for man" -- ha, a romantic picture like something from Caspar David Friedrich.

But one meaning of "reason" is "to give an account." How does AI account for the results it offers? Is there anything more lame than for a human being to give an account by saying "I read it on the Internet somewhere"? And yet that's the very account that AI gives, cloaked in the white lab coat of accurate grammar and dispassionate, nonpartisan evenhandedness.

So what is the AI account? It can't be a syllogistic account because no one can claim that AI uses concepts or thinks in abstractions. Whereas a human might offer "A dog belongs to the domestic subclass of animals," AI can hallucinate that your pet Monchi* is a pig or a loaf of bread, or claim that a dog is "two men on skis" (a documented error**) simply because its sampling of Internet photos said so.

Who is to tweak these errors? A human coder. And one can well imagine a clever Indian like Apu Nahasapeemapetilon popping out of the box like a Mechanical Turk, laughing, saying "I work for the state, Sahib; it pays most abundantly, and it is most pleasing to provide the answer that the state advises will keep the peace."

*From the movie "The Mitchells vs the Machines" https://www.youtube.com/watch?v=7T06Mml2-jE

** https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/

I love the Mechanical Turk metaphor. It's so true, right now especially. But I wouldn't underestimate both the promise and perils of exponential technologies. We might see an S-curve, but that too will give way to another S-curve. So I wouldn't assume that the progression from Mechanical Turk to TurkGPT is going to stop and wait for us to launch our critiques.

It's all going to evolve. It's going to find and apply meta-mechanisms of its own learning and evolution. It's going to complete a thousand iteration cycles before next week. What are humans but corporeal iteration cycles? Now, "Who is to tweak these errors?" Not just human coders. AI itself.

Now, I guess my question for you is: What do you propose? Are we going to sit back, watch it all unfold, congratulating ourselves for having the comparative creativity to extrapolate from Zoltar? Are we to lobby to see it banned? Or are we to waggle our fingers at its existence but not engage it or participate in its evolution?

I admit that it functions based on a narrow conception of reason and reinforces another narrow conception of reason in us. But so what? Can it, or we, broaden that conception? Can we train it to prompt wisdom in us, or even be wise? Or are we going to see it continue on some dark trajectory, like the Borg, combining Machine and Mammon and Man, until it becomes something we can scarcely control but self-replicates and eats the world like a universal acid? I don't know.

In the debates between the optimists and the pessimists, I remain Switzerland, and realist. But dispositionally, I default to optimism because pessimism rarely does anyone any good. My realist side says this is going to improve, evolve, and could get away from us. But before that, it could very well get into the wrong hands, like Fauci with billions of dollars and a small army of gain-of-function researchers. There are many more black balls among the white balls when there are a billion users and a billion more improvements.

Max,

One thing explains the hyped appeal of “AI”: The conviction that reason – supposedly man’s defining characteristic – is hopelessly biased, and that there can exist some form of “reason” purged of those biases.

But no immaculate “reason” sterilized of all “bias” can possibly exist.

This is not just a negative truth. Many proclaim, “Let reason be our guide, wherever it may lead us,” and behold, the Christian speaker justifies Christianity, the Muslim justifies Mohammad, the endowed justifies his benefactor, the aggrieved justifies his violence, and the ignorant justifies his nonsense.

It is also a positive truth: The “bug” is in fact a feature. Human reasoning is tentative, contingent, and collegial – not final and immutable. The worship of “AI” is an artifact of that myth of a godlike “reason” that immersion in the current culture makes difficult to shake off. The persistence of that mental artifact compels most to have a godlike State, equipped with “AI,” stand in for the long-absent Author of All Being.

I think one can and should agree with your assessment here. I do. But I still think the manner in which an LLM is trained will amplify its biases. In other words, in human history we went from:

> A. Pre-modern Traditionalism: God is the Source of Truth

> B. Enlightenment Modernism: Reason is the Source of Truth

> C. Postmodern Relativism: There is no Truth, truth is Relative

> D. Metamodernism: Discover truth's facets via multiple perspectives, using multiple modalities.

Most of the West (and ChatGPT) is stuck at B. and C., when we need to improve methods for D. AI, with the proper training, could very well help us with D.

ABC: Concise and accurate. As for D, I'm open to evidence, but skeptical.

For example, if all AI are equally "smart," then they must all reach a single "consensus" conclusion on very contentious topics. Are we to genuflect before this paradigm of "reason," this supposed Mr. Spock who delivers the unanswerable Perfectly Reasoned Final Truth? No. Real reasoning is not like that. The climate debate fosters lively disagreement -- passionate debate that is sometimes rooted in unstated biases -- all of which is essential in getting those "multiple perspectives" that the truth requires.

The alternative is that all AI are NOT equally "smart," in which case we're back to a war of biases. As it currently stands, when ChatGPT reaches the conclusion, for example, that whiter women are considered more beautiful than darker ones (a view corroborated by African tribes, BTW), a very flesh-and-blood human steps in and "corrects" the so-called "error."

But far more common is that AI gives a bland, nonjudgmental pablum of an answer -- one that reminds me of a very irritating Miss Proper in elementary school: "Johnny should not have punched Suzie in the mouth, but a number of points might be cited to excuse his behavior. And of course we must give due consideration to Suzie, who after all was minding her own business." Such "reasoning" is far more infuriating to me than someone who defends his views with passion. Such "reasoning" is exactly what Nietzsche meant when he said that "Socrates is mob." That is, it is "reasoning" by a totally indifferent mind, one willing to accept as "truth" whatever a roomful of average thinkers reach by consensus.

Haha, I love Nietzsche, but he can throw out the proverbial baby, either when he waggles his uebermensch dick in our faces or in moments of syphilitic fever. There has to be a way to orient some future AI to train on different perspectives while neither blurring them too much, nor abandoning reason as a process (reasoning). We might even be able to consider some synthesis reasoning without going all Hegel. But "Are we to genuflect before this paradigm of "reason," this supposed Mr. Spock who delivers the unanswerable Perfectly Reasoned Final Truth?" I took away capital T for a reason in C and D. In fact, I'd put myself somewhere between Quine and Rorty in terms of knowledge and truth, and I'd be a Humean if I were alive in the 18th century.

Anyway, I too have found the GPT results to be rather like pabulum. And I can quickly sniff it out when a "writer" uses it. So, no, we shouldn't genuflect to anything, much less an amalgam of mediocrity. But NOWHERE did I suggest anyone be "willing to accept as 'truth' whatever a roomful of average thinkers reach by consensus." What I suggest is that some future -- much improved -- AI should be able to prompt IN US a facility to discover FACETS of a greater, otherwise more inaccessible truth by learning to take on (try on) multiple perspectives, hold them in reflective juxtaposition, try to reconcile their apparent contradictions, and arrive at an understanding that is, however limited, more subtle and more complex than the usual reductive pap that passes for Truth these days. That takes reflection, reasoning, wisdom, and, indeed, passion. But I suspect we are a long way from that form of AI -- mainly because it will require developers who are so capable. And developers tend to be SF left-brained types, with social justice bees in their bonnets.
