Uncle Sam + AI = Radical Power Asymmetries
Could AI make powerful central authorities even more powerful?

DMS: What sort of impact do you believe the rapid emergence of AI will have on statism as well as the future of humanity?
MB: For me, there are a lot of ifs and buts—which are not candy and nuts, apparently. Still, we might be okay if there’s more competition among AI projects, and these don't cartelize, either through collusion or state regulation. Furthermore, if we can see a couple of viable open-source AI projects emerge, we needn't fear the centralization that AI, especially segregated AI, could enable.
Certainly, we can imagine AI corporations training their Large Language Models (LLMs) or other AIs in a bespoke fashion to be used by authoritarian functionaries, but in a way that restricts ordinary people from using that power. Such would result in a horrible power asymmetry.
With open-source alternatives, though, we might be able to compete against GovCorp, which will surely demand our subordination. But if the government creates AI satrapies through regulation and subsidy, those AI oligarchs will not have our best interests at heart (just as G++gle doesn't).
They will almost certainly serve the most powerful.
DMS: What should freedom-minded individuals be pondering with respect to adapting to this new digital terrain?
MB: We should all be learning how to use these tools while planning to migrate to open-source alternatives as quickly as possible. Maybe they'll enable us to self-organize and decentralize more readily.
I'm not terribly heartened that Alpaca was taken offline. Nor am I heartened by the idea that OpenAI is not open and is being heavily trained in critical social justice, for example, which is an illiberal doctrine that cannot solve the "alignment problem."
Remember the saying: We shape our tools, and then our tools shape us.
Something similar can be said about centralized AI: its makers train their tools, and then their tools train us.
Yikes!
OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.
Not what I intended at all. —Elon Musk
DMS: How do the recent developments in AI add to or alter the message you share in your book After Collapse?
MB: So, to recap the main parts of After Collapse, I worry too many factors are hastening the fall of the American Empire and, to some extent, our socio-economy. I set out my concerns in the seven chapters of Part One. Then, in Part Two, I suggest seven change vectors to decentralize and self-organize peacefully to evade the excesses of authoritarian power. Such will be necessary as the phoenix rises from the proverbial ashes. I refer to this as New America, a decidedly less imperial and less top-heavy version of the USA.
Now, what follows might seem like a rather strange, perhaps self-serving answer, but in many ways, After Collapse adds to my earlier thoughts about AI in the 2018 book, The Social Singularity. That book is my most successful to date, and I've changed my views very little since.
Folks can read some of my most up-to-date writings on AI in "Dancing With Our Robot Overlords," though I'll soon have a bit more to add at my Substack. But if I had to express my hopes for the future of AI in a few bullets, they might be something like this:
- AI does not yet possess or process local knowledge, which is subjective, human, and prompts our actions in unique circumstances.
- AI, properly trained and applied, can help us facilitate decentralization, even as it seems to synthesize and centralize knowledge.
- AI might not continue to improve exponentially; it might instead be an S-curve phenomenon—at least for a while.
- AI is probably not going to be as dangerous as the people who wield it—at least not for a while. Maybe we have time to figure out peace, prosperity, and morality before we put something so powerful into the hands of a sociopath eager to become Shiva, "destroyer of worlds."
DMS: Any concluding thoughts?
MB: AI, especially an open-source form, could converge with better human incentive systems (such as distributed ledgers) and interface with our brains to let us become crypto cyborgs. This is the Elon Musk vision plus the Satoshi Nakamoto vision wrapped up to make a Max Borders vision that will probably cause some readers to turn and run. Still, this vision is better than an even more dystopian alternative—namely, one more closely resembling the Matrix.
I admit that the last one is out there, even for me. But I have a hard time imagining a world where we just ban AI and call it a day. The djinn is now out of the bottle. So, how will we adapt? Answers to such questions turn in my mind. I hope I can prompt them to turn in the minds of others as well.
Find more author interviews and book reflections at Great Books, Great Minds.
Help me keep bringing daily subversive content written by me, not GPT.
"I remain Switzerland, and realist. But dispositionally, I default to optimism".
Just so. I think that you state the only worthwhile path.
My point is that there is a normative driver in all "reasoning." There is no reasoning in a void; we are not thinking spiders. My fear is that AI seems to be the last bunker for your previously cited "B. Enlightenment Modernism" and its universalist absolutism, from which the state derives legitimacy as its presumed enforcer on broadly construed policy questions.
For the moment, I would be pleased if everyone recognized that there is a normative qualifier attached to every ChatGPT request, whether it is stated explicitly, or foolishly accepted by default. In other words, we must say, "TurkGPT, what is the truth value of [policy X]," ADDING "from the perspective of [Y]?"
where Y is "Thomas Jefferson" or "anarcho-capitalism" -- or "Sergey Nechaev" for that matter. For if you do not add the qualifier, some black box process will do it for you, duping you into the belief that you have the One Final Truth.
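That qualifier can even be made mechanical. Here is a minimal sketch, in Python, of attaching an explicit perspective to every request before it reaches a model; the `framed_prompt` helper and its exact wording are my own illustration, not any real library's API:

```python
def framed_prompt(claim: str, perspective: str) -> str:
    """Compose a prompt that states its normative frame explicitly,
    rather than letting the model's hidden defaults supply one."""
    return (
        "What is the truth value of the following claim, "
        f"evaluated from the perspective of {perspective}?\n\n"
        f"Claim: {claim}"
    )

# The same policy question, framed three different ways:
for view in ["Thomas Jefferson", "anarcho-capitalism", "Sergey Nechaev"]:
    print(framed_prompt("Policy X is just.", view))
    print("---")
```

The point of the exercise is not the code but the discipline: if the perspective slot is left empty, some black-box default fills it in for you.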
For the future, it seems that there are two paths open -- and I await evidence on which one is likely to be taken. Either AI turns humans into the Lotus Eaters of the Odyssey or the Eloi of H.G. Wells's _The Time Machine_, or AI releases humans from drudgery, allowing the flourishing of an Isaac Newton or a Dante or a J.S. Bach.
David Deutsch's take on AI is, in my opinion, the best. We are not worried about AGI because AI cannot be unpredictable or defiant, or do anything against its code.
We are worried about authorities using it against us, and also using it as a boogeyman for events in order to gain more control of the internet.