Are We the Last Generation Who Knows How to Do Hard Things On Our Own? And Why It Matters.

The topic raised in this week’s post on AI deserves a deeper dive, as the consequences will be profound and quite possibly shattering. In AI for Dummies: AI Turns Us Into Dummies, I noted:

1. Those who received real educations can use AI because they know enough to double-check it, but the younger generations using AI as a substitute for real learning will never develop this capacity.

2. Those who actually have mastery can use AI without grasping the point I’m making: not that AI is useless, but that it fatally undermines real learning and thinking.

In other words, this is another example of the fish not seeing the water they’re swimming in: those who already have deep experiential knowledge of programming can use AI as a tool because they actually understand the system they’re working within. But someone who is cobbling together code via AI prompts, and who doesn’t actually “know” programming as experiential “tacit knowledge,” lacks the capacity to double-check the AI-generated code or understand its potential impact on the entire system.
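To make this concrete, here is a small hypothetical illustration (mine, not drawn from any particular AI session): a Python function of the kind an AI assistant might plausibly generate. The function name and scenario are invented for the example. It runs, and a casual test appears to pass, yet it contains a classic subtle bug (a mutable default argument) that an experienced programmer spots on sight and a prompt-driven novice has no way to recognize.

```python
# Hypothetical illustration: a function an AI assistant could plausibly
# produce. It looks fine and "works" on first use.

def add_tag(tag, tags=[]):      # BUG: the default list is created once,
    """Append a tag to a list of tags and return the list."""
    tags.append(tag)            # so it is silently shared across every
    return tags                 # call that relies on the default.

print(add_tag("urgent"))        # ['urgent']            -- looks correct
print(add_tag("billing"))       # ['urgent', 'billing'] -- state has leaked
                                # in from the previous call

# The idiomatic fix an experienced programmer would apply on sight:
def add_tag_fixed(tag, tags=None):
    """Append a tag, creating a fresh list per call when none is given."""
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

The point isn’t this particular bug; it’s that verifying such output requires exactly the tacit knowledge the prompt-driven coder never built.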

Those with experience don’t see this because they already have the experience. They cannot grasp the limited understanding of those with little experience and no in-depth education in the field.

This can be said of every complex field of human endeavor that requires a level of mastery gained from experience.

This superficial understanding of the entire system creates systemic weaknesses that few see. Once the last generation that had the skills, tacit knowledge and experience to remake the system from scratch, or modify it on a structural/systemic level, has retired or passed on, there’s nobody left with the capacity to manage crises that are either beyond the reach of AI or generated by AI.

AI threatens the process of passing on essential systemic, organizational knowledge, because that transmission requires expensive learning and mentoring, which are easily trimmed under the misguided belief that AI can do all of it.

So companies aren’t hiring interns, because AI can do the scut-work interns were paid to do, and do it far more cheaply.

The problem is that critical systemic knowledge is tacit knowledge, meaning it can only be acquired through long experience from the bottom up. This is why the founder’s offspring started out working on the assembly line and then moved to accounting, then to marketing, and so on: this was the only way to understand the company in any remotely useful fashion.

Having AI summarize the company’s operations teaches the future managers nothing of any real value.

This is why IBM required its new key employees to spend a full year in training before they began their actual jobs. This was not IBM foolishly choosing to waste money; it was an essential investment in the corporation’s future productivity and profits.

The current belief is that 1) AI will only get better in every way, with no limits, and 2) we can dispense with the training required to build a complete base of knowledge, because AI can do all the work more cheaply than humans.

That AI might have intrinsic limits is not acceptable to this mindset; neither is the possibility that humans still have capacities essential to complex organizations and real-world tasks, capacities that cannot be replaced by AI.

This stripping out of broad experiential knowledge is already playing out: AI Is Wrecking an Already Fragile Job Market for College Graduates (WSJ.com, paywalled)
“Companies have long leaned on entry-level workers to do grunt work that doubles as on-the-job training. Now ChatGPT and other bots can do many of those chores.”

“Ernst said his research shows that workers mostly learn through experience, and then the remainder comes from relationships and development. When AI can produce in seconds a report that previously would have taken a young employee days or weeks—teaching that person critical skills along the way—companies will have to learn to train that person differently.”

This chart shows how the unemployment rate of the most educated is rising faster than that of the less educated, as those aspects of white-collar work that can be automated are automated.

As companies no longer invest in training employees in anything other than how to use AI, the result is a cognitive elite: those few who managed to gain the required tacit knowledge of an entire field.

What Happens When Only A Few People Can Still Think: The Rise Of The Cognitive Elite (free)

Essayist Samo Burja takes all this one level deeper, exploring how this stripping out of real learning and systemic understanding, along with relentless compartmentalization, leads to a collapse few recognize even when it is well underway.

Why Civilizations Collapse (free)

“By fragmenting available knowledge, you can leverage information asymmetries to the intellectual or material advantage of the center. Some of this is necessary for scaling organizations beyond what socially connected networks can manage—but move too far towards compartmentalization, and it becomes impossible to accomplish the original mission of the organization.

Civilizational collapse, then, looks like this dynamic at the scale of an entire civilization: a low-grade but constant loss of capabilities and knowledge throughout the most critical parts of our institutions, that eventually degrades our ability to perpetuate society.

Avoiding collapse is so difficult because succession failure is often opaque. If the Institute of Pottery lost the ability to make good pots—to mold people into skilled pot makers—would they declare it to the world? Of course not—institutions are very rarely self-abolishing. The intellectual apocalypse is invisible if there are no true intellectuals around. Again, institutional failure typically comes as a surprise.

Loss of knowledge is especially damaging, since it accelerates the other aspects of collapse and ensures that they will be long-lasting.

Knowledge of these social technologies is highly compartmentalized and, as a result, they are not understood explicitly by all parts of society. This means that a society undergoing an Intellectual Dark Age doesn’t realize it is going through one at all—all the people who would notice are long-gone, and those who remain are miseducated, role-playing the forms left behind by their predecessors without realizing that they’ve lost the substance. Often not just the knowledge, but the socioeconomic niche that once fostered the creation of new social technology has been obliterated in all but name.”

I touched on this subtle erosion of experiential knowledge in discussing the catastrophic Lahaina Fire, which could be traced at least in part to an institutional loss of essential knowledge: since nobody in authority had experience dealing with fast-moving crises, they were ill-prepared to respond effectively.

When we rely on AI to perform all the bits and pieces, there’s no apparent value in investing in humans acquiring a complete, functionally useful knowledge of the entire system or field.

Once the last generation that knew enough to radically redesign the system (what Burja calls social technology) from the ground up is gone, what’s lost won’t be on anyone’s screens, because the assumption is that AI can redesign anything and everything from the ground up and make it stick.

This is akin to reckoning that a brief AI summary of “capitalism” replaces reading the three thick volumes of Braudel’s Civilization and Capitalism, 15th-18th Century, or that having AI compose a paper replaces reading all the source materials in the original and doing the hard work of writing an essay, which isn’t simply stringing together words and sentences; it is thinking in its most important and elemental processes.

I will end with two anecdotal observations.

1) Most of the AI uses I hear about are very circumscribed snippets: summaries, lists of marketing ideas, bits of code, a travel plan, a song or graphic, and so on. In terms of profound transformation, I personally know of only two cases, and both people are in their late 50s with a vast range of lived experience.

I conclude that AI can really only be used in a transformative way by those with decades of life experience, as this requires exploring the topic in a lengthy dialog drawing upon all that lived experience, a dialog that might stretch into hundreds of pages as the individual ponders the AI’s responses with the full power of their tacit knowledge and intuitive, gestalt thinking.

2) The assumption of the techno-idealists is that AI will conjure up a perfect solution to every human problem, toss it over the wall and humans will avidly put the solution into action.

That humans might need leaders, values, institutions, purpose, meaning, experiential knowledge and complex social technologies (all the things AI undermines, weakens or dismantles) to even begin taking any sort of collective action seems to be the water we’re swimming in that nobody even sees.

If we’re the last generation who knows how to do hard things on our own, beyond us lies an abyss.

