Experts, AI, and a New Social Contract

“We hear much these days about the remarkable new thinking machines. We are told that these machines can be made to take over much of men's thinking… eventually about the only economic value of brains left would be in the creative thinking of which they are capable…”

Psychologist J. P. Guilford, 1950

In a grand concert hall in the early 19th century, you could find a violin virtuoso hypnotizing the crowd with furious technique. Over time, history would come to uplift the composer of the symphony far more than the performer who merely executed it. That old shift – from valuing technical mastery to valuing creative direction and personal expression – is happening again before our eyes. Today, the “instruments” being played are programming languages, design software, data science, and a plethora of other technical skills that traditionally require training and formal education. The virtuoso experts who spent years mastering one “instrument” are finding themselves upstaged by a new kind of talent: the composer who can conduct AI tools in harmony to create something greater. We are leaving the Age of the Expert and entering the Age of the Composer, where the ability to synthesize, direct, and innovate across domains trumps narrow expertise.

The rise of advanced AI is reshaping what we value in work and society. In this (admittedly lengthy) post, I’ll examine the future of productivity, the fate of experts, the widening gap between the tech “haves” and “have-nots,” and whether humans will find purpose in a world with less traditional work. Drawing on everything from historical parallels to recent cultural signals, I’m going to make an honest attempt at an early-stage prediction of what our AI future will look like.

If you’re reading this in 2050:

Mission accomplished - knew I was onto something…

Value in an AI World

In past eras, being an expert specialist was the height of achievement. Masters of one craft – doctors, engineers, accountants, elite programmers – commanded great respect and were rewarded generously for it. Deep knowledge in a narrow domain was the ticket to the upper echelons of professional life. But technology has a way of flipping the script on which skills are valuable. When new tools appear that automate or augment a skill, the spotlight of value shifts upward: from execution to strategy, from technique to creativity, from the virtuoso to the composer.

Returning to our music analogy, let’s not forget that, in the early decades of the Romantic period, the world went mad for virtuosos. Audiences swooned at Paganini’s devilish violin caprices and Liszt’s piano prowess. They were the pop stars of their day, touring from city to city, dazzling crowds with heroic displays of skill. Even then, critics sniffed that sheer technique was “empty virtuosity”, impressive but lacking substance. True art, they said, came from the composers – those with creative vision and emotional depth – rather than the performers with “agile fingers and an empty head” (just like me fr). Today, we revere the composers of that era (Beethoven, Chopin, etc.) far more than the once-famous performers. The lesson: technical skill alone, however dazzling, can come to be taken for granted.

We saw a similar shift with the rise of calculation tools. There was a time when being a human “calculator” – like someone who could multiply 15,493 by 10,385 in their head – was a marketable skill. Entire job titles, like “computers”, were given to people (often women) who performed math by hand for governments and labs. Speed and accuracy in arithmetic were highly prized. When electronic calculators and computers arrived, that expertise was swiftly automated. Complex calculations that once took teams of humans weeks could be done by a machine in seconds. Overnight, mental math virtuosity went from impressive to irrelevant. The value shifted to knowing which problems to solve, which numbers to crunch; in other words, understanding and framing the problem became more important than doing the calculation itself.

Today’s generative AI is causing a seismic shift analogous to the calculator – but across many fields at once, which is what makes this newest tech revolution truly disruptive. A large language model can write code, draft legal contracts, compose music, design graphics, even diagnose medical images. Tasks that used to require years of study, apprenticeship, and practice are suddenly accessible to anyone who can describe what they want in plain language. In other words, AI is lowering the barrier of technical expertise in field after field. If you can articulate a vision, AI can help execute it.
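
To make that vision concrete, here is a minimal sketch of what “describe what you want in plain language” looks like in practice. It uses the OpenAI Python client as a stand-in (any modern LLM API would do); the model name and the prompt are purely illustrative assumptions, not a recommendation.

```python
# Minimal sketch: plain-language intent in, draft code out.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

spec = (
    "Write a Python function that reads a CSV of monthly sales and "
    "returns the three products with the fastest growth."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": spec}],
)

# The output is a draft to review, test, and integrate - not a finished product.
print(response.choices[0].message.content)
```

The human’s contribution in that loop is the spec and the judgment about whether the result is any good, which is exactly the “compositional” skill discussed next.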

This democratization of skill puts a new premium on what we might call “compositional” talent. It’s no longer so impressive to know the syntax of a programming language or the technical intricacies of graphic design software – an AI assistant can handle a lot of that. Instead, the impressive (and scarce) ability is knowing what to build, how to tie disciplines together, and why it matters. In career terms, the advantage shifts from the specialist to the generalist. As one 2025 future-of-work report put it: “As AI accelerates the process, replacing well-defined work, the defining skill of the modern era isn’t deep expertise in one narrow field. It’s the ability to think across disciplines, adapt to new challenges, and connect ideas… This is the age of the generalist.”

Generalists (at Stanford we called them product designers) excel by seeing the big picture and making the creative leaps that machines, confined to patterns of the past, wouldn’t think to make. They thrive in ambiguity and draw on knowledge from many areas. In practical terms, the new “composers” are those who can, for example, understand a customer need (design thinking, empathy), envision a solution that draws on multiple fields, and then use AI tools to generate pieces of that solution in code, art, writing, etc., synthesizing the pieces into a coherent product. They are tastemakers and orchestrators. If each narrow AI is like a virtuoso playing one instrument, the human composer is the one writing the symphony and conducting the ensemble.

Consider software development. Ten years ago, a star software engineer was valued for deep knowledge of data structures, mastery of algorithms, and the ability to write flawless code by hand. Today, code-generation AI can spit out decent code for many routine needs. The emerging valuable skill is knowing what to build: understanding user needs, product sense, and the architecture of systems – then integrating AI-generated modules and human oversight to make it all work. The “expert” coder who only knows one library deeply might find themselves outpaced by a more versatile technologist who knows a bit of design, a bit of business, and how to leverage AI to fill in technical gaps.

“This Time It’s Different”

Throughout history, technological advancements have always triggered dire predictions of mass job loss – yet often created new forms of work and greater productivity in the long run. The cotton gin, the assembly line, the computer: each obsoleted certain roles but ultimately gave rise to new industries and opportunities. So, is AI just another wave of automation we’ll adapt to, or is it fundamentally different?

Previous tech revolutions primarily automated manual labor and routine cognitive work. Machines took over repetitive, dangerous, or calculation-heavy tasks, augmenting human productivity. For example, 19th-century textile machines displaced weavers but created factory jobs; 20th-century office software eliminated typing pools and file clerks but created new roles in IT and data management. Historically, while some jobs were lost, others were created, and productivity gains eventually raised overall wealth (though not without painful transitions). Notably, many technologies complemented human labor rather than purely substituting for it (e.g. an excavator made a construction crew more productive rather than replacing all construction workers).

AI, however, has the potential to encroach on cognitive, creative, and decision-making tasks that were long considered the exclusive domain of humans. The breadth and speed of this encroachment is unprecedented. AI isn’t just one machine for one job; it’s a general-purpose capability that can be deployed in every industry, potentially affecting work from the factory floor to the C-suite. As researchers Acemoglu and Johnson noted in late 2023, “AI technology is relatively new, but the impacts of previous high-impact innovations like the steam engine and digital computer shed light on what could happen. When choices [about technology deployment] are left entirely in the hands of a small elite, expect that group to receive most of the benefits, while everyone else bears the costs.” In other words, who benefits from AI – and whether it’s used to complement workers or simply replace them – is a choice society will have to grapple with in the coming years.

Right now, many companies seem to be choosing a path of automation over augmentation. We already see AI being used to eliminate roles like copywriters, customer service reps, basic coders, and even parts of creative industries (like AI-generated marketing content). Sam Altman (czar of OpenAI, potential supreme leader of the world if GPT 5 becomes sentient) bluntly admitted, “Jobs are definitely going to go away, full stop.” White-collar professions once thought safe (like law, journalism, or medicine) have seen AI prototypes perform tasks like legal contract analysis or medical image diagnosis with superhuman speed. There are countless jobs with routine elements or writing tasks that are likely to disappear or be radically reshaped by AI.

If we follow the “just automate” path, the consequences will mirror the downsides of the last few decades of automation, but on WWE Super Smackdown mega steroids. Automation without safeguards hollows out middle-class jobs and concentrates wealth. We saw this with manufacturing in the late 20th century: industrial robots and offshoring reduced factory employment and contributed to a decline in stable, well-paid blue-collar jobs. In the U.S., manufacturing employment plunged and inequality surged from the 1980s onward. The gains went to capital owners and top executives, while many workers were left with lower-paying service jobs or dropped out of the labor force. If AI now does the same to many white-collar and creative professions, we could see an even more extreme polarization. Without intervention, AI could drive a greater gap between capital and labor, more inequality between the professional class and everyone else, and fewer good jobs overall. In blunt terms, the rich (who own AI companies or know how to leverage AI at scale) get richer, while many others find their skills devalued.

Aside from its scope, AI’s speed of impact could outpace our ability to adapt. In past revolutions, a generation or two had time to retrain and new roles emerged gradually. AI’s advances are coming in a rush. It’s conceivable that within a single decade, entire layers of skilled employment could be wiped out or radically changed, with new job creation (e.g. AI ethicist, prompt engineer, etc.) not keeping pace in quantity or accessibility. There’s also the fear that AI can do many new jobs it creates. For instance, if AI creates a boom in automated healthcare diagnostics, you might hire AI developers or data scientists – but guess what, AI itself can help write code and analyze data, reducing those opportunities too. A far cry from my CS106B lectures, this is recursive job loss manifest.

Fortunately, the future isn’t predetermined. Acemoglu and others advocate for a second path: use AI to augment human workers, not just replace them. In an augmentation scenario, AI would take over the tedious 80% of a job and leave the most human 20% to the worker – making them more productive and free to focus on creativity, interpersonal interaction, and high-level problem-solving (the 9 to 5 grind, but Pareto efficient). This could create new tasks and roles we haven’t even imagined, much as the computer revolution gave us everything from web designers to Uber drivers. For example, AI in medicine could handle analyzing symptoms and scanning literature, enabling doctors to spend more time on personalized patient care and complex ethical decisions. Teachers could use AI to grade routine assignments and develop tailored lesson plans, while they focus on mentoring and one-on-one coaching of students.

The optimistic view is that productivity gains from AI could be so massive that society overall becomes much wealthier, potentially allowing everyone to work less or in more fulfilling ways. This was foreseen by the economist John Maynard Keynes as far back as 1930 – he imagined that by now, we’d have a 15-hour workweek and oodles of leisure, thanks to technology. That hasn’t happened yet (instead we just keep raising consumption and expectations), but AI could finally force the issue: if much of today’s work can be done by machines, perhaps humans won’t need to hustle as much. If the wealth is shared, it could mean liberation from drudgery.

Will the wealth be shared? That’s the trillion-dollar question. So far, signs point to trouble: Big Tech companies are racing ahead, and the benefits of AI seem to be accruing to their shareholders and executives, not the displaced workers. We’re already seeing AI titans become even richer (the net worth of AI company founders and VCs is skyrocketing), while reports of layoffs in fields from publishing to programming are popping up.

Society may need deliberate interventions – new policies, social safety nets, and educational overhauls – to ensure this revolution doesn’t just create a new class of “AI overlords” and a large underclass. Ideas like universal basic income (UBI) have moved from fringe to mainstream discussion precisely because of AI. Tech leaders like Elon Musk and Sam Altman have endorsed UBI as a solution if AI eliminates huge numbers of jobs. Musk predicts “there’s a pretty good chance we end up with universal basic income, due to automation… I’m not sure what else one would do.” He even foresees an eventual world of universal high income, where AI-driven productivity is so great that everyone receives a comfortable living stipend – but he immediately flags the core issue: “It is less clear how we will find meaning in a world where work is optional.” We’ll revisit that existential question later.

AI is redefining productivity and skills at a breakneck pace. The value is flowing to those who can leverage compositional knowledge using AI tools and to those who control the AI platforms themselves. Meanwhile, many traditional experts may feel their worth eroding as their once-rare skills become automated features. As a society, we face a choice: double down on pure automation for short-term efficiency (risking greater inequality and social strife) or intentionally steer AI to complement human workers, retrain people for new generalist roles, and share the bounty (through mechanisms like shorter workweeks or UBI). The path we choose will deeply influence what we deem valuable in the future: Will human labor be valued only in residual niches that AI hasn’t taken, or will we value new forms of human contribution (creativity, empathy, community building) precisely because AI handles the rest?

One thing is clear: the age of the expert is ending. Just as calculators made mental arithmetic a party trick, AI is making a lot of white-collar expertise a commodity. The premium will go to those who can compose something meaningful and original out of the readily available building blocks.

That raises some profound questions about social structure, fairness, and purpose.

The New Haves and Have-Nots

If AI does indeed reduce the need for human labor on a broad scale, what happens to our classic economic ladder? The old promise of industrial and digital capitalism was that technology improves productivity, which grows the economy, which (with some lag) creates new jobs and opportunities. In recent decades, that ladder has been missing some rungs for many people. Wealth inequality has been widening, and social mobility has stagnated or declined in many countries. In the United States, socioeconomic status has actually become harder to change in the past 50 years. One long-term study found that intergenerational mobility declined substantially for those born after the 1940s, meaning a child born in a lower-income family in 1980 has a lower chance of outranking their parents’ status than a child born in 1940 did. That’s a striking reversal of the mid-20th-century trend when education and industrial growth boosted many into the middle class. The steadfast promise of the “American Dream” has become more of a dice roll for those born in the last 50 years.

History shows that in most eras, if you were born poor, you were likely to stay poor. The modern post-WWII era offered a rare boom of middle-class expansion, fueled by deliberate policies (like strong labor unions, education investment) and the fact that technology was complementing a lot of human labor (making workers more productive and thus able to be paid more). Unfortunately, since the 1980s, even before AI, we’ve seen technology and globalization hollow out stable jobs and funnel wealth toward capital owners. AI threatens to turbocharge that dynamic. If left unchecked, it could create an economy where a relatively small group of people who own the AI infrastructure (and the data and compute power) accumulate massive wealth and everyone else struggles to find work that pays living wages. We can’t all drive for Instacart; there’s just not enough Whole Foods 365 Organic Hummus to be delivered.

This fear is part of what’s driving serious discussions of universal basic income. The logic is: if the economy can produce abundant wealth with minimal human labor, why not ensure everyone benefits by giving all citizens a baseline income? It’s essentially distributing some of the AI dividend to the population, so that even if you personally don’t have a traditional “job,” you can still share in the prosperity and live a decent life. There are modern examples we can point to as a basis for this system: Alaska distributes a portion of its oil revenue to state residents through the annual Alaska Permanent Fund Dividend (PFD). The residents who bear the burden of oil projects also reap the rewards. Last year, that reward was $1,702 – a far cry from UBI, but a solid start toward that mindset. UBI would be a profound rethinking of the social contract – decoupling survival from employment.

Not long ago, UBI was a fringe idea, the stuff of sci-fi or ultra-utopian manifestos. Now it’s being taken seriously in policy circles and even piloted in the real world. This shift is partly because tech leaders themselves are endorsing it. It’s both ironic and fitting: the same barons who automated away jobs are proposing UBI as a salve (whether out of altruism or as a way to quell the pitchforks is up for debate). Essentially, UBI is doomsday prep for tech billionaires. In the past few years, dozens of UBI experiments have been launched around the world, including the largest-ever pilot in the U.S. backed by OpenAI’s CEO Sam Altman. The results have been cautiously promising: in one three-year trial, $1,000 monthly stipends given to low-income Americans led people to spend more on essentials like rent and food, without making them drop out of the workforce en masse. In fact, recipients only reduced their working hours slightly, often to pursue education or better align work with their lives. More importantly, the cash provided flexibility and autonomy, a buffer against the constant precarity many face.

Let’s imagine for a moment a future where UBI or some variant is in place. Suppose every adult gets a basic stipend sufficient for housing, food, and essentials. If you choose not to work, you won’t live luxuriously, but you won’t starve or be homeless. This might address poverty, but it raises new questions: What about ambition? What about meaning? In today’s ethos (especially in places like the U.S.), there’s a deeply ingrained idea that through hard work, you can improve your lot – get a better house, provide more for your kids, move “up.” If basic needs are met but upward mobility is frozen (because the only “jobs” are either token assignments or gig work paying a bit extra, while the big money accrues to AI owners), do we risk creating a stagnant class system? Historically, when societies had rigid class lines – e.g. aristocracies vs peasants – it bred resentment, unrest, or societal stagnation.

On the other hand, maybe we invent new ladders to climb (ones that aren’t pegged to a paycheck). In a post-AI world, maybe status comes from creative feats, community impact, or deep mastery in things that don’t scream monetizable. Unfortunately, once you turn purpose into points, you're halfway to a social credit score. Although the end state is hard to imagine, it makes sense that people will pour their ambition into fields like art, research, volunteering, or community leadership, rather than climbing a corporate ladder, because the corporate ladder might effectively be gone for large swaths of people.

Redirecting ambition toward new pursuits will require a significant cultural shift. Right now, we’re seeing some ominous cultural signals. Instead of uniting to demand fair distribution of AI’s benefits, society seems to be fracturing along new lines. There’s a palpable tension between “blue-collar” and “white-collar” narratives emerging. In political rhetoric, especially in the U.S., there’s a trend of valorizing traditional labor (manufacturing, trades) and expressing a kind of schadenfreude at the plight of professionals facing AI disruption.

For instance, recent political discourse has included ideas like eliminating taxes on tips and overtime pay (to favor service and hourly workers) while tech layoffs are met with a shrug. In 2025, President Trump even announced a plan to strip $3 billion in federal funds from Harvard and give it to trade schools, framing it as standing up for the working class against “elitist” institutions. At the same time, a Republican-backed budget proposal threatened to slash financial aid for hundreds of thousands of low-income college students. The message being sent is: “College is a scam, intellectual elites are suspect, and the real honest work is with your hands.” This resonates with a portion of the public because indeed the past decades did sideline many manufacturing workers, and there’s a corrective desire to uplift those undervalued jobs (which are absolutely essential).

Here’s the thing… celebrating the loss of white-collar jobs or turning society against the intellectual class is dangerous territory. History provides grisly examples of revolts against intellectuals – from China’s Cultural Revolution, where teachers and scholars were humiliated and beaten, to the Khmer Rouge’s atrocities in Cambodia, where merely wearing glasses (a proxy for being educated) could mark one for death. The current climate is nowhere near that extreme, of course, but the rhetoric of anti-intellectualism is rising. A 2019 Pew survey found that 59% of Republicans and right-leaning Americans believed colleges have a negative effect on the country, a huge shift from just a few years prior. The complaints ranged from universities being “too liberal” to not teaching useful skills. The result is a growing skepticism about the value of higher education and “brainy” careers.

Now, consider what happens if AI suddenly displaces a lot of those “brainy” workers – say many programmers, analysts, even middle managers. Rather than sympathy, we might see a societal response of “Good riddance, welcome back to the real world.” Already, we see memes celebrating tech layoffs or the idea of coders having to learn plumbing. This inversion – white-collar precarity alongside a romanticizing of blue-collar work – is something new in the modern era.

Why is this concerning? Because ideally, we want a society that values all contributions and doesn’t scapegoat any group. Intellectual and scientific labor is what drives innovation (including the AI we’re talking about), and a society that vilifies its thinkers is setting itself up to fall behind or make irrational decisions. Yes, Big Tech has a lot to answer for in terms of harms (more on that in the next section), and yes, practical trades deserve far more respect than they’ve gotten in recent decades. That being said, I don’t think the solution is to flip the hierarchy and dunk the nerds in the toilet – it’s to redefine value altogether.

Imagine a future where baseline living standards are guaranteed (through UBI or similar), and the distinction between “haves” and “have-nots” is not about income but about purpose and fulfillment. The new inequality could be between those who find meaningful ways to spend their time and engage with their communities, versus those who fall into aimlessness and despair. Social mobility might be measured not by moving from one income class to another, but by moving from isolation to community, or from consumer to creator.

To avoid sounding too utopian, let’s acknowledge the hard economic reality: the transition could be very rocky. In the near term, we could have both unemployment/underemployment AND labor shortages in certain areas. Paradoxically, even as AI automates jobs, demographics (aging populations) might mean there aren’t enough people for jobs in elder care, skilled trades, etc., which AI can’t fully do yet. Some people will be forced to switch careers, maybe from an office job to a hands-on caregiving or repair job, which is a hard adjustment especially if done mid-life. The question of “mobility” then becomes: can a displaced AI worker realistically jump to another field? Historically, moving between classes was tough; in the future, moving between professions might be the challenge, especially if specialized training is needed and our education system isn’t set up for rapid reskilling.

Let’s run through a thought experiment. Imagine you’re a white-collar worker in 2025 – you may have nailed the AP tests, got the 4.0, secured an elite college degree, and grinded in your early career just to be replaced with an AI that was coded by a classmate you did a keg stand with. Now you are looking for a job that has yet to be replaced by AI (let’s say a lawnmowing job at a landscaping company). That job belongs to somebody, somebody who likely didn’t go to that same elite college and who may have taken it easy in school, but now you both are competing in the job market. The incumbent employee has experience, tenure, and perhaps even a newfound sense of ambition and purpose (and, if you were in their shoes, you’d want the opportunity to succeed even if you were slow to launch your career). But you, the formerly elite white-collar employee, have more breadth and much greater accomplishments on paper... Sorry, but the boss doesn’t need you to run a SQL query; he needs you to mow grass. Who deserves the job? That will be a very difficult question to answer in the transitional years.

Policy responses could mitigate the inequality: for example, funding massive retraining programs, incentivizing sectors that AI can’t easily replace (like teaching, nursing, green energy installation), or even creating new categories of public service jobs. Some have suggested a federally guaranteed jobs program (if the private sector doesn’t need you, the public sector could pay you to do useful community work). These ideas will likely gain traction if unemployment rises.

Finally, we must consider the global inequality aspect. We’ve been talking largely about advanced economies. In developing countries, a lot of employment is low-wage manual or routine cognitive work that could be automated – think garment factory workers, call center operators. If AI plus robotics reaches a point where those can be done with minimal human labor, it could undercut the entire development model of the last 50 years (where poorer countries climb the ladder by doing labor-intensive manufacturing or services for richer ones). This could cause a huge global upheaval, where nations that haven’t yet become rich find that the escalator has been turned off just as they stepped on. The “have vs have-not” issue will play out not just within countries but between them: AI could widen the gap between high-tech nations and those without tech infrastructure.

Without conscious effort, AI could create a New Gilded Age: extreme wealth for a few, basic subsistence for the masses, and an erosion of the dignity and purpose that work has provided for generations.

We stand at a crossroads. The choices we make in the next decade – about taxation of AI-driven wealth, about education and retraining, about social safety nets like UBI, and about how we culturally frame work and merit – will determine whether AI becomes a great equalizer or the greatest driver of inequality yet. The Age of the Composer shouldn’t just refer to a few clever generalists who know how to use AI; it must also mean we compose a new societal harmony, preventing dissonance between the tech elite and everyone else.

The Fall of Nerd Nation

In the late 2000s and early 2010s, “nerd culture” had a triumphant rise. Tech founders in hoodies were folk heroes, Silicon Valley was the land of innovation, and being brainy (especially in computing) became cool. I remember being drawn to Stanford in those years partly because of this aura – the idea that the geeks in flip flops had inherited the earth, and were using code and data to remake the world. The implicit social contract was: study hard, learn technical skills, and you’ll be rewarded both financially and with societal respect. This was the era of the Expert in its modern form – the software engineer, the data scientist, the quantitative analyst.

How quickly things change.

The very successes of “nerd nation” sowed the seeds of disillusionment. Big Tech’s shine began to dim in the late 2010s as the downsides of social media, surveillance capitalism, and digital misinformation came to light. The same companies that once championed an open, connected world became associated with privacy scandals, monopolistic practices, and content censorship or propaganda – often flipping their values whenever convenient. Recall how some social media platforms one moment promoted maximal “free speech,” then later, under pressure, aggressively moderated content – only to face backlash from the other side claiming bias (then we see Zuckerberg doing Jiu Jitsu and peeling back Meta’s content moderation standards). This inconsistency bred cynicism about tech leaders’ true principles (if they exist).

Now, as AI rises, it ironically threatens to dethrone many of the very people who created the digital revolution. A software engineer who spent 10 years at Google mastering a specific system might find an AI can code a better solution in a weekend. A content moderator or data labeler – already an underpaid “cog in the machine” – might find themselves simply not needed when the machine can learn from raw data with unsupervised techniques. Even highly paid roles like UX designer or product manager could be augmented or replaced by AI-driven tools that A/B test and refine interfaces automatically.

The white-collar workforce is feeling a new kind of insecurity that blue-collar workers knew all too well in the 1980s. And here’s the twist: there is a strain of public opinion that seems to be enjoying this comeuppance for the college-educated class. You see it in political rhetoric that pits “real hardworking Americans” against “elitist professionals,” and in the newfound focus on bringing back manufacturing or trade skills while deriding the liberal arts degree. President Trump’s ongoing “war on Harvard” exemplifies this. There’s also the populist refrain that “college is a scam.” Certainly, the cost of higher education and the student debt crisis lend some credence to that claim for many individuals – a college degree no longer guarantees a good job, yet can leave one with crushing debt. The value proposition of a traditional four-year college is under scrutiny. We’re seeing enrollment declines and more young people considering alternatives (coding bootcamps, trades, starting businesses, etc.). Some in media and politics cheer this, saying universities have become indoctrination factories and that we need more welders, electricians, nurses – practical jobs – instead of, say, more sociology majors.

It’s true we need more skilled trades and that not everyone needs to pursue academia, but I worry about the undercurrent of anti-intellectualism in some of this discourse. If AI is going to take over many tasks, what humans will uniquely contribute is precisely judgment, ethics, creativity, interdisciplinary thinking, etc. – all things nurtured by broad education and intellectual exploration. If we discourage an entire generation from higher learning and critical thinking training, we might be kneecapping our ability to adapt to the AI era wisely.

Historically, when societies turn against their intellectuals, it often precedes very dark times. Intellectuals can act as the conscience of society, the questioners of authority, and the source of new ideas to get out of crises. Losing trust in them wholesale is dangerous. And yet, we must also acknowledge that the intellectual/tech class did a lot to lose trust: they created social media platforms that harmed mental health and democracy (more on that soon), they often behaved arrogantly or seemed detached from the struggles of working-class people, and they accrued enormous wealth in the process. So the hate isn’t necessarily coming from nowhere.

The challenge ahead is bridging this divide. If we celebrate one group’s job losses and create a politics of revenge against “the smug tech bros” or “the ivory tower academics,” we’re cheering on societal self-harm.

Guess what: the billionaire class that truly runs things is probably not affected by this infighting – they might even benefit from it by redirecting anger away from themselves. While we fight culture wars over who’s more deserving – the coal miner or the coder – the yacht-dwelling billionaire who funded the AI that replaced both jobs sails on undisturbed, wealthier than ever.

In an AI-driven future, we need a coalition of both blue-collar and white-collar workers (and those entirely out of work) to demand a fair deal. We need society to value human beings – whether their work is with hands or with brains – and ensure technology benefits all. That means valuing intelligence and expertise, but coupling it with humility and empathy. It also means valuing manual and care labor far more than we have, recognizing the intelligence and skill in those, too.

To me, as someone who went through Stanford and believed in the “change the world with tech” mantra, this moment is humbling. I, too, face the disillusionment of applying to jobs I feel qualified for and getting nowhere – a feeling many have when the market shifts under their feet. It’s easy to become bitter and say “well, maybe all that schooling was pointless.” But I still firmly believe that education (in the broad, perspective-building sense) is incredibly important, perhaps even more so now. College (and learning in general) isn’t just about job training; it’s about expanding one’s understanding of the world, learning to think critically, encountering diverse people and ideas. Those are exactly the qualities we need in an era where AI can churn out endless information: we need people who can scrutinize, contextualize, and create meaning from that information.

One trend I notice is that even as some leaders downplay intellectuals, they still invest heavily in their own children’s education and in high-tech industries. It’s a bit of a contradiction: “College is a scam for you, but my kids will be going to top schools; white-collar jobs are overrated, but we will subsidize semiconductor factories requiring lots of engineers.” It suggests that at least some of this rhetoric is more about politics than genuine belief.

On an international scale, countries like China are heavily investing in AI, scientific research, and higher education. If Western nations embrace anti-intellectual populism too hard, they may simply fall behind in innovation. Even pragmatically, devaluing intelligence is risky.

Will society continue to value intelligence? I suspect we’ll see a bifurcation. In some circles, yes: people will deeply value those who have real expertise (especially as misinformation floods the world – experts might become rarer beacons of truth). In other circles, there may be a growing romanticism for tangible skills and distrust of abstract knowledge. Possibly, the pendulum will swing a bit towards valuing those who can do things AI can’t (e.g. applying emotional intelligence, physical craftsmanship, leadership, ethical judgment). As long as we don’t throw the baby out with the bathwater, that may be a healthy correction.

I hope we arrive at a more integrated value system: one that appreciates the truck driver and the data scientist, the poet and the plumber, recognizing that each brings something unique. In the Age of the Composer, the winners will be those who combine multiple domains of knowledge. That’s a strong argument for more education (formal or not), not less – but perhaps a different kind of education that includes hands-on skills, ethics, teamwork, and creativity, rather than siloed specialization.

It’s often said that innovation happens at the intersections of disciplines. The composer, conducting the ensemble, literally stands at the intersection of the orchestra’s sections. That is basically the opposite of the caricature of the isolated, ivory-tower genius. It’s a more holistic intelligence, one that might be more palatable and obviously useful to society at large.

The era of blindly worshipping the tech elite is over – and that’s fine. We should replace it with respect for thoughtful, knowledgeable people who use their skills for good. We should absolutely uplift and invest in the jobs AI can’t replace (teachers, healthcare workers, trades), but we shouldn’t revel in the displacement of others or promote anti-intellectualism. We will need all hands and all minds on deck to navigate the AI revolution – the coal miner’s practicality, the coder’s analytical mind, the philosopher’s ethics, the artist’s creativity. If we start treating any of these as enemies, we’ll all lose.

The Future of Critical Thinking

We need to address an immediate concern: How will AI affect the brains of the next generation? If students can offload tedious tasks to AI (which on the face of it sounds great), will they still learn the underlying skills and critical thinking necessary to be effective “composers” in the future? Or will they become so dependent on AI that they lose the ability to solve problems from first principles?

Think of the classic scenario: a math student frustrated on a tough algebra problem. In the past, they might struggle, give up, come back, and finally have that “aha!” moment where it clicks. That process builds patience, logic, and resilience. Now imagine the same student can simply ask an AI to solve it. Poof, answer given. Why struggle? The temptation to alleviate frustration with a quick AI fix is huge. But what’s lost is not just knowledge of that algebra problem – it’s the practice of thinking hard.

We saw a smaller version of this with calculators: once calculators became common, educators stopped emphasizing manual arithmetic. That was fine to an extent (who really needs to do long division by hand daily?) as long as students still learned concepts and how to set up problems. But even with calculators, teachers insisted students learn the basics first (like multiplication tables) to have number sense. With AI, the “basics” of many fields (writing, researching, coding, drawing) could be bypassed. What do we insist students still learn by heart, and what do we let AI handle? This is a pedagogical debate happening right now.

Some early evidence is worrisome: A recent study nicknamed the “Metacognitive Laziness” study found that students who had access to ChatGPT for an assignment indeed wrote somewhat better essays, but they did not learn the material any better. They engaged less with the source texts and focused more on interacting with the AI. The researchers observed that these students were prone to copy-pasting AI output and skipping the deeper cognitive tasks. They termed it exactly that: potential metacognitive laziness – an over-reliance on AI such that the students were offloading thinking and not engaging in synthesis or analysis themselves. In contrast, students who had only minimal help (like a checklist) actually paid more attention to the content and remained more motivated.

This doesn’t mean AI will make us lazy thinkers, but it’s a clear warning: if we allow unstructured use of AI in learning, students might get through assignments without truly understanding them. Over time, could this produce a generation of people who expect answers on-demand and have never built the mental muscles for difficult reasoning? Imagine trying to problem-solve in a novel situation but you’ve never actually struggled through problems before – you might not know where to start.

Don’t worry - there’s a silver lining to this. AI has tremendous promise as a teaching tool if used correctly. Adaptive learning systems, AI tutors, and personalized feedback could revolutionize education in a positive way. For example, an AI tutor doesn’t get tired or frustrated; it can explain a concept 10 different ways until the student gets it, something a single human teacher with 30 students can’t do. There was an experiment at Stanford where an AI, “Tutor CoPilot”, helped human tutors by suggesting guiding questions to ask students. The result was improved student performance in math and it particularly helped less-experienced tutors become more effective. AI can augment educators, essentially making average teachers into master teachers by providing real-time suggestions and insights. This is augmentation at its best. It could tailor the pace: speeding through what you grasp easily, and giving extra practice where you struggle, a dream for differentiated instruction. This could democratize education in a profound way. Think about it: a student anywhere in the world with an internet connection could have access to a personal tutor for almost any subject, at any time, perfectly tailored to their interests, background, and cultural context. That’s hugely powerful for closing equity gaps.
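
As a small illustration of that augmentation pattern (a hypothetical sketch, not the actual Tutor CoPilot system, whose design I’m not privy to), a guiding-question tutor can be as simple as a system prompt that forbids handing over the answer. The client, model name, and prompt below are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a "guiding questions" tutor, not the real Tutor CoPilot.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

TUTOR_STYLE = (
    "You are a patient math tutor. Never state the final answer. "
    "Offer one guiding question or hint at a time, and rephrase your "
    "explanation if the student seems stuck."
)

def tutor_reply(student_message: str) -> str:
    """Return a hint-style response that nudges the student toward the answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": TUTOR_STYLE},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("I don't get how to solve 3x + 5 = 20."))
```

The point of the sketch is the constraint, not the plumbing: the AI is configured to keep the student thinking rather than to think for them.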

The ideal scenario is that AI handles the rote parts of learning and assessment (grading quizzes, providing example problems, etc.) while teachers and students focus on discussion, big ideas, and hands-on projects. Instead of writing a boilerplate five-paragraph essay (which an AI can do), maybe students spend more time brainstorming original story ideas, or doing field experiments, or learning to ask good questions. In other words, education could shift to emphasize critical thinking, creativity, and the human touch.

For this to happen, we need to consciously redesign curricula and evaluation methods. If we just keep assigning the same tasks as pre-AI and expect students not to use AI, that’s both naive and misses the opportunity. We likely need to incorporate AI into learning and create new tasks that require human insight. For example, instead of a generic essay on a known topic (which an AI can generate), assignments might involve personal reflection, local community research, or interactive presentations – things that are unique to the student’s experience or require real-world engagement. Oral exams and one-on-one interviews might make a comeback, since you can’t as easily fake those with AI.

There’s also a case to be made for teaching more philosophy and ethics in a world where factual recall is less important. If any student can ask “What’s the summary of Hamlet?” and get a coherent answer from ChatGPT, maybe the value is in discussing “Do Hamlet’s dilemmas still apply today?” or “What would you do in his place?” – questions with no clear right answer, which require the student to form an opinion and support it. Those kinds of critical thinking and communication skills become even more paramount.

I also think resilience and frustration tolerance should intentionally be cultivated. We might use AI in education but also have “AI-free” exercises that simulate challenge. It’s like how athletes train with resistance to get stronger; maybe sometimes we don’t use the calculator or the AI, just to practice doing it the hard way and learn from that process. There could even be educational games where AI is limited or acts as an opponent rather than a helper, to push students’ problem-solving.

The broader point is: we risk a crutch, but we also have a prosthetic. Used poorly, AI will be a mental crutch that stunts development. Used wisely, it’s like a prosthetic that extends our mental reach. A calculator doesn’t make you bad at math if you first learned the concepts – instead it allows you to do more complex problems faster. Similarly, a student who knows how to think could use AI to test more ideas, gather information rapidly, and explore more territory. They have to remain in charge of the process, though, not just passively accepting AI output.

This is why critical thinking (the ability to question, verify, and contextualize information) is more important than ever. In an era of deepfakes and AI-generated misinformation, teaching students to ask “Where is this information from? Is it credible? What might be missing?” is vital.

Somewhere between the sticky notes and the whiteboards at Stanford’s d.school, I was baptized in design thinking. One of the key steps is Define the Problem (deeply understanding the issue before jumping to solutions). I suspect education will need to focus on that: how to formulate good questions and problems. If you can’t do that, AI might not help you because you won’t know what to even ask it for. In contrast, a student who learns to think in systems, empathize with users/people, and frame challenges will wield AI like a power tool, cutting through obstacles, whereas someone without those skills might be sitting idle with a powerful tool but no idea what to build.

Yearning for the Mines

What is the role of work in human life?

For centuries, many of us have derived purpose, identity, and structure from our jobs or crafts. The old Protestant work ethic framed work as a virtue in itself, almost a spiritual duty. Modern secular society still often equates having a career with having a meaningful life. We fear idleness – think of the warnings that unemployment leads to vice (the “bum” trope), or the stereotype of the retiree who dies shortly after stopping work because they “lost their purpose.”

Is this need for work intrinsic to humans, or is it a product of a society where survival depended on labor and where we’ve been culturally programmed to equate productivity with worth? If AI and automation reduce the need for human labor dramatically, we may collectively undergo a sort of existential withdrawal. Will we feel relieved and liberated, or anxious and aimless?

One of my favorite phrases, “I yearn for the mines”, captures an interesting paradox: after generations of laboring and even complaining about work, would we actually miss work if it was gone? There’s a story often told about prisoners or soldiers who, after being in a regimented environment for years, struggle with freedom and sometimes even commit a petty crime just to go back to the structure of prison or the military. Similarly, some workers who spent decades in tough jobs find themselves in retirement itching to do something – some even take up new jobs or volunteer because endless leisure wasn’t what they hoped.

A vivid fictional exploration of this idea comes from Ian McEwan’s novel Machines Like Me. He imagined a near-future where AI and automation provide abundant free time, and wondered what people would do: “We could become slaves of time without purpose. Then what? A general renaissance... love, friendship, philosophy, art and science... But genteel recreations wouldn’t be for everyone. Violent crime had its attractions too... gambling, drink and drugs, even boredom and depression.” In other words, some might thrive in self-directed leisure, but many might flounder. McEwan’s character muses that without the daily purpose of work, society could split into those who elevate themselves and those who descend into destructive behaviors.

There’s psychological evidence backing this concern. Research has shown that unemployment often correlates with depression, loss of self-esteem, and a sense of social isolation. It’s not just about money; it’s about feeling needed and useful. If that’s true for most people, a world with vastly less work could be a very unhappy world – unless we redefine “work” to include other purposeful activities. We might need to distinguish work as a job (to earn money) from work as a purposeful activity. If AI handles all the necessary jobs, humans could still engage in work-like pursuits – creative projects, raising families, learning, hobbies, community service – which might fulfill the same psychological needs. Historically, leisure has often been the privilege of the rich, and they found purpose in things like arts, philanthropy, or politics (think of aristocrats who became patrons of science or built museums). If AI makes us all “rich” in the sense of having time, we all might need to become a bit like Renaissance patrons or polymaths, cultivating interests and talents.

One hope of mine is that creative and performing arts might surge in value. Consider this: if an AI can generate 10,000 technically perfect paintings in the style of da Vinci, the market will be flooded with flawless images. They might be beautiful, but knowing they were generated by an algorithm in seconds could make them less meaningful to us. Meanwhile, a single painting by a human, with a human story behind it, might become even more cherished – because it’s human. We already see glimmers of this: in studies where people were shown artworks and told either a human or an AI made them (when in fact all were AI-made), they consistently rated the “human-made” pieces as more appealing and valuable – the same painting was liked less the moment it was labeled AI-created. We have a bias favoring art we believe came from human effort and creativity; we inherently value the “imbuement of human experience.” The knowledge that a real person with feelings and struggles made something adds a layer of profundity.

More great examples of human-created art here.

This implies that in a future saturated with perfect machine outputs, the scarcity will be authentic human creation. Live theater might become more valued than Hollywood CGI spectacles, handmade crafts might command a premium over 3D-printed products, and live human-to-human services (like guided tours, personal coaching, artisan experiences) might flourish even as automated options exist. Think about vinyl record enthusiasts in the age of digital music – they cherish the tangible, imperfect, human aspect of the older medium. Similarly, we might see a countercultural trend of “human-made” as a luxury or at least a mark of authenticity.

Another area is physical experiences and sports. We could have robot athletes that outperform humans in every way, but we still watch the Olympics and not Robot Wars (well, some watch both, but the human drama of sports is irreplaceable). The narratives of human struggle, triumph, and even failure are what inspire us, not just the raw performance. Even if a robot could run a perfect football strategy, people will still fill stadiums to watch humans play, precisely because they know how hard it is and that not everyone can do it.

This touches on a key point: we value things in part because of their difficulty and exclusivity. If AI makes everything easy, then what’s left to admire? Perhaps only those things that remain hard for humans despite AI. Creativity, empathy, and original thought might become the “sports” of the future – the domains where we push ourselves because we want the challenge, even if AI could do it for us. In education, for instance, one could cheat on every assignment with AI, but maybe we’ll create new kinds of problems that require personal insight or physical presence to solve, just so that students still have to cultivate their minds and critical thinking skills.

It’s worth asking: were we always working just because capitalism forced us to, or do humans inherently seek challenge? I suspect it’s both. Many people throughout history would not have chosen to work 14-hour days in coal mines if they hadn’t desperately needed to feed their families. If freed from that, they likely wouldn’t “yearn for the mines” specifically, but they might yearn for belonging (the camaraderie of the workplace), for routine, and for the pride of contributing. Those are the aspects we’ll need to replicate in other ways.

Some thinkers propose that we’ll shift to a “post-work” society where creativity, learning, and play (one of my favorite concepts) become the center of human lives. Perhaps a basic civic expectation will be that you engage in your community or pursue an art or help care for others – not because you need a paycheck, but because that’s how you earn social esteem and keep yourself occupied meaningfully. It’s interesting to note that in affluent communities today where people don’t need to work (say, among some retirees or wealthy individuals), many do philanthropic or community activities. It’s as if we instinctively find something to do that feels useful.

That said, there’s also a risk of boredom and mental health crises. Already, we see how aimless scrolling on social media, a form of attempting to fill leisure time, can make people miserable. With unlimited leisure, some might become “slaves of time without purpose,” drifting in virtual reality or substance use. We absolutely will need to address mental health and provide avenues for people to find purpose. This could be a huge role for education and community organizations in the future: teaching people how to cultivate their passions, how to structure their day in self-driven ways, and how to find meaning outside of a job title.

One scenario is that new forms of “work” emerge that we don’t currently define as work. For example, being a good neighbor or caring for family might become more recognized and even incentivized. Some countries are already experimenting with stipends for caregiving or volunteering. In a post-AI society, perhaps volunteering to mentor youth or help in environmental projects could come with social credits (a bit unsettling, but pragmatic) or simply the kind of community appreciation that fulfills the role a paycheck and a boss’s approval used to play.

My vision for our future with AI is an optimistic one:

One where we are essentially living like ultra high-tech villagers: small communities where the “plumbing” of society is handled by invisible AI infrastructure (plentiful energy, automated agriculture, AI-driven medical research, etc.), and people are free to do “villager” things (gather, create, perform, exchange ideas) supported by technology but not dominated by corporate work structures. It’s like a return to a pre-industrial lifestyle in terms of social relations, but with the benefits of advanced civilization (medicine, global knowledge, etc.). This resonates with a sort of paradoxical thought: maybe AI will force us to refocus on the human. When Netflix and endless video games were new, they were novel and addictive; but if entertainment is completely saturated by AI content, perhaps going to a real local theater or a live concert in the park becomes more appealing again. If cheap manufactured goods are all generic, perhaps handmade local art gains value. In that sense, AI could indirectly spur a cultural renaissance of human creative expression, valued precisely because it is ephemeral and authentic (it’s not a coincidence that this website’s namesake is Ephemera).

This optimistic vision requires that we survive the transition with social fabric intact. It will require intentional effort to build communities and give people opportunities to contribute outside traditional jobs. It may also require a psychological shift: learning to see leisure not as laziness but as an opportunity for growth. We might even resurrect old philosophies – Aristotelian ideas of leisure as the basis of culture, or the concept of “ikigai”.

No matter what, it is essential that we don’t allow the same tech barons who disrupted our social fabric with one technology to simply dictate the terms of the next.

A Human-Centric Future

In an AI-saturated world, the human role shifts from performer to composer, from worker ant to solution architect. What might society look like once this shift matures, and how do we ensure it’s a symphony and not a cacophony?

Picture a day in the life in, say, 2040 (if we navigate things well): You wake up in a community where AI manages the background infrastructure – energy, transportation, basic services – so efficiently you barely notice it. Your fridge is stocked via automated deliveries, and electricity is cheap and green thanks to AI-optimized grids. You don’t have a “job” in the old sense to rush off to. Instead, you might have a project you’re leading – perhaps coordinating a neighborhood garden initiative, contributing to an open-source medical research effort, or crafting bespoke furniture by hand (which your neighbors value because it’s uniquely human-made). Others in your community are doing their own projects, and many involve gathering in person: maybe there’s a morning workshop at the community center to learn a new craft or language (with AI tutors assisting as needed). In the afternoon, there’s a local theater performance written by a local playwright, and even though an AI could have generated a play, everyone shows up because it’s their neighbors on stage, expressing real emotions.

Most importantly, people have time for each other. With AI taking care of the mundane daily tasks and providing abundance, people spend more hours per week in social activities, strengthening the bonds that frayed during the 21st-century social media era. After the lonely, polarized decades of Facebook and TikTok, society discovered that nothing replaces face-to-face community (shoutout to my modern UI-based attempt at addressing this). In a twist of fate, AI’s advent made us realize we had to double down on humanity. Social media, which promised connection but often delivered isolation and division, was a lesson, and we didn’t make that mistake again with AI: this time, we put guardrails and ethics around the tech.

That scenario sounds idyllic, but it requires learning from our recent past. Let’s be frank: the tech industry ran fast and broke things – including trust, privacy, and in some cases social cohesion. Social media platforms, in pursuit of engagement, spied on us, harvested our personal data, sold it to advertisers or political manipulators, amplified propaganda, and gave many of us body-image issues or anxiety. Each of those problems is well-documented: Facebook’s data was exploited to “target inner demons” of voters in election campaigns (a whistleblower’s words on the Cambridge Analytica scandal, where 50 million profiles were harvested without consent); Russian operatives used social platforms for “information warfare” to destabilize elections; Instagram’s own internal research showed it makes one in three teen girls feel worse about their bodies, exacerbating anxiety and depression. These are not the outcomes we signed up for when we embraced new technology. They were the results of leaving a powerful innovation in the hands of a few unregulated entities whose incentives didn’t align with the public good.

Now with AI, we cannot afford to repeat that. The stakes are even higher. Social media, for all its impact, mainly influenced our thoughts and feelings; AI will influence every aspect of material life – jobs, safety, geopolitics. If we let the same pattern play out, a “Wild West” of AI in which a small elite deploys it however they can profit and we scramble only later to contain the damage, the consequences could be far worse. We might end up with AI-driven surveillance states, proliferating autonomous weapons, or algorithms controlling financial systems with no accountability. Inequality could skyrocket to a breaking point.

It’s imperative that new, more grounded and regulated players come to the forefront in AI development. Governments, international organizations, and academia need to be involved, not just Big Tech. There are encouraging signs: discussions about AI ethics and regulation are happening worldwide. The EU is pushing an AI Act to impose safety and transparency requirements. Researchers are calling for audits of AI models. Even some tech CEOs are asking for regulation (and I trust that they have an insider understanding of how disruptive this could be).

At the d.school, we were taught to start with empathy – understand the real human needs before devising a solution – and to iterate solutions with the user in mind at every step. The mantra is to be solution-agnostic: don’t wed yourself to a specific technology or idea; instead, keep your eyes on the need or problem, and be willing to use whatever tool or approach serves it best. This mindset is exactly what the Age of the Composer calls for.

A composer doesn’t force an oboe to play where a flute is needed; they choose instruments based on the piece they want to create. Similarly, a human-centered designer in the AI era will not use AI just because it’s cool; they will use it because it actually solves the problem for people – or they will deliberately not use AI in areas where a human touch is more beneficial.

I foresee that the most successful “composers” of products and services in the coming years will be those who blend technical knowledge with deep empathy and ethics. They will ask: “Is this AI product truly helping people? What unintended side effects could it have? How do we make it equitable and accessible?” These are questions the original social media disruptors didn’t ask until too late. The new generation of creators must ask them from day one.

Personally, as someone who straddles the worlds of technology and human-centered design, I feel optimistic that we can create an AI-integrated society that is more humane than the one we have now. Think about healthcare: AI might crunch data and suggest treatments, but the final mile could be a human doctor using those insights to connect with a patient in a compassionate, personalized way, something that today’s assembly-line medicine has eroded. Think about city planning: AI simulates endless scenarios for housing and transit, but city councils use those simulations to make informed decisions that reflect community values, not just efficiency.

In each case, the composer is a human (or a group of humans) orchestrating AI and non-AI elements to achieve a goal that humans actually care about. Productivity in this future won’t be measured just in GDP or widget output, but in well-being, in resilience, in creativity. Progress might be redefined: not just technological advancement for its own sake, but advancement in quality of life and fairness. AI can help in areas like climate change (optimizing energy, discovering new materials for carbon capture) and medicine (designing drugs, personalizing treatments) – these will be high-priority missions for humanity and AI to work on together. We’ll likely focus our most powerful AI resources on these big collective challenges, because we’ll realize that’s where they yield true public value.

Yes, there will be constraints – energy being one. Training giant AI models consumes vast amounts of electricity. If we hit limits, or choose to impose them for environmental reasons, we might have to prioritize: do we use our best compute on generating ultra-realistic video game graphics, or on curing diseases? The hopeful scenario is that we choose wisely, guided by democratic input and ethical frameworks.

Even in the optimistic future, there will be power struggles, and there will be those who try to misuse AI. But I’m heartened by how much awareness exists now. Unlike with social media in 2010, people in 2025 are already talking about AI risks, ethics, and inclusion. We have a chance to course-correct early. The key will be involving diverse voices – not just engineers, but psychologists, sociologists, artists, and ordinary citizens – in shaping AI’s role. That way, the “composers” of the AI age are not a narrow priesthood but a chorus of humanity.

The AI Age is not about the end of humans – it’s about the resurgence of what makes us human.

In the orchestra of the future, it will be the human composers who decide what music we play and whether it’s a requiem for lost humanity or a bold new symphony celebrating life.

We have the baton. We must compose wisely.
