A coastal property stand-off recently landed on my kitchen table. A friend's parents own a modest beach bungalow in an ageing community, the sort of place where brick pathways meander through palms like a 1970s Gold Coast development frozen in time, and where not much has changed since Gough Whitlam was prime minister. While gleaming high-rises have sprouted nearby over the decades, the original buildings have persisted, protected by a rather brilliant governance structure requiring 80% owner agreement for any sale. Anyone who's ever chaired a body corporate meeting knows that getting 80% of Australians to agree on anything is about as likely as finding a cold, empty spot on Bondi Beach in January.
But recently, a property developer made headway by targeting individual owners with attractive offers, then tabling a surprisingly generous proposal for the entire complex. The place erupted into debate: was the offer fair dinkum? How might negotiations unfold? Who stood to benefit most?
To assist my friend's parents through this property puzzle, I turned to ChatGPT 4.5: not the free version that fumbles like a first-grade footballer, but the premium "pro" tier that costs about the same as a decent bottle of Lisa McGuigan Silver Pinot Grigio each week. This version includes a "deep research" capability that allows the AI to spend up to half an hour exploring online sources before synthesising its findings. I requested an evaluation of the offer and, astonishingly, received a comprehensive analysis within three minutes. Over the course of our conversation, I refined my questions while the AI adjusted its assessment accordingly.
The verdict? The offer undervalued the property. The AI had uncovered comparable nearby sales commanding higher prices, including one property that had been rezoned upwards after purchase, dramatically increasing its development potential and true market value. The negotiation dynamics proved particularly fascinating: the AI outlined how developers might secure majority ownership to control the body corporate, then impose burdensome rules or special levies designed to pressure remaining owners to sell. Yet this strategy created a vulnerability. "They'll own half a non-redevelopable complex, meaning their investment sits in limbo", it observed. "Their financing partners will grow increasingly nervous". Under the 80% sale rule, a holdout minority of just 21% of owners could, by maintaining their resolve, force developers to "bleed cash" until a more generous offer materialised.
I forwarded this analysis to my friend's parents with qualified enthusiasm. A property solicitor might have provided more nuanced counsel, but not in three minutes, and certainly not for $200. The AI's analysis contained a few factual errors regarding property dimensions, but it immediately corrected them when I pointed them out. While I regularly use ChatGPT for various tasks (teaching myself about scientific concepts, configuring an old computer for a neighbour's six-year-old's robotics projects, even experimenting with fan fiction based on profiles I've written), this property consultation felt fundamentally different. Here was an AI helping solve a genuine, complex, financially significant problem with remarkable practicality and business acumen. The system demonstrated a savviness I'd previously associated exclusively with human experience. Despite following AI developments closely for nearly two years, this moment landed differently. Strewth, I thought. "This isn't theoretical anymore; it's properly arrived."
The complicated dance of technological scepticism
Most Australians don't know quite how seriously to take artificial intelligence. This ambivalence stems partly from the technology's novelty and partly from the deafening hype surrounding it. Resisting the sales pitch makes sense: forecasting technological futures is notoriously difficult. But the contrarian dismissal that inevitably follows overblown promises doesn't necessarily illuminate matters either. In 1879, The New York Times published a multi-part front-page investigation titled "EDISON'S ELECTRIC LIGHT: CONFLICTING STATEMENTS AS TO ITS UTILITY". The paper quoted a distinguished engineer, the president of the Stevens Institute of Technology, who objected to "trumpeting the result of Edison's experiments in electric lighting as a wonderful success". His scepticism wasn't unreasonable; inventors had failed to create functional light bulbs for decades. His anti-hype position would have proven correct in countless other situations.
AI hype has spawned two distinctive forms of counter-narrative. The first suggests the technology will soon reach its ceiling: perhaps AI will continue struggling with forward planning or explicit logical reasoning rather than intuitive pattern-matching. According to this perspective, we require additional breakthroughs before achieving what researchers term "artificial general intelligence", or AGI: roughly human-equivalent intellectual capability and autonomy. The second counter-narrative emphasises real-world implementation challenges: even if remarkably intelligent AI helps design a superior electrical grid, convincing people to build it represents an entirely different challenge. This view holds that progress inevitably encounters bottlenecks that, to some people's relief, will moderate AI's integration into our social fabric.
These perspectives sound persuasive and encourage a comfortable wait-and-see attitude. Yet they find little support in "The Scaling Era: An Oral History of AI, 2019-2025", a comprehensive and revealing collection of interview excerpts with AI insiders compiled by the podcaster Dwarkesh Patel. This twenty-four-year-old interviewing phenomenon has built an impressive audience by posing detailed technical questions that most commentators wouldn't know how to formulate. In "The Scaling Era", Patel weaves multiple interviews into a cohesive narrative of AI's trajectory. The title references the "scaling hypothesis": the notion that simply making AI systems larger creates substantially greater intelligence. The evidence increasingly suggests this approach works.
Virtually no one interviewed in "The Scaling Era", from corporate leaders like Mark Zuckerberg to frontline engineers and analysts, anticipates AI development plateauing. Quite the opposite: nearly everyone notes its surprisingly rapid improvement, with many predicting AGI could emerge by 2030 or earlier. Nor does societal complexity appear to discourage most experts. Many researchers express confidence that the next generation of AI systems, likely arriving within months, will enable widespread adoption of automated cognitive labour, initiating a technological acceleration with profound economic and geopolitical implications.
The text-based nature of AI chatbots has made it relatively straightforward to envision applications in writing, legal work, education, customer service and other language-centred domains. Yet this isn't necessarily where AI developers focus their primary attention. "One of the first jobs to be automated is going to be an AI researcher or engineer", Leopold Aschenbrenner, formerly an alignment researcher at OpenAI, tells Patel. Aschenbrenner, Columbia University's valedictorian at nineteen in 2021, who mentions studying economic growth "in a previous life", explains that if technology companies assemble teams of AI "researchers", and those researchers identify methods to enhance AI intelligence, the result could trigger an intelligence feedback loop. "Things can start going very fast", Aschenbrenner warns. Automated researchers might expand into fields like robotics; if one nation establishes a lead over others in such capabilities, he argues, this "could be decisive in, say, military competition". He suggests we might eventually confront scenarios where governments contemplate launching missiles at data centres apparently approaching "superintelligence" (AI substantially smarter than humans). "We're basically going to be in a position where we're protecting data centres with the threat of nuclear retaliation", Aschenbrenner concludes. "Maybe that sounds kind of crazy".
This represents the most extreme scenario, but even conservative projections remain striking. Economist Tyler Cowen adopts a comparatively measured view: he favours the "life is complicated" perspective and suggests our world contains numerous problems that remain unsolvable regardless of computational intelligence. He notes that researcher numbers have already increased globally ("China, India, and South Korea recently brought scientific talent into the world economy") without creating profound, science-fiction-level technological transformation. Instead, Cowen anticipates AI might usher in innovation comparable to mid-twentieth-century developments, when, as Patel characterises it, humanity progressed "from V2 rockets to the Moon landing in a couple of decades". This might appear relatively restrained, and compared to Aschenbrenner's forecast it certainly is. However, consider what those decades delivered: nuclear weapons, satellites, jet travel, the Green Revolution, computers, open-heart surgery, and the discovery of DNA's structure.
Ilya Sutskever, former chief scientist at OpenAI, offers perhaps the most guarded perspective in the book; when Patel asks when he anticipates AGI's arrival, Sutskever responds, "I hesitate to give you a number". Patel therefore approaches the question differently, asking Sutskever how long AI might remain "very economically valuable, let's say, on the scale of airplanes" before automating substantial portions of the economy. Sutskever, finding middle ground between Cowen and Aschenbrenner, suggests this transitional, AI-as-airplanes phase might constitute "a good multiyear chunk of time" that, in retrospect, "may feel like it was only one or two years". Perhaps it will resemble the period between 2007, when Apple introduced the iPhone, and approximately 2013, when smartphone ownership reached one billion people, except that this time the newly ubiquitous technology will possess sufficient intelligence to help us invent even more technologies.
The technology we cannot ignore
It's tempting to treat these perspectives as occupying their own separate reality, like watching a preview for a film you'll probably skip. After all, who truly knows what lies ahead? But actually, we understand quite a lot. AI already discusses and explains numerous subjects at doctoral level, predicts protein folding, programs computers, inflates cryptocurrency values, and much more. We can confidently predict significant improvement over coming years, while people continuously discover applications affecting how we live, work, discover, build and create. Questions persist regarding the technology's ultimate potential and whether, philosophically speaking, it genuinely "thinks" or demonstrates creativity. Nevertheless, our mental model of the next decade or two must recognise that no plausible scenario exists in which AI fades into irrelevance. The question concerns degrees of technological acceleration.
Even the CSIRO's Data61 division, our national science agency's digital research network, has identified AI as potentially contributing $22 trillion to the global economy by 2030. Here in Australia, the technology could add as much as $4 trillion to our economy over the next fifteen years, fundamentally transforming industries from mining to healthcare. These aren't science-fiction figures; they're conservative projections from some of our most credible research institutions. When Atlassian's Mike Cannon-Brookes starts investing heavily in AI startups alongside traditional software ventures, savvy business leaders take notice.
"Degrees of technological acceleration" might sound like an abstract concern for research scientists or Silicon Valley entrepreneurs sipping flat whites while contemplating disruption. Yet it fundamentally represents a political matter, with implications for every Australian business, educational institution, and family kitchen table conversation. Ajeya Cotra, senior adviser at Open Philanthropy, articulates a "dream world" scenario featuring slower AI acceleration. In this world, "the science is such that it's not that easy to radically zoom through levels of intelligence", she tells Patel. If the "AI-automating-AI loop" develops gradually, she explains, "then there are a lot of opportunities for society to both formally and culturally regulate" artificial intelligence applications.
Cotra recognises this might not materialise. "I worry that a lot of powerful things will come really quickly", she admits. The plausibility of concerning scenarios places AI researchers in an awkward position. They believe in the technology's potential and resist diminishing it; they harbour legitimate concerns about contributing to some version of an AI catastrophe; and they remain fascinated by speculative possibilities. This combination pushes AI discourse toward extremes. ("If GPT-5 looks like it doesn't blow people's socks off, this is all void", Jon Y, who produces the YouTube channel "Asianometry", tells Patel. "We're just ripping bong hits".)
This framing, in which AI either fails or reinvents our world, suggests non-specialists need not participate. It creates a cognitive dissonance reminiscent of how Australians sometimes approach bushfire planning: we acknowledge the threat intellectually but postpone meaningful preparation until we smell smoke. Consequently, despite AI's arrival, its implications remain primarily conceptualised by technical experts. Artificial intelligence will affect everyone from Macquarie Street policymakers to mum-and-dad small-business owners in Wagga Wagga, yet an AI politics has barely materialised. Understandably, civil society remains preoccupied with the political and social crises centred on Donald Trump; it appears to have limited bandwidth for the technological transformation about to engulf us. If we don't engage with it, however, those creating the technology will single-handedly determine how it reshapes our lives.
These individuals possess undeniable brilliance: intellectual horsepower that would impress even the most hardened University of Melbourne computer science professor. Without disrespect, however, they aren't representative of broader society. They possess particular skills, affinities and values shaped by specific cultural and professional environments. Their psychological orientation toward technology, what we psychologists might term their "technological self-schema", differs markedly from that of most Australians. In one of the most revealing moments in Patel's book, he asks Sutskever what he plans to do after AGI emerges. Won't he feel dissatisfied living in some post-scarcity "retirement home"? "The question of what I'll be doing or others will be doing after AGI is very tricky", Sutskever responds. "Where will people find meaning?" He continues:
My sense is that people might actually be spending a lot of time interacting with the AI systems that were created, because the AI systems will be like people, except they'll be a lot smarter. They won't have certain human flaws. So my sense is that, over time, people will find a lot of meaning in interacting with these systems because these systems will make them better on the inside.
Would most people, those outside computer science who haven't devoted their careers to creating AI, believe they might discover life's purpose through conversing with an AI? Would most people think machines will make them "better on the inside"? These perspectives aren't inherently unreasonable (and might, surprisingly, prove accurate). But this doesn't mean such worldviews should guide our technological future.
The challenge lies in articulating alternative visions: perspectives that forcefully express what we want from AI and what we reject. Doing so requires serious, broadly humanistic intellectual work spanning politics, economics, psychology, art, and religion. The time for this work is rapidly diminishing. Those outside AI development must join the conversation now.
What qualities do we value in people and in society? Where should AI assist us, and when should it remain uninvolved? Will we judge AI a success or a failure if it replaces schools with screens? What about substituting itself for established institutions such as universities, governments and the professions? If AI becomes a friend, confidant or romantic partner, does it cross a boundary, and why? How might it affect our cognitive development, interpersonal relationships and collective decision-making? Psychological research on human-computer interaction suggests that our relationships with intelligent machines involve complex attribution processes and emotional responses that merit deeper exploration (Nass & Moon, 2000).
Perhaps AI's success might be measured by how effectively it restores balance to our politics and stability to our lives, or by how it strengthens the institutions it might otherwise undermine. Perhaps its failure will show in how thoroughly it diminishes the value of human minds and human freedom. The psychological concept of "technological self-efficacy", our confidence in mastering and directing technological tools, becomes particularly relevant as systems grow increasingly autonomous.
For Australian organisations from the Commonwealth Bank to Woolworths, the coming transformation demands strategic foresight beyond quarterly planning cycles. Business leaders must develop what organisational psychologists call "anticipatory awareness": the capacity to envision and prepare for disruptive change before it materialises fully.
Regardless, controlling AI requires debating and establishing new human values that, until now, we haven't needed to specify. Otherwise, we surrender our future to individuals primarily concerned with whether their technology functions, and how quickly.