Brave New World of AI—that’s the glittering promise tech leaders have been dangling before us, a realm where artificial intelligence revolutionizes everything from daily tasks to global economies.
Yet, recent blunders suggest this brave new world might instead resemble a chaotic wasteland, riddled with errors, ethical oversights, and unfulfilled hype.
“Atlassian billionaire Scott Farquhar says creatives should give up copyright for the common good. But would he do the same?” — The Australian (@australian), August 15, 2025
A series of embarrassing setbacks has cemented AI’s reputation as a modern-day equivalent to the 17th-century tulip bubble, where excitement far outpaced reality.
For instance, in July, the latest update to Elon Musk’s AI chatbot, Grok, resulted in it disseminating harmful content, including antisemitic remarks, raising serious concerns about content moderation in advanced systems.
Tech billionaire Scott Farquhar has argued that creators, who often earn modest incomes of around $18,000 a year, should permit AI firms to use their intellectual property without compensation or permission, highlighting a stark divide between innovation and fair remuneration.
This month saw the troubled rollout of OpenAI’s GPT-5, touted by CEO Sam Altman as akin to consulting a doctoral expert on any topic.
However, shortly after release, users reported glaring mistakes, such as a generated U.S. map featuring fictional states like Aphadris, Wiscubsjia, and Misfrani. The model also struggled with tasks as basic as counting to 12 and misspelled historical figures, referring to “President Gearge Washingion.” While amusing at first glance, these glitches exacerbate the flood of digital falsehoods, ultimately eroding public trust in information sources.
Despite these issues, industry voices insist the AI revolution is accelerating toward a brave new world, urging everyone to join or face obsolescence.
This narrative also overlooks how such rapid advancement might widen socioeconomic gaps, leaving behind those without access to cutting-edge tools.
Regrettably, those navigating this course seem to have misplaced their ethical bearings, leaving the destination shrouded in uncertainty.
In Australia, the absence of dedicated AI laws persists, a situation favored by Scott Farquhar, who chairs the Tech Council of Australia and warns against restrictive policies that could stifle progress.
The council advocates for amendments to copyright laws, specifically an exemption for text and data mining, enabling AI developers to train models on protected materials sans approval or payment.
This stance paradoxically deems artists’ contributions priceless yet monetarily worthless, sparking debates on intellectual property rights in the digital age.
At a recent economic forum, the group lobbied for expanded data center infrastructure in Australia: massive, resource-intensive complexes demanding enormous electricity and water for AI operations. Farquhar envisions Australia as a key regional player in this sector, positing that the nation could reap substantial rewards from AI adoption.
Adding a layer of scrutiny, Farquhar and his spouse, through their investment firm Skip Capital, hold stakes in Stack Infrastructure, a data center developer.
This personal financial interest underscores potential conflicts in promoting such expansions, prompting questions about impartiality in policy advocacy.
The agenda further calls for streamlined approvals for energy projects and data facilities, echoing the U.S.’s recent AI strategy under the Trump administration, which prioritizes deregulation to boost innovation.
This approach sidesteps oversight on monopolistic practices, tech audits, false claims, and ecological safeguards, potentially accelerating growth at the expense of accountability.
Critics argue this ignores calls for caution in a sector flush with nearly $1 trillion in funding yet plagued by exaggerated claims and subpar results. As tech journalist Matteo Wong noted in The Atlantic, AI offerings continue to be prone to mistakes, costly to develop, and largely untested in practical business scenarios.
Prominent AI skeptic Gary Marcus, professor emeritus of psychology and neural science at New York University, has lambasted the industry for years of peddling unsubstantiated hype. He doubts the ability of these tech visionaries to fulfill their pledges and advocates for stringent oversight, a necessity amplified by OpenAI’s internal turmoil less than two years ago, when board members attempted to remove Altman over safety concerns related to AI’s potential harms.
Currently, Australia’s government is reviewing feedback on copyright reforms, with deadlines approaching, alongside forming an advisory panel for ethical AI deployment. Looking more broadly, examples from other countries, such as Europe’s tougher AI regulations, could provide useful models that put people, rather than commercial interests, at the center of how the technology develops.
It’s crucial for policymakers to pierce through the veil of enthusiasm enveloping AI. Without proper safeguards, the brave new world on the horizon risks devolving into one of disarray and regret, far from the utopia advertised.
Disclaimer: This article synthesizes highlights from reports that may include unverified elements as of August 20, 2025. Readers should cross-check information with original sources for accuracy.