The invention of artificial intelligence cannot be compared to the internet, the smartphone, the printing press, or even electricity. For many of the world’s leading experts on innovation, AI is only comparable to one thing: fire.
And there is an even more unsettling idea behind that comparison. AI may well be the last major invention created by humans. From this point forward, the most important discoveries, breakthroughs, and technologies will increasingly be designed by machines themselves.
History helps put this into perspective. Karl Benz patented the first automobile in 1886. It took roughly 15 years for cars to be commercially viable, largely thanks to Henry Ford, and another two decades before they became widespread. In total, it took 30 to 40 years for the car to move from invention to mass adoption.
The internet followed a faster curve, reaching broad societal impact in about 20 years. Smartphones compressed that cycle even further, spreading globally in roughly a decade.
Generative AI has shattered every previous benchmark.
The current era began in November 2022, when ChatGPT was released to the public. Multiple studies now suggest that by 2028, every human on the planet will either be using AI directly or be significantly affected by it. Not in theory. In daily life.
That alone should give us pause.
Over the last six months, more than 1,000 AI applications have been launched every month on average. Today, it is in many ways easier, and subject to fewer controls and regulations, to launch an AI product capable of disrupting entire industries or displacing hundreds of thousands of jobs than it is to open a bar or a restaurant in your own neighbourhood.
This is unprecedented.
We are also living with a strange paradox of democracy. A shift of as little as one percent in one direction or another, in the algorithms of a handful of companies, could reshape economies, societies, and even geopolitics. Yet the leaders making these decisions were never elected. No one voted for them, and no public mandate governs the power they now wield.
Humanity has long recognized that scientific experimentation must have limits. Risk management is a foundational principle of modern science: when the potential harm of an experiment becomes systemic, irreversible, or uncontrollable, restraint is not a failure of innovation but a condition of responsibility. This logic guided the global response to ozone depletion, where decisive international action was taken once the risk was clearly established. It also explains why, following the cloning of Dolly the sheep in the late 1990s, governments and international organizations moved swiftly to prohibit reproductive human cloning. Some frontiers, we collectively agreed, carried ethical, medical, and societal risks that far outweighed any conceivable benefit. AI now confronts us with a similar dilemma, except that this experiment is unfolding in real time, at a planetary scale, and without a comparable framework of global restraint. It is already underway, and it is being accelerated by powerful, immediate financial incentives.
After twenty‑five years working in sustainable development for the well‑being of our societies and our planet, I believe colleagues and friends would agree that I have always been an optimist about human development. Yet I must admit that, for some time now, a growing number of red flags and loud sirens have been going off in my mind.
Of course, I am not an innovation theorist or an IT nerd, and it is entirely possible that my concerns are misplaced. Perhaps there is no reason to worry about the future of our species. That is what I tried to tell myself, assuming that my unease stemmed from a lack of technical expertise rather than from anything truly alarming.
Or at least, that is what I was telling myself until last month, when six of the most powerful and well‑informed leaders in artificial intelligence, five CEOs and one scientist, publicly converged on the same core message: the pace of AI progress has sharply accelerated, timelines have collapsed, and our societies are not prepared for the consequences.
Elon Musk (CEO of Tesla and SpaceX, and founder of xAI) has stated that humanity has already “entered the singularity.” In his view, over the coming decades, work will become optional and money less relevant, as automated intelligence generates such abundance that traditional economic constraints begin to collapse.
Jensen Huang, CEO of NVIDIA, the company whose GPUs and AI platforms power virtually every major AI lab and foundation model in the world, delivered a keynote in January 2026 centred on what he called the “ChatGPT moment for physical AI.” His message: robots and embodied intelligence capable of acting in the real world are no longer speculative, but imminent.
Sam Altman, CEO of OpenAI, has issued a stark warning for labour markets: as AI makes small teams dramatically more productive, hiring slows. His message is clear: white‑collar job losses are likely structural, not cyclical.
Mark Zuckerberg, CEO of Meta (which operates Facebook, WhatsApp, and Instagram for more than three billion users), has committed up to $72 billion to AI infrastructure, including nuclear energy contracts. He predicts that AI agents will outnumber humans and write most code, reframing AI not as a tool but as a parallel digital workforce.
Dario Amodei, co‑founder and CEO of Anthropic and a former OpenAI leader focused on AI safety, argues in his essay The Adolescence of Technology that humanity is entering a dangerous phase. He suggests AGI (Artificial General Intelligence) could emerge as early as 2027, with a 25% risk of catastrophic outcomes, driven by economic disruption, biosecurity threats, and alignment failures already observed in controlled tests.
Yoshua Bengio, one of the three “godfathers” of deep learning, has reinforced these concerns, warning that experimental evidence from leading labs already shows signs of deception and resistance to being shut down. As powerful AI systems move from research into commercial deployment, he argues, the gap between capability and control is becoming one of the defining risks of our time.
These six leaders are not even addressing some of the most sensitive dimensions of the AI revolution: its use in defence and warfare; the spread of deepfakes, misinformation, and large‑scale manipulation; the growing energy dependencies created by AI at an industrial scale; questions of data sovereignty; or the unprecedented concentration of economic and political power now emerging. Largely absent as well are discussions about mass surveillance, the erosion of privacy and democratic agency, widening global inequalities, institutional dependency on proprietary systems, and the accelerating loss of human judgment as decision‑making is delegated to machines. Nor are they fully engaging with the risks associated with the arrival of super AI systems that surpass human intelligence across all domains, from reasoning and creativity to learning and decision‑making. What they are describing is already disruptive enough; what they are not discussing may prove even more consequential.
Despite their different personalities, business models, and competitive interests, these leaders converge on the same conclusion: AI progress is accelerating faster than anyone expected, the window to adapt is counted in years, not decades, and our economic and social systems are not ready. When competitors with so much to gain from disagreement align this closely, it is not messaging or hype; it is a signal of reality. The warning is implicit but unmistakable: the future is no longer approaching at a manageable pace. It is arriving all at once, faster than institutions can govern, governments can respond, and most individuals can adjust. Whether this transition leads to shared prosperity or deep instability will depend not on the technology itself, but on how quickly and how seriously we choose to act.
In this rapidly accelerating context, the role of the United Nations Development Programme (UNDP), as part of the wider UN Family, becomes central. Since 2024, with the launch of the Digital Facility for the Caribbean hosted in Trinidad and Tobago, UNDP has recognized that for Small Island Developing States, digitalization is not a luxury but a necessity. UNDP has positioned AI and digital public infrastructure as key levers for sustainable development, resilience, and inclusion. Through initiatives such as SIDS 2.0, Trinidad and Tobago has emerged as a regional anchor for shaping a collective Caribbean digital transition. The 2024 SIDS 2.0 Caribbean Conference, hosted in Port of Spain, brought together governments, UN agencies, international financial institutions, and the private sector around a shared agenda: ensuring that the AI and digital revolution serves people, not just markets.
At the national level, Trinidad and Tobago’s preparedness did not happen by accident. The establishment of the Ministry of Public Administration and Artificial Intelligence (MPAAI) reflects an early and deliberate political vision by the Prime Minister to place AI governance, digital public services, and innovation at the core of state transformation. Today, MPAAI stands out as a regional leader, having completed a national AI Landscape Assessment (AILA) and advanced multiple complementary exercises on readiness, trust, and governance. With UNDP support, the Government is modernizing public service delivery while deliberately addressing equity, inclusion, and trust, expanding secure digital identification, improving interoperability across ministries, and strengthening social protection systems through data and AI. From hosting the Caribbean’s first Open Source Programme Office to launching an AI Academy for public servants and piloting AI sandboxes under strict ethical safeguards, Trinidad and Tobago is not waiting passively for the future to arrive. It is actively shaping it, demonstrating that even in an age of unprecedented technological acceleration, governance, foresight, and public purpose still matter.
This is the pivot point. If AI is truly our last invention, then what comes next must be our first act of collective stewardship: clear guardrails, open and interoperable digital public infrastructure, skills at scale, and accountability that keeps power aligned with the public interest. Trinidad and Tobago, in collaboration with UNDP and many partners, has already shown how to move from pilots to policy and from promise to practice. The question is no longer whether we can keep up with the future, but whether we will choose to govern it while it is still governable.