Lost in the mirror: as AI development gathers momentum, will it reflect humanity’s best or worst attributes?

12 March 2025 at 12:00

Are we at risk of losing ourselves in the midst of technological advancement? Could the tools we build to reflect our intelligence start distorting our very sense of self? Artificial intelligence (AI) is a technological advancement with huge ethical implications, and in The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, Shannon Vallor offers a philosopher’s perspective on this vital question.

Vallor, who is based at the University of Edinburgh in the UK, argues that artificial intelligence is not just reshaping society but is also subtly rewriting our relationship with knowledge and autonomy. She even goes as far as to say, “Today’s AI mirrors tell us what it is to be human – what we prioritize, find good, beautiful or worth our attention.”

Vallor employs the metaphor of AI as a mirror – a device that reflects human intelligence but lacks independent creativity. According to her, AI systems, which rely on curated sets of training data, cannot truly innovate or solve new challenges. Instead, they mirror our collective past, reflecting entrenched biases and limiting our ability to address unprecedented global problems like climate change. Therefore, unless we carefully consider how we build and use AI, it risks stalling human progress by locking us into patterns of the past.

The book explores how humanity’s evolving relationship with technology – from mechanical automata and steam engines to robotics and cloud computing – has shaped the development of AI. Vallor grounds readers in what AI is and, crucially, what it is not. As she explains, while AI systems appear to “think”, they are fundamentally tools designed to process and mimic human-generated data.

The book’s philosophical underpinnings are enriched by Vallor’s background in the humanities and her ethical expertise. She draws on myths, such as the story of Narcissus, who met a tragic end after being captivated by his reflection, to illustrate the dangers of AI. As an example, she cites the role of AI social-media filters in propagating and entrenching Western beauty standards.

Vallor also explores the long history of literature grappling with artificial intelligence, self-awareness and what it truly means to be human. These fictional works, which include Do Androids Dream of Electric Sheep? by Philip K Dick, are used not just as examples but as tools to explore the complex relationship between humanity and AI. The emphasis on the ties between AI and popular culture results in writing that is both accessible and profound, deftly weaving complex ideas into a narrative that engages readers from all backgrounds.

One area where I find Vallor’s conclusions contentious is her vision for AI in augmenting science communication and learning. She argues that our current strategies for science communication are inadequate and that improving public and student access to reliable information is critical. In her words: “Training new armies of science communicators is an option, but a less prudent use of scarce public funds than conducting vital research itself. This is one area where AI mirrors will be useful in the future.”

In my opinion, this statement warrants significant scrutiny. Science communication and teaching are about more than simply summarising papers or presenting data; they require human connection to contextualize findings and make them accessible to broad audiences. While public distrust of experts is a legitimate issue, delegating science communication to AI risks exacerbating the problem.

AI’s lack of genuine understanding, combined with its susceptibility to bias and detachment from human nuance, could further erode trust and deepen the disconnect between science and society. Vallor’s optimism in this context feels misplaced. AI, as it currently stands, is ill-suited to bridge the gaps that good science communication seeks to address.

Despite its generally critical tone, The AI Mirror is far from a technophobic manifesto. Vallor’s insights are ultimately hopeful, offering a blueprint for reclaiming technology as a tool for human advancement. She advocates for transparency, accountability, and a profound shift in economic and social priorities. Rather than building AI systems to mimic human behaviour, she argues, we should design them to amplify our best qualities – creativity, empathy and moral reasoning – while acknowledging the risk that this technology will devalue these talents as well as amplify them.

The AI Mirror is essential reading for anyone concerned about the future of artificial intelligence and its impact on humanity. Vallor’s arguments are rigorous yet accessible, drawing from philosophy, history and contemporary AI research. She challenges readers to see AI not as a technological inevitability but as a cultural force that we must actively shape.

Her emphasis on the need for a “new language of virtue” for the AI age warrants consideration, particularly in her call to resist the seductive pull of efficiency and automation at the expense of humanity. Vallor argues that as AI systems increasingly influence decision-making in society, we must cultivate a vocabulary of ethical engagement that goes beyond simplistic notions of utility and optimization. As she puts it: “We face a stark choice in building AI technologies. We can use them to strengthen our humane virtues, sustaining and extending our collective capabilities to live wisely and well. By this path, we can still salvage a shared future for human flourishing.”

Vallor’s final call to action is clear: we must stop passively gazing into the AI mirror and start reshaping it to serve humanity’s highest virtues, rather than its worst instincts. If AI is a mirror, then we must decide what kind of reflection we want to see.

The post Lost in the mirror: as AI development gathers momentum, will it reflect humanity’s best or worst attributes? appeared first on Physics World.

Inside Google’s Investment in Anthropic

11 March 2025 at 21:43
The internet giant owns 14% of the high-profile artificial intelligence company, according to legal filings obtained by The New York Times.

© Marissa Leshnov for The New York Times

Google’s stake in Anthropic is capped at 15 percent, and it holds no voting rights, board seats or board observer rights in the start-up.