In recent years, the number of people turning to AI for answers, rather than to traditional Google Search, has risen sharply. The saying ‘just Google it' is being replaced with ‘ask ChatGPT'. The next frontier is politically controlled AI, with President Donald Trump announcing plans for ‘Truth Social AI', an alternative to Elon Musk's Grok, which is heavily used across the X platform.

Over on X, Grok has already proven to be politically aligned. After producing a series of left-leaning replies that challenged Musk's right-wing views, Grok was reprogrammed; when it returned, users found it spewing out replies referencing Hitler and delving into right-wing conspiracy theories, and it had to be taken offline again while its newfound fascination with Hitler was fixed. In a world where more people than ever are seeking answers from AI, a system that actively promotes a political ideology is dangerous.
As AI becomes the dominant interface between people and information, politics getting involved was the next logical step. Politicians aren't just using AI; they are building their own. Donald Trump announced this month that his Truth Social platform will debut Truth Social AI. Nor is it just politicians like Trump showing interest: here in the UK, the Labour government has put together an action plan for AI growth.
The core issue for the future? When every political or cultural group starts building its own AI model, trained on selectively curated data to match a belief system, we could descend even further into a post-truth environment.
The dangers of society's over-reliance on social media for information became glaringly obvious a decade ago, when the Cambridge Analytica scandal broke. By harvesting Facebook user data throughout the 2010s, information brokers like Cambridge Analytica amassed enormous power, and that data was eventually sold to political campaigns. According to PolitiFact, Cambridge Analytica data was used for months during Trump's 2016 presidential campaign in an effort to swing US voters. Similar data was also believed to have been used during the Brexit campaign in the UK, although official investigations led by the UK government found that “no significant breaches” took place.
Despite the dangers of social media overuse being well established, the wider social media landscape has continued to grow at an estimated 5.7% per year. The AI boom is even bigger, with projections suggesting the AI industry will grow to five times its current value over the next five years.
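To put that multiple in perspective, a 5x increase over five years implies roughly 38% compound annual growth. Here is a minimal Python sketch of that arithmetic, assuming the projection means the market reaching five times today's value in five years:

```python
# Implied compound annual growth rate (CAGR) if the AI market
# reaches 5x its current value over five years, per the projection above.
multiplier = 5.0  # projected size relative to today (assumed reading of the cited figure)
years = 5

# CAGR solves: (1 + r) ** years == multiplier
cagr = multiplier ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # -> Implied annual growth: 38.0%
```

For comparison, social media's estimated 5.7% annual growth would compound to only about 1.3x over the same five-year window.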
AI can have a dangerous effect in this politicised social media environment. While tools like ChatGPT can be useful for quickly collating information from across the web and presenting it in an easy-to-understand way, these systems are not perfect and have been known to make mistakes. In some cases, ChatGPT can fabricate information entirely through so-called AI ‘hallucination'. A few lawyers, smart and well-educated people by most standards, got caught out by this when they presented legal filings with citations invented by ChatGPT and were subsequently fired.
Aside from inventing facts, highly politicised AI poses the threat of ‘fragmenting' the truth. Imagine a left-leaning AI trained exclusively on progressive ideals and current ideas about social justice, a right-leaning bot trained to suppress a country's ‘uncomfortable history' and present nationalistic viewpoints, or a system trained entirely on the viewpoints of one religious group. People already tend to take the information AI feeds them at face value, without applying critical thinking. The real danger of politicised AI systems, then, lies in their ability to offer programmable worldview confirmation, on demand, 24/7.
If AI is going to be the next great interface for information, then its integrity matters. A misaligned AI from a politician or media outlet can spread falsehoods faster, more believably, and at greater scale than any traditional campaign ad. Misinformation was already a problem on social media, but with AI, the system can now talk back and shape perceptions of reality in real time.
In the future, we expect to see AI implemented across plenty of industries. Political campaigns could use AI chatbots to answer questions from potential voters; influencers are already exploring using their likeness to produce personalised AI chatbots, or to replace themselves online entirely; and fringe groups could create AIs centred on niche occult interests or conspiracy theories.
At KitGuru, we value clarity, evidence, and honest analysis. That means recognising when AI becomes less about truth and more about narrative control. As AI tools become more integrated into our daily lives, it is vital we ask: who trained this model, and what data was used?
KitGuru says: At this point, I feel like we all know someone whose answer to everything is ‘just ask ChatGPT', with little thought given to where the AI is getting its information from. Do you regularly use AI tools, either at work or at home? How do you feel about the rapid adoption of this technology?