
New Bias on the Block: Algorithmic Puppet Masters

Modern technology has turbocharged old psychological biases while also spawning a whole new breed of “algorithmic bias”: a sneaky, pervasive force that decides what people see, how their emotions are shaped, and even who appears in their digital lives. In 2025, over 5.4 billion people interact with personalized social media algorithms daily, yet few realize that these systems don’t just cater to bias; they manufacture it, remix it, and serve it with a side of targeted remarketing. Today’s Eclectic Leadership Movement post explores how algorithmic bias hijacks beliefs, emotions, and relationships, sometimes even becoming the “root cause” of new psychological biases, and offers a few eclectic remedies along the way.


New Bias on the Block

The world’s algorithms (think Facebook, TikTok, YouTube) aren’t just code. They’re digital puppeteers, pulling strings to show certain posts more often, ensuring that “people the algorithm thinks should be in touch” pop up like that one relative who never gets the hint to leave the party. Their impact isn’t subtle:

  • Echo chambers and filter bubbles shield users from alternative views, creating “bespoke realities” and amplifying tribal thinking.

  • Minority voices often get relegated to the basement, rarely climbing the digital stairs to the main feed.

  • Sensational, divisive content is prioritized for engagement; if outrage were an Olympic sport, algorithms would win gold every four years. (The toy ranker sketched after this list shows how little machinery this takes.)

  • 65% of online videos consumed in 2025 are served via personalized algorithmic feeds, up from 52% in 2020.
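
None of this requires exotic machinery. A hypothetical ranker that scores posts by a user’s topic affinity plus an “outrage bonus”, then learns from whatever it just showed, manufactures a filter bubble all by itself. The Python sketch below is a toy illustration of that loop; every name, weight, and topic is invented for the example, not taken from any real platform:

import random

random.seed(0)

TOPICS = ["politics", "sports", "science", "art", "cooking"]

# The user starts mildly interested in everything.
user_affinity = {t: 1.0 for t in TOPICS}

def make_posts(n=50):
    """Candidate posts: a random topic plus an 'outrage' score in [0, 1)."""
    return [{"topic": random.choice(TOPICS), "outrage": random.random()}
            for _ in range(n)]

def rank(posts, affinity, outrage_weight=1.5):
    """Score = personal topic affinity + a bonus for divisive content."""
    return sorted(posts,
                  key=lambda p: affinity[p["topic"]] + outrage_weight * p["outrage"],
                  reverse=True)

for day in range(30):
    shown = rank(make_posts(), user_affinity)[:5]  # the user only sees the top 5
    for post in shown:
        # Engagement feeds straight back into the profile, so interests narrow.
        user_affinity[post["topic"]] += 0.2

print(sorted(user_affinity.items(), key=lambda kv: -kv[1]))
# Typically one or two topics pull far ahead of the rest: a bespoke reality
# built from nothing more than a feedback loop and an outrage bonus.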


And let’s not forget the sponsored content puppy parade, where brands (and sometimes misinformation) are parachuted into feeds based on data points as obscure as “likes pineapple on pizza.”


Algorithmic Bias: Not Your Classic Cognitive Glitch

Traditional biases, such as confirmation bias and in-group bias, have always been part of human wiring. But algorithmic bias flips the sequence: technology now manufactures biases first, and only then do traditional psychological distortions get grafted onto the new digital reality.


For example, a 2025 study showed that people exposed to biased AI recommendations shifted their own emotional judgments 32% of the time, even when the algorithm’s assessment was completely arbitrary. It’s like trusting the opinion of a robot barista about your coffee order—except, in this case, your worldview is what’s at stake.


The Domino Effect: From Feeds to Feelings to Beliefs

Algorithmic bias erodes mental wellbeing by reinforcing stereotypes and engineering social comparison traps. Consider:

  • Social media prioritizes idealized, successful posts, triggering feelings of inadequacy and envy through an unending barrage of curated happiness and achievement (cue “Keeping Up with the Kardashians: Algorithm Edition”).

  • Algorithms reinforce “stereotype threat” by amplifying content that cements negative group stereotypes, harming confidence and mental health among marginalized users.

  • Feedback loops ensure that content causing emotional spikes appears more often, distorting perceptions of reality and fueling anxiety, polarization, and cynicism. (The back-of-the-envelope model below shows how quickly this skews a feed.)
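
That last bullet is, at bottom, simple arithmetic. If angry posts earn, say, three times the engagement of calm ones, an engagement-weighted feed over-samples them, and the feed’s “reality” drifts away from the offline one. A back-of-the-envelope Python sketch, with every number invented for illustration:

# Toy model: posts are half "calm", half "angry" offline, but angry posts
# earn roughly 3x the engagement. All numbers are invented for illustration.
true_share_angry = 0.5
engagement = {"calm": 1.0, "angry": 3.0}

weight_angry = true_share_angry * engagement["angry"]
weight_calm = (1 - true_share_angry) * engagement["calm"]
feed_share_angry = weight_angry / (weight_angry + weight_calm)

print(f"offline share of angry posts: {true_share_angry:.0%}")   # 50%
print(f"feed share of angry posts:    {feed_share_angry:.0%}")   # 75%

Half the posts, three-quarters of the feed: no conspiracy required, just an objective function.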


Is This the First Tech-Driven Bias Revolution?

While propaganda and groupthink predate the iPhone, this is arguably the first era in which computer code, not just cultural narratives, directly scripts who and what users believe, like, and trust, and does so at global scale and breakneck speed. Never before could a change in a ranking algorithm alter billions of beliefs overnight.


Imagine attending a dinner party where the host (the algorithm) decides which guests you’ll see most, which stories you’ll hear, and what’s on the menu—based on every scrap of gossip, taste, or quirk you’ve ever shown. You can never leave, and the host sometimes chucks out guests who express unpopular views. It’s amusing, until you realize this dinner never ends, and who you become depends on who the host favors.


So, what can one do? Let’s raid the eclectic toolkit for solutions—because what worked for old-fashioned bias won’t quite cut it when the adversary reads data faster than one can refresh Instagram.


Practical Remedies (with a Pinch of Eclecticism)

  • Algorithmic Transparency: Advocate for platforms to disclose how content is ranked, flagged, and timed. Secret sauce is best enjoyed with a hint of the recipe.

  • Algorithmic Audits: Push for independent “bias auditors” and “red teams” to spot and call out unfair outcomes; think of it as a “citizen’s arrest” for code, just more technical. (A minimal example of such a check appears after this list.)

  • Digital Literacy: Foster critical media skills; encourage questioning, not just scrolling. If information tastes a little too delicious, double-check the ingredients.

  • Eclectic Curation: Mix your information sources by following different thinkers, subcultures, and ideologies. Be “omnivorous” in digital diets: sample widely, season with curiosity, and avoid echo-chamber MCs.

  • Multidisciplinary Review: Bring ethicists, linguists, sociologists, and community representatives into technology development; this is eclecticism in action, recognizing that every bias is best seen through multiple pairs of glasses.

  • Data Diversity: Support datasets and teams that reflect the full spectrum of society. Diversity isn’t just fair; it’s algorithmic insulation.

  • Feedback Mechanisms: Demand customizable feeds and user feedback options so that no single algorithmic “dinner-party host” has exclusive control. (A sketch of one such user-facing dial also follows this list.)

  • Tech-Free Time: Build routines for regular algorithm “fasts.” Sometimes the mind needs a stroll in analogue reality.

  • Policymaker Action: Support governance frameworks that combine anti-bias laws, regulatory “sandboxes,” and strong civil rights for the algorithmic age.
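
To make the “algorithmic audits” bullet concrete: the core of many audits is an embarrassingly simple comparison between each group’s share of visible slots and its share of the candidate pool. A minimal Python sketch, using hypothetical data and group labels:

from collections import Counter

def exposure_disparity(ranked_posts, k=10):
    """Compare each group's share of the top-k slots with its share of the
    whole candidate pool. Ratios well below 1.0 flag under-exposure."""
    pool = Counter(p["group"] for p in ranked_posts)
    top = Counter(p["group"] for p in ranked_posts[:k])
    n, top_n = len(ranked_posts), min(k, len(ranked_posts))
    return {g: (top[g] / top_n) / (pool[g] / n) for g in pool}

# Hypothetical audit data: a ranked feed annotated with creator group.
ranked = ([{"group": "majority"}] * 9 + [{"group": "minority"}] * 1
          + [{"group": "majority"}] * 5 + [{"group": "minority"}] * 5)

print(exposure_disparity(ranked, k=10))
# Roughly {'majority': 1.29, 'minority': 0.33}: minority creators make up
# 30% of the pool but fill only 10% of the visible slots.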
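
And the “eclectic curation” and “feedback mechanisms” bullets can meet in code: feed diversity as a user-facing dial rather than a platform secret. One possible greedy re-ranker, sketched with invented scores:

def rerank(posts, diversity=0.5):
    """Greedy re-ranking: each pick trades the platform's relevance score
    against a penalty for topics already shown. 'diversity' is the user's
    dial: 0.0 reproduces the raw ranking, higher values spread topics out."""
    chosen, seen = [], set()
    remaining = list(posts)
    while remaining:
        best = max(remaining,
                   key=lambda p: p["score"] - diversity * (p["topic"] in seen))
        chosen.append(best)
        seen.add(best["topic"])
        remaining.remove(best)
    return chosen

feed = [{"topic": "politics", "score": 0.9},
        {"topic": "politics", "score": 0.8},
        {"topic": "science",  "score": 0.5},
        {"topic": "art",      "score": 0.4}]

print([p["topic"] for p in rerank(feed, diversity=0.0)])
# ['politics', 'politics', 'science', 'art']
print([p["topic"] for p in rerank(feed, diversity=0.6)])
# ['politics', 'science', 'art', 'politics']

Whether any given platform exposes such a dial is a product decision; nothing in the mathematics forbids it.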


Eclecticism: The Bias-Antidote Philosophy

Being eclectic—drawing from varied disciplines, cultures, and systems—means consciously zigzagging beyond comfortable algorithmic bounds. Eclectic leaders and thinkers:

  • Cross the digital aisle regularly, engaging with out-group ideas and communities.

  • Employ mindful self-observation to catch when digital environments feed comfort-zone cravings.

  • Use translanguaging techniques from applied linguistics to expose underlying algorithms of thought, not just digital code.

Why all this deliberate effort? The numbers speak for themselves:

  • 71% of users rarely click on content beyond the first algorithmic recommendation, letting digital bias reinforce itself in ways the ancient Greeks would have called “tragic fate”.

  • Minority-led content is 30% less likely to be shown to users with neutral engagement habits; sometimes it is less visible than adverts for toenail fungus.

  • In one AI bias study, 90% of participants changed their initial judgments after exposure to a biased algorithm’s opinion.


Bias in Human Hands — and Beyond

The bottom line: there’s no escaping bias, but there’s extraordinary danger in ceding its creation and amplification to systems without oversight or eclectic input. Navigating algorithmic bias in 2025 calls for a courageous, eclectic attitude—one that is as comfortable with literature as with legislation, and with self-reflection as with data science. The best remedy is eclecticism itself: to read widely, question deeply, code compassionately, and never let a single algorithm define what it means to be a human, a friend, or a leader.

Yours truly,

Shehzaad Shams,

London, 15th October 2025
