In March 2021, a driver in Charlton, Massachusetts, plunged his car into Buffumville Lake while following GPS directions. Rescue teams were called to recover the completely submerged vehicle from 8 feet of water. The driver thankfully escaped with just a few minor injuries. When asked why he drove into a lake despite being able to see the water ahead, his answer was simple: the GPS told him to go that way. We’ve all heard these stories, and let’s face it, they sound ridiculous. But here’s the thing: We are all somewhere on this spectrum of conveniently handing over decisions to our friendly bots.
The Silent Surrender of Decision-Making
For a society that prizes autonomy and independence, we have been surprisingly willing to outsource more and more of our decision-making to algorithms, often without even realizing it. What began with navigation has expanded into nearly every corner of our lives. We defer to recommendation engines for what to watch, read, eat and believe. We consult A.I. for career advice rather than developing our own criteria for meaningful work. We ask chatbots about relationship compatibility instead of honing our emotional intelligence. It is becoming hard to see where the machines end and our minds begin. Digital tech is now a literal extension of our minds, and we urgently need to treat it as such.
The convenience is undeniable. Why struggle with choices when an algorithm can analyze thousands of variables in milliseconds? Why develop your own expertise when you carry a myriad of geniuses in your pocket? But this convenience comes with a subtle cost: our agency as human beings. We like to think of technology as supercharging what we already want to do, but there is a thin line between facilitating our preferences and manipulating them. The shift is often so subtle that we barely notice it, explains Karen Yeung, a scholar who studies what she calls “hypernudging,” or the way A.I. shapes our preferences.
Music streaming services don’t merely serve up what you like; they gradually shift your taste toward more commercially viable artists by controlling your exposure. News aggregators don’t just deliver information; they subtly emphasize certain perspectives, slowly molding your political opinions. The media theorist Marshall McLuhan recognized this dynamic decades ago, observing that first we shape our tools, and then our tools shape us. Today’s algorithms don’t just respond to our choices; they actively and intimately shape them.
Living in Narrowing Information Landscapes
Any skilled delegator will tell you that one of the most satisfying things about outsourcing decisions is that it frees up the mind. And it’s true. If Sunday mornings are always pancakes, you don’t have to think about (or negotiate!) what’s for breakfast. The problem is that when we delegate our primary information feeds—news, search and social media—they begin to narrow our core understanding of reality. To some extent, this is essential for our sanity; there is simply too much information to process. But what happens to our ideas, motivations and actions when what we perceive as the world—our reality—grows ever more limited?
Multiple algorithmic effects are at play simultaneously. Despite the illusion of infinite choice, our information landscape narrows through personal filtering and cultural homogenization, leaving us with increasingly limited perspectives. In Filterworld, Kyle Chayka explains how algorithms have flattened culture by rewarding certain engagement patterns. Content creators worldwide chase similar algorithmic rewards, producing remarkably similar outputs to maximize visibility. TikTok-optimized homes, Instagram-friendly cafés and Spotify-formatted songs are all designed to perform well within algorithmic systems.
“This one’s for Algorithm Daddy!” declares the actress and activist Jameela Jamil as she posts a selfie in a revealing dress—a gamified move she feels compelled to make whenever she notices the algorithms suppressing her more substantive social justice content. Cultural diversity suffers in a similar way: content that isn’t in English is less likely to be included in A.I. training data.
Hundreds of people interviewed described the same paradoxical feeling: overwhelmed by choice yet suffocated by algorithmic recommendations. “There are endless options on Netflix,” one executive said, “but I can’t find anything good to watch.” How can we make truly informed choices when our information diet is so tightly curated and narrowed?
Our Gradual Brain Atrophy
A famous study of London taxi drivers showed that their hippocampi—the brain regions responsible for spatial navigation—grew larger as they memorized the city’s labyrinthine streets. Thanks to neuroplasticity, our brains constantly change based on how we use them. And it works both ways: when we stop navigating with our senses, we lose the capacity to do so. For example, when we rely on A.I. for research, we don’t develop the core skill of connecting ideas. When we accept A.I. summaries without checking sources, we delegate credibility evaluation and weaken our critical thinking. When we let algorithms curate our music, our ability to develop personal taste withers. When we follow automated fitness recommendations rather than listening to our bodies, we diminish our intuitive understanding of our physical needs. When we let predictive text complete our thoughts, we begin to forget how to express ourselves precisely.
In The Shallows, Nicholas Carr explores how our brains physically change in response to internet use, developing neural pathways that excel at rapid skimming while our capacity for sustained attention and deep reading withers. The philosopher-mechanic Matthew Crawford offers a compelling antidote in Shop Class as Soulcraft, arguing that working with physical objects—fixing motorcycles or building furniture—provides a form of mental engagement that is increasingly rare and precious in our digital economy. These are tangible trade-offs that fundamentally change us. They may seem inevitable in our digital world, but recognizing how we are shaped by every tool we use is the first step toward a more aware and intentional relationship with technology.
Reclaiming Our Algorithmic Agency
The good news is that there are ways to regain control and maintain human agency in our digital lives. First, recognize that defaults are deliberate choices made by companies, not neutral starting points. Research consistently shows that people rarely change default settings. Did you know you can view Instagram posts chronologically rather than by algorithm-determined “relevance”? How many people use ChatGPT’s customization features? These options exist, but most of us never touch them. It’s not just about digging into the settings; it’s a mindset. Each time we accept a default, we surrender a choice. With repetition, this breeds a form of learned helplessness: we begin to believe we have no control over our technological experiences.
Second, consider periodic “algorithm resets.” Log out, clear your data or use private browsing modes. Staying logged in is convenient, but that convenience comes at the cost of ever-narrower personalization. When shopping, consider the privacy implications of funneling all your purchases through a single platform that builds a comprehensive profile of your behavior. Amazon Fresh, anyone? Third, support regulatory frameworks that protect cognitive liberty. As A.I. moves closer to being able to decode and manipulate our thoughts, Professor Nita Farahany is among those making the case for a new human rights framework around the commodification of brain data. Without such protections, Farahany warns, “our freedom of thought, access and control over our own brains, and our mental privacy will be threatened.”
The algorithmic revolution promises unprecedented benefits. But many of these threaten to come at the cost of our agency and cognitive independence. By making conscious and intentional choices about when to follow algorithmic guidance and when to do our own thing, we can stay connected to what we value most in each situation. Perhaps the most important skill we can develop these days is knowing when to trust the machine and when to trust our own eyes, instincts, and judgment. So the next time a bot tries to steer you into a metaphorical lake, please remember you’re still the driver. And you’ve got options.
Menka Sanghvi is a mindfulness and digital habits expert based in London. She is a globally acclaimed author and speaker on attention, tech and society. Her latest book is Your Best Digital Life: Use Your Mind to Tame Your Tech (Macmillan, 2025). Her Substack newsletter, Trying Not To Be A Bot, explores our evolving relationship with A.I.