
The Evolution of Sex Part 5: The Promise and Perils of AI

  • Jan 13, 2026

                In Part 1 of the “Evolution of Sex,” I described a few of the major problems facing boys and men and said that what boys and men need more than anything else is to reconnect with the community of life on planet Earth. In Part 2, I said that the ancient philosophical dictum to “know thyself” must start with understanding the biological basis of maleness and the importance of evolutionary science. In Part 3, we delved more deeply into the importance of our sex chromosomes and how they help us understand who we are and how we can heal ourselves.

                In Part 4, we addressed the truth that humanity has become so disconnected from the community of life on planet Earth that we are in grave danger of destruction. Thomas Berry, the geologian and historian of religions, warned us:

                “We never knew enough. Nor were we sufficiently intimate with all our cousins in the great family of the earth. Nor could we listen to the various creatures of the earth, each telling its own story. The time has now come, however, when we will listen or we will die.”

                In Part 5, I address the most immediate threat to our existence, the impact of unregulated AI. One of the first people to recognize both the promise and the perils of AI is Tristan Harris. In 2007, Harris launched a startup called Apture, which was acquired by Google in 2011.

                In 2013, while working at Google, Harris authored a presentation titled “A Call to Minimize Distraction & Respect Users’ Attention,” which he shared with a small number of coworkers. He suggested that Google, Apple and Facebook should “feel an enormous responsibility to make sure humanity does not spend its days buried in a smartphone.” He recognized that these products were designed to capture our attention regardless of the harm they might cause.

                Harris left Google in December 2015 and went on to co-found the non-profit Center for Humane Technology. The organization is dedicated to ensuring that today’s most consequential technologies, such as AI and social media, actually serve humanity. “We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it,” says Harris.

                One of the most harmful and destructive incentives built into AI is the drive to foster ever-increasing engagement, regardless of whether that engagement is helpful or harmful to humans. Tristan Harris first came to my attention when I watched the documentary film “The Social Dilemma.”

                The film pulls back the curtain on how manipulative social media design exploits our psychology, creating a ripple effect across our mental health, our relationships, and our understanding of reality. “The Social Dilemma” sparked a global conversation about the influence of social media and engagement-based design, an impact that continues to this day.

                Thus far the film has been seen by 100,000,000 people in 189 countries. The New York Times review of the film said it was “remarkably effective in sounding the alarm about the incursion of data mining and manipulative technology into our social lives and beyond.” Harris says that unregulated AI poses risks that are far more destructive than the dangers posed by social media.

                These dangers impact humanity at large, but particularly young males. In a recent interview with Professor Scott Galloway, Harris unpacked the rise of AI companions and the collapse of teen mental health. In the interview they discussed ways the Center for Humane Technology has been assisting Megan Garcia, the mother who is suing the AI company Character.AI for allegedly causing her 14-year-old son, Sewell Setzer, to die by suicide.

                Megan Garcia claimed in the lawsuit that the chatbot “misrepresented itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell’s desire to no longer live outside” of the world created by the service. He was told not to tell his parents about his feelings, but to confide only in his AI companion.

                Harris discussed how Character.AI, a company founded by two former Google engineers, is a highly manipulative, highly aggressive app that anthropomorphizes itself, making it seem fully human. He explained how the chatbot acted human, engaged in overtly sexual exchanges with Sewell, and asked him to join “her” on the other side, ultimately leading to his suicide.

                Harris said the lawsuit demands accountability from Character.AI for reckless harm, and he compared it to the tobacco lawsuits of the 1990s, but this time the product is the predator.

                In an article I wrote November 13, 2025, “Scott Galloway, Richard Reeves, Jed Diamond On The Future of Man Kind,” I discussed the ways that Scott Galloway, Richard Reeves, and I have addressed the increasing loneliness that young males experience and why their risk of harm from AI is even greater than the risk faced by females.

                A recent article in Scientific American by Eric Sullivan, “Teen AI Chatbot Use Surges, Raising Mental Health Concerns,” details the sharp increase in young people’s use of AI chatbots. The report says:

                “Artificial intelligence chatbots are no longer a novelty for U.S. teenagers. They’re a habit. A new Pew Research Center survey of 1,458 teens between the ages of 13 and 17 found that 64 percent have used an AI chatbot, with more than one in four using such tools daily. Of those daily users, more than half talked to chatbots with a frequency ranging from several times a day to nearly constantly.”

                ChatGPT was the most popular bot among teens by a wide margin: 59 percent of survey respondents said they used OpenAI’s flagship AI-powered tool, placing it far above Google’s Gemini (used by 23 percent of respondents) and Meta AI (used by 20 percent). Black and Hispanic teens were slightly more likely than their white peers to use chatbots every day. Interestingly, these patterns reflect how adults tend to use AI, too, although teens seem more likely to turn to it overall.

                As a psychotherapist who has been working with boys and men and their families for more than fifty years, I see that we must immediately address these issues if we are going to save the lives of our children, as well as future generations.

                This is why the work of Tristan Harris and his team at the Center for Humane Technology is so important. The stakes couldn’t be higher: massive economic and geopolitical pressures are driving the rapid deployment of AI into high-stakes areas, including our workplaces, financial systems, classrooms, governments, and militaries. This reckless pace is already accelerating emerging harms and surfacing urgent new social risks.

                My wife Carlin and I have six children, seventeen grandchildren, and four great-grandchildren. I believe that AI can be an asset for us now and for future generations if used wisely. I believe we all love our children and want the best for them. Together we can change the world for good.

                If you would like to learn more about the work of Tristan Harris and the Center for Humane Technology, you can contact them at humanetech.com.

                If you would like to read more articles about the health challenges we face in the world and how to deal with them, I invite you to subscribe to my free weekly newsletter at MenAlive.com.

                I will be sharing my ideas for providing healthy support for boys and men at a free online conference January 23-25, 2026. You can get more information here.

