Humans and Technology: AI, the Monkey Sphere and Ethical Innovation

By Philip Miller | March 04, 2025

One hundred and fifty people. That’s the suggested cognitive limit on the number of people with whom one can maintain stable social relationships at any one time—relationships in which an individual knows who each person is and how each person relates to every other person. This is often referred to as Dunbar’s Number or the Monkey Sphere.

Dunbar explained it informally as “the number of people you would not feel embarrassed about joining uninvited for a drink if you happened to bump into them in a bar.”

Essentially, the theory states that, due to our brain size and how we have evolved, this is the maximum number of people we can socialize with and feel empathy for. Why is this important today? What implications has this limit had on us, what implications does it have now and what will it have as we move forward?

The Monkeys Have Evolved

Life is estimated to have first appeared on land roughly 400 million years ago, an era when the senses and brains of land-dwelling creatures began to evolve under new environmental pressures. Fast-forward through the countless generations of life on Earth, and you find that around 500,000 years ago, one of the earliest ancestors of modern humans (for example, Homo bodoensis) appeared—marking a pivotal step towards the humans we are today.

Why bring this up? Because our human brains developed over these broad timelines under conditions vastly different from the modern world we see, hear, touch, feel and taste all around us today. For the majority of our history, people have lived in tight-knit groups—studies suggest roughly 150 individuals, our famous Dunbar Number—whether as small tribes, villages or even online communities in more recent times. Historically, these group sizes were also influenced by the resources available at the time. Before the advent of farming (around 12,000 years ago), we tended to live in these smaller clusters. After farming boosted food production and allowed for a surplus, our settlements grew from hundreds to thousands and eventually to millions of people in the cities we see all around us today.

Despite these shifts, and because of the extended timescales over which evolutionary change operates, our brains remain largely the same as when our ancestors lived in much smaller groups. Then came exponential changes in communication—first the telegraph, then the telephone, radio, the internet, social media and more—all of which connect us to far more than 150 people at once. At no other point in human history have we been so extensively linked to the world beyond our immediate circles, creating social dynamics that our evolutionary wiring has never adapted to.

The Social Impact of Social Media

We released social media to the world, a new paradigm of engagement, outreach and mass communication. Open to all, driven by algorithms, yes, but also by individuals. Built by engineers and developers who failed to ask if the world—and our brains—were ready for it.

Look at social media and the impact it’s had on us. Sure, it’s not all negative, but you’d be hard-pressed to argue that it’s been largely positive with all the misinformation, bullying, stalking, propaganda, election interference, FOMO, data privacy issues and anxiety it’s enabled! And the list doesn’t end there.

I’d argue that social media was an experiment, but not one conducted in a lab; it was run in the wild, en masse. Only now, after more than a decade, are we starting to realize the negatives, and efforts to legislate are ramping up. Too little, too late, perhaps. An entire generation of young minds has already been exposed to this method of communication that few truly understand, one often designed to hook users and reward their usage.

Why is this so important? Well, we stand on the precipice of another massive technological leap forward, with a technology that, by its very nature, has global implications for us all. Yes, I’m talking about AI and its peers. We don’t know, nor can we forecast, what impact this new technology will have on us.

Knowing this, should we pause? I’d argue not, but that doesn’t mean we should continue to develop this technology, or any other, in the way we currently do. Software engineering and development currently involves just that: software engineers and developers, plus coders, mathematicians, data and computer scientists and others. But these are the same people, quite literally in some cases, who built social media platforms and applications. I’m not saying that they shouldn’t work on AI. What I’m saying is that it shouldn’t just be them.

Moving Forward Responsibly

We must change how we develop new technologies—especially those with global implications. Yes, that means AI, too. It starts with the developers, data scientists and engineers, but it certainly doesn’t end with them. We should widen the pool of human intelligence to include anthropologists, biologists, psychologists, sociologists, historians, ethicists and others in the humanities who offer crucial insights into how technology shapes (and is shaped by) human behavior, societies and cultures.

By forging interdisciplinary teams that understand both the technical and the human complexities, we can begin to design technologies that serve people rather than manipulate or exploit them. This approach means more than simply adding an “ethics checklist” at the end of the development cycle. It requires building ethical, societal and historical considerations into every step—from conceptualization and design to testing, launch and post-launch analysis.

Our world no longer resembles the small bands of our ancestors—or even the limited-scale social systems that shaped our brains for millennia. The digital age has shattered Dunbar’s Number, linking billions of us through platforms that can amplify both our best and worst traits. Embracing multidisciplinary perspectives and proactively integrating ethics, empathy and accountability into every stage of technology development is our best shot at making sure the next wave of innovation helps us evolve responsibly—getting us one step further in our march towards a society that works for us all.

To learn how Progress can help you facilitate more holistic and trustworthy AI output, check out the Progress AI solutions page.



Philip Miller

Philip Miller serves as the Senior Product Marketing Manager for AI at Progress. He oversees the messaging and strategy for data and AI-related initiatives. A passionate writer, Philip frequently contributes to blogs and lends a hand in presenting and moderating product and community webinars. He is dedicated to advocating for customers and aims to drive innovation and improvement within the Progress AI Platform. Outside of his professional life, Philip is a devoted father of two daughters, a dog enthusiast (with a mini dachshund) and a lifelong learner, always eager to discover something new.
