Multiple sources have reported that Geoffrey Hinton, often called the godfather of artificial intelligence (AI), recently introduced the idea of an AI mother model during discussions about AI’s potential to surpass human intelligence.
Hinton has dedicated decades to developing the neural network technology that forms the basis of current large language models, image generators, and autonomous systems.
Dan Costa of Worth Magazine reports that Hinton warned that AI currently poses a risk to factory employment, knowledge-driven roles, creative industries, and certain scientific research fields.
He also emphasized the need to accelerate the development of AI’s positive applications, including advancements in medical research, climate modeling, and protein folding.
At the AI4 conference this August, Hinton argued that we need AI "mothers," not just AI assistants, and cautioned against building systems whose chief strength is outsmarting humans.
He remarked that, 50 years from now, the only thing people will remember is whether we developed technologies that endangered human life.
Consequently, he called on companies to invest in research focused on safety, interpretability, and cross-industry standards.
He covered various AI-related topics, including the release of GPT-5, misunderstandings about AI, and the timeline for achieving Artificial General Intelligence (AGI).
The audience of around 5,000 AI leaders was eager to hear from someone who has dramatically shaped the field.
In a social media post, Luiza Jarovsky, PhD, highlighted Hinton’s proposal to incorporate “maternal instincts” into AI models so they would care for people even as the technology surpasses human abilities.
As a mother herself, Jarovsky raised several questions about this concept:
Can machines truly have "maternal instincts"?
Do you think this analogy is appropriate?
Can machines genuinely "care" for people?
Is this the correct terminology?
Are instincts or feelings programmable?
Is this idea technically achievable?
Should we build machines that might threaten human survival?
Earlier this year, Hinton estimated a 10% to 20% chance that AI could cause human extinction.
Jarovsky stressed the urgency of AI governance efforts, encouraging everyone to engage with the emerging legal and ethical challenges of AI and to consider possible ways forward.
"Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building 'maternal instincts' into AI models, so 'they really care about people' even once the technology becomes more powerful than humans"
I have so many questions (maybe even more as a… pic.twitter.com/bQ1DyZB41c
— Luiza Jarovsky, PhD (@LuizaJarovsky) August 13, 2025
“Regulation tends to follow harm, not anticipate it. With AI, that lag could be catastrophic.” –Geoffrey Hinton https://t.co/fYUVM00ymG
— Dan Costa (@dancosta) August 15, 2025