Bill Gates, 2015

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

On January 28, 2015, Bill Gates logged onto Reddit for a wide-ranging “Ask Me Anything” session. Amid questions about philanthropy, climate change, and Microsoft, one reply stood out for how directly it tackled the future of artificial intelligence. Asked about the risks of machine superintelligence, Gates replied that he was “in the camp that is concerned,” sketching a future in which AI first takes over many jobs in helpful ways before eventually becoming powerful enough to pose a genuine risk. Coming from one of the central figures of the personal computing era, the comment helped signal that worries about advanced AI were no longer confined to science fiction writers or niche researchers, but had entered the mainstream technology conversation.

In the quote, Gates draws a clear line between today’s AI systems and those that might emerge later. He imagines a near-term phase in which machines handle vision, speech, and pattern-recognition tasks across factories, hospitals, and offices. That stage, he argues, “should be positive if we manage it well,” reflecting a long-running view that automation can raise productivity and living standards if societies adapt with new policies, skills, and safety nets. The concern comes in the next phase, “a few decades” out, when systems could become vastly more capable than humans at reasoning and decision-making. At that point, he suggests, society will need to confront hard questions about control, alignment with human values, and who benefits from such power.

Gates also places himself alongside other high-profile figures who, around the same time, were warning that AI deserved much more scrutiny. By saying he does not “understand why some people are not concerned,” he captures a widening divide: on one side, those who see AI as just another software tool; on the other, those who view it as a general-purpose technology that could reshape economies and security in unpredictable ways. His comment stops well short of calling for a halt to AI research. Instead, it can be read as an argument for responsible development—investing early in technical safeguards, regulatory frameworks, and public debate so that gains from automation do not come at the cost of stability or safety.

A decade later, with advances in generative models and increasingly capable automation tools, Gates’s remarks feel even more relevant to ongoing policy and industry debates. The tension he outlines—between AI as a positive force for productivity and AI as a potential source of systemic risk—now shapes discussions in boardrooms and legislatures worldwide. For readers, his Reddit quote offers a concise way to think about the moment we are in: enjoy the benefits of today’s systems, but plan seriously for the possibility that future AI could outstrip human capabilities in important domains. That mix of optimism and caution remains at the heart of how technology leaders, researchers, and citizens continue to grapple with intelligent machines.
