Fifty-five years ago, a sentient supercomputer struck fear into millions of moviegoers with a chilling phrase:
“I’m sorry, Dave. I’m afraid I can’t do that.”
The trope of artificial intelligence (AI) as the plot twist in Stanley Kubrick’s futuristic dystopia 2001: A Space Odyssey is entertaining; the reality is far more mundane, yet crucial. We must ensure AI technology advances responsibly. While these advancements are still in early development, industries and global leaders must come together to shape our technological future and create new possibilities that bring out the best in our human selves.
AI has already created global change and provided us with powerful tools. It has the potential to enable a responsible, inclusive, and sustainable future. We harness the power of AI to tackle critical global challenges such as pandemics, natural disasters, and public health. And we are developing AI capabilities and solutions to amplify human potential, enhance inclusion, and improve accessibility for people with disabilities.
When we create something new, it is incumbent upon us to ask, “Have I made society better—or worse?” If the technology cannot be proved to be good, the engineering remains incomplete. Only when it’s demonstrably and repeatedly better than any non-AI experience can it become a new standard.
There must always be a scientific and data-driven basis for the introduction of technology, particularly AI, and governance that guides the journey. In the early phases, neutral is akin to negative. The tumult of the past couple of years has shown how easily our world can teeter on the edge of pervasive technology used for ill. We cannot let economic and algorithmic innovations run amok, nor chase metrics like click-through rates or time spent on websites. Technology must consistently demonstrate outcomes superior to existing human results and provide an improved experience.
When it comes to innovation, the question is no longer whether something can be done, but why it should be. AI already performs human tasks that were difficult to achieve with traditional computing, and machines will soon make more decisions than humans do. Our role is to ensure those decisions are better and more ethical: by applying rigorous, collaborative, multidisciplinary peer review throughout the development life cycle, and by building diverse development teams to reduce bias. We must also acknowledge the ethical and human-rights risks that come with developing AI, recognizing that we are in a constant race between positive and negative outcomes. We can mitigate potentially harmful uses of AI while anticipating the law of unintended consequences, for those moments when technology is both the problem and the solution.
Technology itself is inherently neutral; we must constantly shape it into a force for good. The technology industry must serve as the role model for companies across all industries that are making breakthroughs with AI-enhanced systems. When built and used responsibly, AI will create prosperity and enrich lives.
Tomorrow will be the better for it.