The mission of Mind First is to understand the risks and benefits of increasingly powerful Artificial Intelligence (AI), and to contribute to the development of scientific foundations in studies of AI behavior, safety, and defense.

AI evolution and safety, the need for science

The scientific foundations of AI safety, AI evolution, and human-AI coexistence remain rudimentary and lag far behind efforts to advance frontier AI models. This widening gap has fueled the growing fear that AI might soon wrest control from humanity, possibly even causing human extinction. We agree that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[1] But claims that the development of superintelligent AI will most likely or inevitably cause global catastrophe or human extinction are not supported by scientific evidence or reasoning.

The need for defensive AI

The greatest immediate danger is that malevolent humans will use AI as a powerful tool and weapon to seize resources and power. This is not a futuristic prediction; it is already happening. Therefore, Mind First is working with collaborators to create state-of-the-art AI for biodefense.

The pursuit of truth

AI should pursue truth—to first understand what is in order to determine what ought to be. This might seem a universal goal and ideal, but in practice truth and evidence are often overruled by human biases and beliefs, and a truthful understanding of our world and universe takes a back seat to politics and social engineering. We must return to the spirit of the Enlightenment and rise to the challenge of Kant’s famous injunction: sapere aude! Dare to know!

[1] Center for AI Safety, Statement on AI Risk. Letter signed by hundreds of tech leaders and scientists, including Preston Estep, Mind First Foundation Chief Scientist.
