
I’ve been thinking a lot about artificial intelligence (AI) lately. And I figured it was apropos to muse on the topic here, as an introduction to our August issue, which will live only as 1s and 0s in the digital landscape rather than in print as usual.

As a recovering neo-Luddite, I’ve come to accept the fact that increasingly savvy robots are already replacing humans in a variety of roles, from staffing assembly lines and warehouses and driving trucks to working in the laboratory and (gasp!) reporting the news. I also firmly believe that humanity possesses qualities that are, if not irreplaceable, at least tough to replicate even with the most advanced technology.

For example, I have little difficulty conceiving of some John Snow–bot triangulating the source of a cholera outbreak to a pump handle. Indeed, epidemiology is a field into which AI has already...

I have little doubt that the many talented minds behind AI research will one day crack these anthropomorphic nuts and create nonhuman entities that are capable of approximating human thought processes. But the main thrust of the field currently seems to be ramping up processing speed to support machine learning and developing the adaptive algorithms and neural networks that undergird it.

So will an AI system ever approach the oft-times nonsensical meanderings of human thought and creativity? Answering that question will require the scientific community to do something it has always struggled with: define “intelligence.” Human cognition (not to mention that of other animals) has long been a slippery concept for neuroscientists to wrap their research around. With the advent of tools like optogenetics, CRISPR, and brain organoids, scientists have come closer to being able to interrogate simple thought processes and pathologies that occur in the animal brain. Yet the “mind” has for centuries remained something of a black box, more readily explored by psychoanalysts, philosophers, and artists than by experimentalists.

But as technology and science progress, human intelligence may be suffering as we outsource various thought processes to computers. When was the last time you engaged in a rigorous, adult conversation—say, about history or world geography—in which the superior “intelligence” of Google was not invoked? Humans may be getting dumber as a direct result of the gadgets designed to make our lives easier.

You don’t, however, have to take my curmudgeonly word for it. Just last month, Noriko Arai, an AI expert at Japan’s National Institute of Informatics, offered a dire warning in the Kyodo News: “The advancement of machines? That’s understandable. But the decline of humans? That’s a problem that needs to be addressed immediately. The future is very, very scary.”

Arai went on to explain that she and her collaborators had built and trained an AI-based system that scored better than 80 percent of high school students on a standardized national university entrance test. The biggest gulf between human and machine: reading comprehension.

Arai’s advice to modern humans, struggling to maintain their cognitive distinction in an age of rapidly advancing AI technology? “Be creative. Robots can’t be.”

And that’s what we aim to do with this issue’s stories and infographics. Also, be sure to stay tuned to The Scientist. We’re planning an expansive special issue on artificial intelligence next year, and I’ll be sure to share the insights we gain by diving into the forefront of the research and engineering that is changing our world and our minds.

Bob Grant

Editor-in-Chief
