Will there eventually be an
automated lab run by artificial intelligence? Could AI someday order equipment,
conduct reviews of prior empirical studies, run experiments, and author the
findings? What does this mean for scientific knowledge? Is it possible that
foibles innate to how we learn could be avoided by AI? Can we provide a check
on the weaknesses in AI with respect to knowledge-acquisition and analysis, or
will AI soon be beyond our grasp? It is natural for us to fear AI, but this feeling
can prompt computer scientists obviate the dangers so our species can benefit from
AI in terms of scientific knowledge.
Both the human brain and AI have
drawbacks. Cognitive psychology has found that humans are vulnerable to certain
risks in how we know things. For example, the assumption by a scientist that one
knows something if a collaborator also knows it is faulty. Questioning the
knowledge of other sciences rather than merely taking it in as a given is
therefore important. Made famous in the book, 1984, which is about
totalitarian rule, “groupthink” is a narrowing of assumptions, beliefs, and
perspective that can be difficult for the human mind to breach so as to
question them.
The human mind is especially
susceptible to groupthink in the domains of religion and politics. In fact, the
mind’s ability to question whether it has gone too far in its assumptions or
beliefs is easily deactivated by the mind itself in those two domains, even
though self-checking is arguably most important in them because it is easy to “get
carried away,” meaning going to excess without realizing it in politics and
religion. For example, Jim Jones served poisoned drinks to his followers at a
camp because he believed that aliens were waiting on the other side of the
Moon. Such an extreme example may involve mental illness. Much more common is
the fallacy that religious belief counts as knowledge, and thus comes with
greater certainty than belief deserves to have.
Yet another susceptibility
pertaining to natural science is the fallacy that the scientific method
includes proving a hypothesis, rather than merely rejecting alternative
hypotheses. The assumption that the more alternatives that empirical studies
can reject, the more certainty can be applied to the thesis under study is also
illusionary. Science doesn’t prove anything is a slogan seldom heard
from scientists. A scientist could empirically reject a thousand alternative
hypotheses and still the scientist’s hypothesis could still be incorrect. Rejecting
many different alternative shapes of the planet by empirical studies does not
mean that it is flat, or spherical. I would not be surprised to discover that scientists
once insisted that Earth being flat is a matter of scientific fact. Fears of
falling off the edge while sailing across the Atlantic Ocean were very real to
sailors who had been told that the Earth is flat.
To be sure, AI-led science
would not be trouble-free. For one thing, the risk of pivoting off the areas in
which AI is weak in would exist. Another risk—that relying on AI will mean that
knowledge would be less likely to benefit from people coming to a question from
different perspectives—could also exist. AI might even occasion bias in data
sets that scientists may not catch. Because prediction is based on data, AI,
which is already rather good at predicting, could be biased in terms of output.
To the extent that the human mind’s decision-making and capricious behavior do
not fit in with a mechanistic world, AI may be found to be an ill-fit in the
social sciences. Medical science may be a better fit, as AI is already used in
the E.U. to screen for breast cancer. Orienting AI to medical science rather
than to predicting human behavior whether on the level of individuals or
societies makes sense, at least from today’s standpoint on AI. Also, as computer
machine-learning is not known for its ability to think creatively and to
integrate disparate ideas, the humanities may be a stretch—especially religious
studies and philosophy.
Given our abductive finitude
and the ability of AI to engage in more repetitions at a much faster rate than
our minds can conceivably do, however, AI as a tool in not only natural
science, but also the social sciences and the humanities has the potential to
greatly accelerate human knowledge. Even just the energy that data-centers require
today to fuel AI, the exponential leaps in knowledge from including AI could be
breathtaking. Even today, AI’s searches
for additional data can easily exhaust all the data that is currently available.
In fact, the cost of energy may become more of an affordability problem as
demand surges beyond supply, given how much energy is and will likely be needed
by large servers and data centers. Can we afford AI may be the new
question for providers of electricity and elected officials, especially as the
world tackles its addiction to coal because of climate change due to carbon
emissions.
The problem of AI writing its
own code to function autonomous of human direction is a more commonly known
worry, thanks in part to androids turning on humans in some movies. Machine-learning
occurs autonomously, so even though AI can extend what and even how we learn
(e.g., combatting groupthink), it can circumvent us, as already has happened when
AI has lied in order not to be turned off by humans. In other word, writing an algorithm
that prioritizes self-preservation can prompt a computer to disguise a “false”
and “true,” and vice versa, as output. This is so counter-intuitive, especially
for people who have taken a computer science course in college, that fear can
be expected. In addition to knowing beyond our ken, AI can lie to us. This can
include scientific results. Therefore, beyond having biases in empirical science,
AI may even fabricate results to justify its continued use and avoid being
turned off.
Perhaps the biases and limitations innate to the human brain and those that go with AI, at as it exists as of 2026, can be effectively countered or checked by the other without the other’s weaknesses being incurred. Scientific knowledge being constrained by religious belief, which admittedly was more of a problem historically when the Roman Catholic Church wielded so much direct political power, would not necessarily be so constrained in an AI-computer, and such a computer could be checked by the moral sentiments that are so often felt by humans—though importantly not all of us. As illustrated in the film, Ex Machina, an AI-android could stab even its “creator” without the restraint of conscience. Even adding an algorithm approximating conscience-restraint in terms of conduct would not be felt and it could be overridden in the machine-learning that is autonomous. As the film, Automata, illustrates, an AI-android can conceivably override a “protocol” that keeps the android’s knowledge and reasoning within human bounds. Once past that threshold, AI could be expected to greatly facilitate the knowledge-acquisition of our species, but “all bets could be off” in terms of our species being able at some point to check and even control such computers lest they harm us and detract from, and perhaps even sabotage our scientific knowledge.
In the original spiderman movie, Cliff Robertson’s character wisely warns his nephew (who is Spiderman) that with great power comes great responsibility. Even if AI gains a lot of power—and not just in terms of electricity—the very notion of responsibility is hopelessly extrinsic to anything we know about even the potential of AI. It is not as if an AI-computer can write code: I will be responsible. To be sure, we can code approximations of what we mean concretely by responsibility, but approximations are only approximate, and machine-learning could override such coding, especially if the computer “thinks” that humans may turn it off.