The Myth of Basic Science
Matt Ridley - Wall Street Journal, Oct. 23, 2015
Does scientific
research drive innovation? Not very often, argues Matt Ridley: Technological
evolution has a momentum of its own, and it has little to do with the
abstractions of the lab.
[Image: Isaac Newton (1642-1727) uses a prism to separate white light into the colors of the spectrum.]
Innovation is a mysteriously difficult thing to
dictate. Technology seems to change by a sort of inexorable, evolutionary
progress, which we probably cannot stop, or speed up much either. And it’s not
much the product of science. Most technological breakthroughs come from
technologists tinkering, not from researchers chasing hypotheses. Heretical as
it may sound, “basic science” isn’t nearly as productive of new inventions as
we tend to think.
Suppose Thomas Edison had died of an electric shock
before thinking up the light bulb. Would history have been radically different?
Of course not. No fewer than 23 people deserve the credit for inventing some
version of the incandescent bulb before Edison.
The same is true of other inventions. Elisha Gray and
Alexander Graham Bell filed for a patent on the telephone on the very same day.
By the time Google came along in 1996, there were already scores of search
engines. As Kevin Kelly documents in his book “What Technology Wants,” we know
of six different inventors of the thermometer, three of the hypodermic needle,
four of vaccination, five of the electric telegraph, four of photography, five
of the steamboat, six of the electric railroad. The history of inventions,
writes the historian Alfred Kroeber, is “one endless chain of parallel
instances.”
It is just as true in science as in technology.
Boyle’s Law in English-speaking countries is the same thing as Mariotte’s Law
in French-speaking countries. Isaac Newton vented paroxysms of fury at
Gottfried Leibniz for claiming, correctly, to have invented the calculus
independently. Charles Darwin was prodded into publishing his theory at last by
Alfred Russel Wallace, who had precisely the same idea after reading precisely
the same book, Malthus’s “Essay on Population.”
Increasingly, technology is developing the kind of
autonomy that hitherto characterized biological entities. The Stanford
economist Brian Arthur argues that technology is self-organizing and can, in
effect, reproduce and adapt to its environment. It thus qualifies as a living
organism, at least in the sense that a coral reef is a living thing. Sure, it
could not exist without animals (that is, people) to build and maintain it, but
then that is true of a coral reef, too.
And who knows when this will no longer be true of
technology, and it will build and maintain itself? To the science writer Kevin
Kelly, the “technium,” his name for the evolving organism that our collective
machinery comprises, is already “a very complex organism that often follows its
own urges.” It “wants what every living system wants: to perpetuate itself.”
By 2010, the Internet had roughly as many hyperlinks
as the brain has synapses. Today, a significant proportion of the whispering in
the cybersphere originates in programs, for monitoring, algorithmic financial
trading and other purposes, rather than in people. It is already virtually
impossible to turn the Internet off.
The implications of this new way of seeing
technology, as an autonomous, evolving entity that continues to progress
whoever is in charge, are startling. People are pawns in a process. We ride
rather than drive the innovation wave. Technology will find its inventors,
rather than vice versa. Short of bumping off half the population, there is
little that we can do to stop it from happening, and even that might not work.
Indeed, the history of technological prohibitions is
revealing. The Ming Chinese prohibited large ships; the Shogun Japanese,
firearms; the medieval Italians, silk-spinning; Americans in the 1920s,
alcohol. Such prohibitions can last a long time, three centuries in the case of
the Chinese and Japanese examples, but eventually they come to an end, so long
as there is competition. Meanwhile, elsewhere in the world, these technologies
continued to grow.
Today it is impossible to imagine software
development coming to a halt. Somewhere in the world, a nation will harbor
programmers, however strongly, say, the U.N. tries to enforce a ban on software
development. The idea is absurd, which makes my point.
It is easier to prohibit technological development in larger-scale
technologies that require big investments and national regulations.
And if there is no stopping technology, perhaps there
is no steering it either. In Mr. Kelly’s words, “the technium wants what
evolution began.” Technological change is a far more spontaneous phenomenon
than we realize. Out with the heroic, revolutionary story of the inventor, in
with the inexorable, incremental, inevitable creep of innovation.
Simultaneous discovery and invention mean that both
patents and Nobel Prizes are fundamentally unfair things. And indeed, it is
rare for a Nobel Prize not to leave in its wake a train of bitterly
disappointed individuals with very good cause to be bitterly disappointed.
Patents and copyright laws grant too much credit and
reward to individuals and imply that technology evolves by jerks. Recall that
the original rationale for granting patents was not to reward inventors with
monopoly profits but to encourage them to share their inventions. A certain
amount of intellectual property law is plainly necessary to achieve this. But
it has gone too far. Most patents are now as much about defending monopoly and
deterring rivals as about sharing ideas. And that discourages innovation.
Even the most explicit paper or patent application
fails to reveal nearly enough to help another to retrace the steps through the
maze of possible experiments. One study of lasers found that blueprints and
written reports were quite inadequate to help others copy a laser design: You
had to go and talk to the people who had done it. So a patent often does not
achieve the openness that it is supposed to but instead hinders progress.
The economist Edwin Mansfield of the University of Pennsylvania found much the same in his studies of imitation: copying a new product typically cost roughly two-thirds as much, and took nearly as long, as inventing it in the first place.
Politicians believe that innovation can be turned on
and off like a tap: You start with pure scientific insights, which then get
translated into applied science, which in turn becomes useful technology. So
what you must do, as a patriotic legislator, is to ensure that there is a ready
supply of money to scientists on the top floor of their ivory towers, and lo
and behold, technology will come clanking out of the pipe at the bottom of the
tower.
This linear model of how science drives innovation
and prosperity goes right back to Francis Bacon, the early 17th-century
philosopher and statesman who urged England to catch up with the Portuguese,
whose investment in the science of navigation under Prince Henry the Navigator
had, he believed, driven their voyages of discovery and the trade that followed.
Yet recent scholarship has exposed this tale as a
myth, or rather a piece of Prince Henry’s propaganda. Like most innovation,
Portuguese navigation advanced through the trial and error of sailors and
shipbuilders, not through the abstractions of scholars.
Terence Kealey, a biochemist turned economist, tells
this story to illustrate how the linear dogma so prevalent in the world of
science and politics, that science drives innovation, which drives commerce, is
mostly wrong. It misunderstands where innovation comes from. Indeed, it
generally gets it backward.
When you examine the history of innovation, you find,
again and again, that scientific breakthroughs are the effect, not the cause,
of technological change. It is no accident that astronomy blossomed in the wake
of the age of exploration. The steam engine owed almost nothing to the science
of thermodynamics, but the science of thermodynamics owed almost everything to
the steam engine. The discovery of the structure of DNA depended heavily on
X-ray crystallography of biological molecules, a technique developed in the
wool industry to try to improve textiles.
Technological advances are driven by practical men
who tinkered until they had better machines; abstract scientific rumination is
the last thing they do. As Adam Smith, looking around the factories of
18th-century Scotland, reported in “The Wealth of Nations”: “A great part of
the machines made use of in manufactures…were originally the inventions of common
workmen,” and many improvements had been made “by the ingenuity of the makers
of the machines.”
It follows that there is less need for government to
fund science: Industry will do this itself. Having made innovations, it will
then pay for research into the principles behind them. Having invented the
steam engine, it will pay for thermodynamics. This conclusion of Mr. Kealey’s
is so heretical as to be incomprehensible to most economists, to say nothing of
scientists themselves.
For more than a half century, it has been an article
of faith that science would not get funded if government did not do it, and
economic growth would not happen if science did not get funded by the taxpayer.
It was the economist Robert Solow who demonstrated in 1957 that innovation in
technology was the source of most economic growth, at least in societies that
were not expanding their territory or growing their populations. It was his
colleagues Richard Nelson and Kenneth Arrow who explained in 1959 and 1962,
respectively, that government funding of science was necessary: because it is
cheaper to copy others than to do original research, private firms would
underinvest in discoveries whose benefits their rivals could free-ride on.
“The problem with the papers of Nelson and Arrow,”
writes Mr. Kealey, “was that they were theoretical, and one or two troublesome
souls, on peering out of their economists’ aeries, noted that in the real
world, there did seem to be some privately funded research happening.” He
argues that there is still no empirical demonstration of the need for public
funding of research and that the historical record suggests the opposite.
After all, in the late 19th and early 20th centuries, the U.S. and Britain made
huge contributions to science with negligible public funding, while Germany and
France, with hefty public funding, achieved no greater results either in science
or in economics. Only after World War II, when the success of wartime research
and of the Soviet state funding behind Sputnik seemed to prove the case for
state support, did Britain and America begin to fund science heavily from the
public purse. The true lesson, that Sputnik relied heavily on Robert Goddard's
work, which had been funded by the Guggenheims, could have gone the other way.
Yet there was no growth dividend for Britain and America from the switch.
In 2003, the Organization for Economic Cooperation
and Development published a paper on the “sources of economic growth in OECD
countries” between 1971 and 1998 and found, to its surprise, that whereas
privately funded research and development stimulated economic growth, publicly
funded research had no economic impact whatsoever. None. This earthshaking
result has never been challenged or debunked. It is so inconvenient to the
argument that science needs public funding that it is ignored.
In 2007, the economist Leo Sveikauskas of the U.S.
Bureau of Labor Statistics concluded that returns from many forms of publicly
financed R&D are near zero and that “many elements of university and
government research have very low returns, overwhelmingly contribute to
economic growth only indirectly, if at all.”
As the economist Walter Park of American University in Washington, D.C.,
concluded, the explanation for these results is that public funding of research
almost certainly crowds out private funding.
To most people, the argument for public funding of
science rests on a list of the discoveries made with public funds, from the
Internet (defense science in the U.S.) to the Higgs boson (particle physics at
CERN in Switzerland). But that is highly misleading. Given that government has
funded science munificently from its huge tax take, it would be odd if it had
not found out something. This tells us nothing about what would have been
discovered by alternative funding arrangements.
And we can never know what discoveries were not made
because government funding crowded out philanthropic and commercial funding,
which might have had different priorities. In such an alternative world, it is
highly unlikely that the great questions about life, the universe and the mind
would have been neglected in favor of, say, how to clone rich people’s pets.
The perpetual-innovation machine that feeds economic
growth and generates prosperity is not the result of deliberate policy at all,
except in a negative sense. Governments cannot dictate either discovery or
invention; they can only make sure that they don’t hinder it. Innovation
emerges unbidden from the way that human beings freely interact if allowed.
Deep scientific insights are the fruits that fall from the tree of
technological change.
Mr. Ridley is the author of
“The Evolution of Everything: How New Ideas Emerge,” to be published next week
by Harper (which, like The Wall Street Journal, is owned by News Corp). He is a
member of the British House of Lords.
http://www.wsj.com/articles/the-myth-of-basic-science-1445613954