Everyone knows that the United States is the leading developer of new pharmaceutical drugs. But I was surprised to learn it was not always that way. In fact, in the 1980s, Europe dominated the global pharmaceutical market. How did this change?
Jonathan Miltimore, writing in The Epoch Times, gives us an important history lesson on the pharmaceutical industry. He says, “It may surprise some today that a European company launched the world’s first AIDS treatment, but in the 1980s, it made sense: Europe dominated the global pharmaceutical market, introducing 129 novel drugs in the late ’80s, compared with just 77 in the United States.”
Europe no longer rules the Rx roost, however. Today, the U.S. dominates pharmaceutical innovation and sales, accounting for roughly half of all new drugs, compared with just 22 percent for European firms.
Sally Pipes’s new book “The World’s Medicine Chest: How America Achieved Pharmaceutical Supremacy―and How to Keep It” (2025, Encounter Books) examines the policies and economic forces that helped the United States overtake Europe as the global leader in drug innovation.
To say that the U.S. won pharmaceutical supremacy might be generous. Pipes, president of the free-market Pacific Research Institute, spends the first few chapters of her book showing how European countries fumbled away their dominance through bad policies, particularly a fondness for price controls.
Price controls have been failing for thousands of years, and Pipes shows in painstaking detail how these policies destroyed the incentive structure necessary for drug innovation.
To be fair to Europeans, their decision to resort to price controls didn’t happen in a vacuum. They were largely the byproduct—a “natural consequence,” in Pipes’s words—of a different government scheme: universal health care.
Within a decade of establishing the National Health Service, the UK introduced the Voluntary Price Regulation Scheme (1957), a policy framework designed to control the prices of prescription drugs. The bureaucracy’s role soon expanded to include determining what was a “reasonable” amount of profit for a company to make. France and Germany followed, passing their own price control schemes in the late 1980s and early 1990s, and by 2004, every European country had price controls on prescription drugs.
Pipes shows that the sky didn’t fall in Europe immediately. For a while, everything seemed fine. Europeans enjoyed cheaper drugs, though price controls occasionally made access an issue. Meanwhile, the continent continued to enjoy pharmaceutical dominance, outpacing the United States in pharmaceutical research and development (R&D) spending, employment, and sales.
As late as 1995, European companies still made up half of the world’s 20 largest pharmaceutical firms by revenue. Yet the writing was already on the wall. The golden age of European pharma was coming to an end, a development that even the EU had anticipated. “It is hard to escape the conclusion that the United States, rather than Europe, is now the main base for pharmaceutical research and development for therapeutic innovation,” the European Commission glumly concluded in 1994.
The European Commission was not wrong in its pessimism. By 2002, U.S. companies claimed 60 percent of global pharmaceutical profits, compared with less than 20 percent for European companies. By 2004, the United States was attracting 80 percent of total pharmaceutical R&D spending.
To her credit, Pipes considers other possible explanations for Europe’s pharmaceutical stagnation, including the idea that it stems from the United States’ “world-leading university system.” But as she notes, the data complicate that theory: European scientists published 120,000 pharmaceutical papers from 2017 to 2019—far more than the 72,000 published in the United States—yet relatively little of that research translates into viable drugs or commercial breakthroughs (at least in the European market).
Why did this happen? We’ll discuss that in our next post.