Until I recently read the story of how American trade became institutionalized in the 19th century, changing both physical and economic landscapes, I had never given much thought to the culture of the pre-business world. When commerce was done face-to-face, there was little need for government oversight: men were simply bound by an inherent moral code. But as business owners became more and more removed from their customers, deceit and greed were allowed to creep in.
Harvey Wiley, an early champion of government regulation, was stunned by the matter-of-fact way in which businesses would cut corners to make cheaper products, and then lie about doing so. Many companies had started manufacturing glucose synthetically and calling it sugar, for example, even though it lacked the flavors of true plant-derived sugars. Other examples, particularly of adulterated medicines, are more egregious.
In a thus-far excellent book I’ve just started reading, Protecting America’s Health, author Philip Hilts writes, “The corporations were developing a reputation not only for lack of accountability, but also for ruthlessness in competition and hardness toward their workers. There was a fear that the money-centered values of the great combines [corporations] and their owners would soon displace personal decency and honor.”
So began, in earnest, the discussion of government regulation over business. Corporations, and those who stood to gain from their success, not surprisingly opposed the idea. But those who favored regulation called for government enforcement of honorability and fairness. They believed that “. . . business had shown in the nineteenth century it could not well serve two masters—it could not seek profit with a single-minded energy and at the same time take care that citizens were protected from the injustices and injuries that its actions or products might cause,” Hilts writes. “The new kind of business could not, in other words, honestly police itself.”
* * *
The FDA—which began as the Bureau of Chemistry, granted enforcement power by the 1906 Pure Food and Drug Act with Harvey Wiley as its chief chemist—was a direct result of the national uproar against the rampant sale of adulterated foods and drugs. The agency has gained more power over the years—often in response to tragedy—and has grown to some 14,000 employees around the country. But that’s still a relatively small agency, with an even smaller budget, to oversee some 95,000 businesses selling about $1 trillion of goods each year.
Hilts’s conclusion is that, especially in light of its limited resources, the FDA has done a damn good job of keeping the populace safe. But more and more, the agency is feeling the pressure to bring drugs to market sooner.
“They are in a tough spot and that is our fault,” says medical physicist and ALS patient Ben Harris. “They are damned if they do and if they don’t.” Indeed, if the end goal of the FDA is to save lives, the agency must consider the patients who die before a life-saving drug gains approval. On the flip side, approving a drug too soon risks inflicting harm. Furthermore, if a company cannot sustain the loss, one failed product can devastate its other drugs in development, stalling or halting programs that might well have been successful. Clearly, the equation is not simple.
* * *
While the business community of the early 20th century fought the creation of a government regulatory body, today’s industry embraces the FDA. Companies rely on the agency to keep them in good standing with the public—particularly after something goes wrong, like the discovery of some adulterated products. By performing regular screens, the FDA reassures consumers of the products’ safety and purity.
In the biomedical world, companies are hesitant to allow their experimental drugs to be used before they can be ushered through the appropriate trials to gain market approval. More people using a drug increases the chances that an adverse effect will arise—stalling or even derailing a drug development program. With hundreds of millions, often billions of dollars invested, companies understandably want to reduce those risks until they can start to recoup some of their investment.
The FDA is similarly wary of condoning more widespread access to drugs that have yet to complete the full battery of clinical tests required for marketing approval. While the agency continues to create expedited pathways to market, it still follows an overwhelmingly precautionary approach.
So what’s the solution? Should terminally ill patients be granted early access to experimental treatments? Who should make those decisions?
Some think that the DIY model of clinical research, while extremely new and unproven, could be one answer. By operating outside the regulatory system, DIY trials are not subject to the lengthy FDA review process. And by yielding results in a fraction of the time, the approach could save both lives and money. “I don’t think the FDA model will ever go away,” says Harris. “DIY will not replace it.” But he believes the online environment serves as “a form of oversight”: “[The patient community on the internet] seems to me to be very good at self-regulating.”