Why Do Americans Take So Many Prescription Drugs? J. Douglas Bremner, MD
Scope of the Problem
"Today we are faced with what may be the single greatest drug safety catastrophe in the history of this country or the history of the world… In my opinion, the FDA has let the American people down, and sadly, betrayed a public trust."
Those ominous words, spoken by David J. Graham, M.D., M.P.H., Associate Director of Drug Safety for the Food and Drug Administration (FDA), on November 18, 2004, were part of Congressional testimony concerning the dangers of Vioxx. The drug had just been taken off the market because of evidence that the arthritis medication increased the risk of heart attacks. Yet Graham could have been talking about the prescription drug industry in general, especially since he noted that Vioxx was not the only medication posing serious health threats; it was only the tip of the iceberg. He identified five widely prescribed drugs still on the market that he considered particularly dangerous: Accutane, Bextra, Crestor, Meridia, and Serevent (Bextra was taken off the market in 2005).
While it’s true that many drugs help people live longer and better lives, myriad others may hurt you in ways you don’t know about. Dr. Graham’s testimony gave the public a fleeting glimpse of that knowledge, normally hidden from view or frustratingly difficult for the average person to access. Pharmaceutical and supplement manufacturers have to increase sales and profits, as all businesses must, and they do so in part by developing drugs to treat disease and also by convincing people they need meds to prevent disease or lessen the perceived risk of future illness. The result is that nondisclosure of potentially harmful side effects of the drugs they make has become routine.
How We Got Here
The latest drive to get new pills on the shelves and into people’s mouths began when government deregulation collided with an earnest attempt to help HIV/AIDS patients gain access to important life-extending drugs. In the 1980s there was a strong movement to decrease the role of government regulation in all businesses, and the budgets of regulatory agencies like the FDA were slashed as part of that effort. The Reagan Administration painted the FDA as a bloated bureaucracy that was slowing down the approval of drugs and getting in the way of business.
There was some truth to that claim. At that time it could take up to two years to gain drug approval, two years too long if you were suffering from HIV/AIDS. Throughout the 1980s, AIDS activists and patients echoed the drug companies’ sentiments, complaining that it took too long to bring disease-fighting drugs to market. The pharmaceutical industry lent a sympathetic ear and a loud voice to calls for speed in approvals of AIDS drugs such as Agenerase (amprenavir). Since drugs are on patent for a limited number of years, every year spent waiting for approval from the FDA meant losing a year of profits.
Couple that with the fact that the FDA could now honestly say that, because of the cuts, it was understaffed. The answer was essentially legislation allowing pharmaceutical companies to pay the salaries of FDA staff. In 1992, the Prescription Drug User Fee Act (PDUFA) stipulated that a fee (now $576,000) be paid to the FDA by a pharmaceutical company for each new drug application. The number of staff at the Center for Drug Evaluation and Research (CDER) doubled overnight, and today the FDA receives about $260 million a year from these fees. Part of the bill stipulated that Congressional funding for new drug evaluations had to increase by 3% per year. Since overall FDA funding did not increase at that rate, the agency actually had to cut funding for surveillance and research on approved drugs.
Another interesting phenomenon resulted from the change in the law: the boundaries between the drug companies, the FDA, and doctors became increasingly blurred. FDA officials sometimes move on to jobs in the pharmaceutical industry, which means they may not want to burn their bridges with industry while at the agency. The same FDA officials who approve the drugs are responsible for monitoring them after they are on the market, which gives them an obvious disincentive to declare that drugs they earlier certified as safe are now unsafe. Finally, the FDA gets input from outside advisory panels made up of doctors who are experts in their fields. Most of these doctors receive consulting fees, research grants, and support for travel to conferences from drug companies. In some cases, the doctors are working as paid consultants to the same companies whose drugs are coming up for approval by their advisory committees.
For instance, as reported by USA Today on October 16, 2004 (“Cholesterol Guidelines Become a Morality Play”), eight of the nine doctors who formed a committee in 2001 to advise the government on cholesterol guidelines for the public were making money from the very companies that made the cholesterol-lowering drugs they were urging millions of Americans to take. One committee member, Dr. H. Bryan Brewer, was Chief of the Molecular Disease Branch at the National Institutes of Health. He worked as a consultant or speaker for 10 different pharmaceutical companies, earning more than $100,000 over three years while he was on the committee, and sat on one of their boards (Los Angeles Times, December 22, 2004, “The National Institutes of Health: Public Servant or Private Marketer?”). Dr. Brewer left the NIH in 2005 in the midst of adverse publicity about potential conflicts of interest. Nassir Ghaemi, MD, a psychiatrist at Emory University, was quoted in the Emory Academic Exchange (February 2007) as saying, “Critics say we are being influenced and don’t realize it—that drug companies are smarter than we are and know a lot more about human psychology than we think, and they’re probably right about that to some extent.”
Expert consensus guidelines have a potent effect on doctors, who can be held liable if they do not adhere to accepted standards of care. Dr. Curt D. Furberg, a former head of clinical trials at the National Heart, Lung, and Blood Institute and now a professor at Wake Forest University in North Carolina, explained how such information reaches physicians: “The [company] reps tell the doctors, ‘You should follow these guidelines,’ implying that you’re not a good doctor if you don’t follow these guidelines.” (Los Angeles Times, December 22, 2004, “The National Institutes of Health: Public Servant or Private Marketer?”)
The result of this co-mingling was a boon for drug makers: the approval time for their products dropped from 20 months to six months right after the law changed. However, the proportion of drugs that later had to be withdrawn also increased, from 2% to 5%.
There is another troubling disconnect that could have terrible repercussions for our health: while the number of people with disease is not growing, the number of adult Americans taking medication is increasing – half of us take prescription drugs and 81% of us take at least one kind of pill every day – and that percentage is expected to rise in the coming years. To gain the most market share, companies have to invent drugs for diseases that previously had no treatment (or treat problems that may not necessarily require drug treatment, such as “restless leg syndrome”), or create prevention medications for alleged risks (like the risk of fracture in the elderly), thereby expanding the potential pool of medication takers. That meant moving from the realm of giving medications to sick people to giving medications to people who looked well but might be at increased risk based on the result of a blood test or some other hidden marker of disease. Thus the era of disease prevention and risk factor modification was born.
To promote this shift, for the past two decades the pharmaceutical industry has pushed educational programs, which it claims are designed to identify people in need of treatment or prevention with medication. This is usually done by donating money to organizations that advocate on behalf of a specific disease and will in turn “get the word out,” increasing public awareness and screening and expanding the number of individuals who will potentially take the medication. This is fine for identifying individuals with undiagnosed high blood pressure or for detecting the early stages of colon cancer. But awareness campaigns are not always meant to be purely, altruistically educational. Most are linked to a drug company’s marketing campaign.
There are a number of conditions for which we are now urged to obtain screening and potential treatment, including high cholesterol, osteoporosis, hypertension, diabetes, and undetected heart disease. However, the potential benefit of medications to treat these conditions is often exaggerated, side effects are minimized, and in some cases recommendations are applied to people based on evidence from different groups (e.g., women with risk factors for heart disease are urged to take cholesterol-lowering medications based on studies done in men). In addition, doctors who work as paid consultants to the pharmaceutical industry often write the guidelines about who should take the drugs, so it is unclear how unbiased their recommendations really are.
Another factor that has expanded the use of prescription medications dates to 1997, when the FDA lifted its ban on direct-to-consumer advertising and dropped the requirement that ads list every possible side effect. Soon after, Americans were bombarded daily with commercials for prescription drugs. The US is one of the only countries in the world where you can turn on the TV and have an announcer tell you to go “ask your doctor” for a drug. Doctors often will give medications to patients even if they don’t think the patients need them. For example, one study showed that 54% of the time doctors will prescribe a specific brand and type of medication if patients ask for it.
A Bleak Diagnosis
With so many of us popping pills or gulping down spoonfuls of medicine, it’s not surprising that more of us report related adverse effects. One hundred thousand Americans die every year from the effects of prescription medications. Over a million Americans a year are admitted to the hospital because they have had a bad reaction to a medication. About a quarter of the prescriptions that doctors write for the elderly contain a potentially life-threatening error. Many of these people are getting medications that they don’t need, or for problems that can be appropriately and safely addressed without drugs. For example, most cases of adult-onset diabetes can be prevented and possibly cured with a change in diet alone – and with considerably fewer negative side effects and numerous healthy ones, like weight loss and lower blood pressure and cholesterol.
In 2005, in the aftermath of the Vioxx debacle and withdrawal from the market, the Institute of Medicine was asked to provide recommendations for ways to improve drug safety. As part of this process they interviewed Janet Woodcock, Deputy Commissioner of Operations at the FDA. As reported by The New York Times on June 9, 2005 (“Drug Safety System Is Broken, Top FDA Official Says”), she told them that the nation’s drug safety system had “pretty much broken down.” She went on to say that “the keystone of the current system is the prescriber, and that person is the one who decides if the benefits of a drug outweigh the risks for that patient. This system has obviously broken down to some extent as far as the fully informed provider and the fully informed patients.” She charged that neither doctors nor patients had enough information about the side effects of drugs to make informed decisions about taking them. Dr. Woodcock added that “the bottom line is that a lot of drug safety problems are actually preventable, [because] most adverse events are from known side effects.”
Unfortunately, your doctor may not be able to provide you with all the details on side effects. Doctors aren’t hiding anything; they just can’t keep up with the new information. There are over 5,000 medical journals, each publishing 20 articles a month, meaning more than a million articles are published each year. It’s impossible for anyone to read all of this, let alone a busy general practitioner, internist, or even specialist, who is often buried in insurance forms and HMO paperwork. Most of the information doctors receive consists of distilled versions of research results assembled by the pharmaceutical industry and distributed through promotional materials and the product representatives who visit doctors’ offices. Legitimate publications are distributed, but papers that are not favorable are ignored, and favorable data within papers are highlighted to the exclusion of less favorable data.
In addition, drug companies hire academic physicians to give lectures but require them to show only slides that have been approved by the company. The companies support “grand rounds” lectures (the traditional lectures given by outside speakers to an entire department) at universities, but retain the option of approving or disapproving the speakers. I know about this firsthand because I have personally been affected by these policies: after I refused to use company-approved slides, I was dropped as a speaker and was not approved as a grand rounds speaker, and the university where I was lecturing had to find funds from other sources to pay for my grand rounds lecture.
Marcia Angell wrote about other ways drug companies distort the flow of information to doctors about the risks and benefits of medications in her excellent book, The Truth About the Drug Companies. She contends that doctors get most of their information about drugs during the weekly visits from drug company product representatives, who are typically young, attractive women with no background in health or science; in fact, as reported by The New York Times on November 28, 2005 (“Gimme an Rx! Cheerleaders Pump Up Drug Sales”), drug companies often recruit former college cheerleaders for this job.
Reps are sent into the field with a list of talking points to help them answer questions, as well as packets of product-favorable articles and other material such as copies of expert consensus guidelines (created by their paid consultants) to leave with doctors. These articles often have critical information buried in tables without comment, or assert conclusions that are not supported by the data in the paper.
Drug companies also buy information about the medications doctors prescribe from major chain drug stores like CVS, and then use this information to reward doctors who prescribe their drugs frequently with trips to resorts and other perks. They also lavish dinners, gifts, and paid trips to conferences on doctors. Research studies show that, although doctors deny the perks have any effect on their prescribing practices, there are changes in objective measures, like how often a doctor will try to have a drug from that particular company put on his hospital’s formulary.
Do We Get Our Money’s Worth?
I’m not saying that drugs never successfully prevent disease, or that newly described diseases and syndromes are necessarily invalid. But the fact is that no matter how you look at it, the US (and to a lesser extent other countries) has a prescription drug problem. The US spends twice as much on drugs, and takes twice as many of them, as other countries, yet has worse health. That means we are paying for drugs that are not working for us.
Despite the fact that Americans spend twice as much on health care as any other country in the world, we have some of the worst healthcare outcomes in the industrialized world, including in total life expectancy and survival of children to their fifth birthday. In a survey of 13 industrialized nations, the US was found to be last in many health-related measures and second to last overall. The countries with the best health care were Japan, Sweden, and Canada, in that order. Factors thought to explain the worse healthcare outcomes in the US included the lack of a developed and effective primary care system and higher rates of poverty. Even England, which has higher rates of smoking and drinking and a fattier diet, has better health than the US.
It is no accident that we are paying the most money and getting the worst healthcare. In Overdosed America: The Broken Promise of American Medicine, John Abramson, M.D., argues that we are pouring money into expensive drugs and medical devices that have marginal value over more economical alternatives, while neglecting the development of things like primary care that can have a real impact on health. Forty-three million Americans go without insurance, and that number is growing. We are paying a lot of money for health care we may never even receive, through the rising costs of individual health insurance, health care benefits that drive companies into the ground, expensive Medicare drug benefits, and Medicaid costs that cannot be controlled.
Many of the aforementioned expenses are related to expensive drugs that we often don’t need, that are no more effective than older alternatives, or that are simply not as valuable as drug companies make them out to be. For example, studies have shown that villagers in India diagnosed with schizophrenia who get intermittent doses of chlorpromazine, the original antipsychotic that is dirt cheap, together with support from their families, actually do better in terms of having fewer psychotic symptoms than Americans who get expensive new-generation antipsychotics and traditional Western psychiatric care. Another example is Nexium, “the purple pill,” which works no better for gastric reflux than older medications like Prilosec, even though it costs much more.
Drugs cost twice as much in the US as in Canada or Europe. A year of treatment with many medications can cost up to $3,000. Billy Tauzin, President of the Pharmaceutical Research and Manufacturers of America (PhRMA), the lobbying organization for the drug companies, responding to efforts to regulate the content of TV ads for drugs, was quoted by The New York Times (May 17, 2005, “Drug Industry Is Said to Work on Ad Code”) as saying, “We don’t make ice cream or handbags or automobiles, we make products that save lives.”
The argument drug manufacturers make for the high cost of their products, which has become an old saw by now, is that the money supports research and development of new life-saving meds. They also say that expensive advertising is needed not to sell drugs, but to educate doctors and patients. Yet a whopping 80% of their budgets goes to marketing.
The major drug companies don’t actually develop many new drugs. The truth is, most new drugs grow out of basic science research performed in universities, not in drug company laboratories. University scientists receive research grants from the National Institutes of Health (NIH), which is supported by tax money. Take the case of the Cox-2 inhibitors, like Vioxx: the mechanisms of Cox-2 inhibition that led to their development were discovered at a university by researchers supported by taxpayers’ dollars.
In order to keep making money, companies are under enormous pressure to create new drugs they can patent and sell without competition for 20 years, after which the patents run out and cheaper generic versions come to market. In fact, there aren’t many truly new drugs being developed these days. Most pharmaceuticals touted as new are essentially the same as other drugs in their class, with a slight chemical modification that allows the company to hold a unique patent; these are called “me too” drugs.
Once a drug company has developed a new drug, it patents it and begins clinical trials in the hopes of gaining FDA approval for its use. To get approval, the company must perform two multi-center randomized placebo-controlled (i.e., sugar pill) studies demonstrating that the drug is better than nothing. This means that patients are randomly assigned to either the drug or placebo for, say, three months, and neither the doctors nor the patients know who is taking what. This is the gold standard for evaluating the risks and benefits of drugs, and it is required to definitively evaluate drugs as well as alternative treatments.
The placebo response is essentially how much better you do if you take a pill that you believe helps you, even if it really does nothing in terms of its actual effect on your body. At the end of the study the “blind” is removed and the doctors look to see whether the drug was better than the placebo at improving the symptoms of the disease or preventing some pre-defined event, like a heart attack. The company must produce at least two studies showing the drug is better than placebo. If it did eight studies and only two showed that the drug was better than placebo, that is good enough.
Because drug companies are only required to show that their drugs are better than nothing, we usually never learn whether they are better than the older drugs the new versions seek to replace. It is usually left to the marketing people to generate enthusiasm – through TV ads, product representative visits to doctors’ offices, and sponsored lectures – for the idea that the new drugs are safer or better than the old ones. They do this by picking some aspect of the drug’s properties that theoretically makes it better.
For example, when the tricyclic antidepressants went off patent, the new generation of drugs was the selective serotonin reuptake inhibitors, or SSRIs. Even though SSRIs were never shown to be better at treating depression than the old drugs, it was argued that because they were more specific in blocking serotonin reuptake, instead of non-specifically blocking the reuptake of serotonin, norepinephrine, and other chemicals, they would be more effective with fewer side effects. The same argument was made for the COX-2 inhibitors, like Vioxx, which were said to more specifically inhibit the COX-2 enzyme involved in pain, unlike the non-specific nonsteroidal anti-inflammatory drugs (NSAIDs).
The head of the American Psychiatric Association recently bemoaned the fact that psychiatrists had gone from the “bio-psycho-social” model to the “bio-bio-bio” model. We doctors have become mesmerized by the idea that all depressions are caused by imbalances of serotonin that can be fixed only with a drug that acts on serotonin. However, most cases of depression are triggered by life traumas, spiritual upheavals, and other jolts along the road of life. That isn’t to say that these changes aren’t accompanied by changes in brain chemistry: it is both. But I think it is time that we acknowledge the role of emotion and spirituality in mental disorders. It only makes sense.
In The $800 Million Pill: The Truth Behind the Cost of New Drugs, Merrill Goozner argues that charging a lot for patented medications is unnecessary to pay for developing future drugs. The second generation of drugs for a particular disorder often costs as much as ten times more than the old drugs that have gone off patent, yet the few studies that did direct comparisons usually showed no improvement in efficacy over the old drugs. For example, the newer antidepressants like Prozac have never been shown to work better than the older tricyclic antidepressants.
Sometimes new drugs turn out to have consequences much worse than older alternatives, and when such problems surface, the companies typically resist admitting them for as long as possible. For instance, the painkiller Vioxx was a second-generation drug that was never shown to relieve pain better than the old painkiller Advil, which could be bought over the counter for a fraction of the cost. However, Vioxx was marketed as having a lower risk of gastrointestinal bleeding. After the drug had been on the market for many years, it was discovered that it increased the risk of heart attack severalfold (see Chapter Two). Tens of thousands of people died unnecessarily taking Vioxx, and to make matters worse, they had to pay a lot more money for the privilege. What this shows is that the FDA should require companies to test new drugs against old ones, comparing both efficacy and side effects.
Given medical scares like Vioxx, it’s not surprising that Americans have become wary of the FDA and the drug companies, and that both of their public images are beginning to suffer. The Economist reported on November 24, 2004 (“Lessons for Pharma from Tobacco”) that less than 50% of us view drug companies favorably. That’s only slightly above the low ratings we give oil and tobacco companies.
Another reason our confidence has been shaken is the common defense drug companies mount against charges of drug toxicity: “it was approved by the FDA.” Legislation has even been proposed under which drug companies could not be held liable for drug safety problems if the FDA had approved the drug. The FDA is so paralyzed by politics, and by the balance it tries to strike between scientific advancement, commerce, and safety, that it could be letting down its guard. For instance, Daniel Troy, the Chief Counsel for the FDA under George W. Bush in 2004, was a political appointee who formerly worked in a Washington law firm defending the interests of pharmaceutical companies. As reported by Drug Store News (December 22, 2004, “FDA Chief Counsel Resigns”), he worked as a “friend of the court” on cases where pharmaceutical companies had been sued for drug safety problems. The logic was that the FDA had approved the drug and therefore had an interest in the outcome.
If you are like many Americans who are prescribed a drug, or who love someone who has been, you hop online to research and read about it (and about the circumstances that warranted the prescription in the first place), and spend many frustrating hours coming up with little useful information. Worse, you may unwittingly be accessing information on the Internet that is not medically sound or is just anecdotal reporting from individual consumers. In fact, research studies show that one out of four medical information websites offers information that is inaccurate or misleading, and only one out of five is authored by identifiable medical experts.
The book on every doctor’s shelf, the Physicians’ Desk Reference (PDR), provides detailed information about drug side effects and drug interactions, but it is based on the product inserts that go into the packages of drugs the FDA has approved. New information obtained from the millions of patients treated with a drug after it comes onto the market is not incorporated into the annual versions of the PDR. Since most consumer reference books on drugs are simply over-the-counter versions of the PDR, these books likewise do not include data on the millions of people who take a drug after it comes on the market.
Many Americans have become disgusted with prescription drugs and the American medical establishment, which seems to be conspiring with what I call the Gang of Four (hospitals, insurance companies, the AMA, and drug companies) to keep Americans sick and poor. And so they turn to alternative medicine, whose practitioners frequently promote vitamins, herbs, and supplements. Yet these promoters of alternative remedies are not always the peaceful, benign, and well-intentioned people they make themselves out to be. In fact, research has shown that some vitamins and supplements pose serious safety hazards, hazards that you may be unaware of. We have overindulgently endorsed the makers of vitamins and supplements, who promote their products as healthy alternatives to prescription medications. Many doctors take a hands-off approach to vitamins, or have the attitude that if they don’t do any harm it’s okay to take them.
However, vitamins and supplements can and do cause harm. And unfortunately, the government has contributed to the misinformation about them. The US Department of Agriculture (USDA), whose job is to promote the interests of agriculture (i.e., the makers of meat and milk) and not health, regulates foods and beverages, and vitamins and supplements are classified as foods, not drugs. Lobbyists for the vitamin and supplement industry have blocked efforts by the Department of Health and Human Services (DHHS), the federal agency responsible for health, to get involved.
The USDA’s Recommended Daily Allowance (RDA) of vitamins and minerals has been great for the vitamin and supplement industry, as well as for cereal makers who supercharge sales by adding vitamins and minerals to breakfast foods and then convincing customers they need to eat these fortified products to get their minimum daily requirements. This is despite the fact that the recommendations are set about four times higher than what you really need – so high that there is no way to get that much from normal food without overeating. The fact is that you don’t need extra vitamins, and that if you stick with fresh vegetables and fruit and other whole foods, you will stay healthy. Those making big money on vitamins and supplements are often doing so at the expense of your pocketbook, and sometimes your life.
All this is not to say that many medications have not changed life for the better, particularly those that treat infections. Ironically, however, most recent health gains have come through increased knowledge of health risks and better health practices (in other words, prevention). We smoke less, have better access to nutritious fruits and vegetables year round, pay more attention to cleanliness and hygiene, and have improved safety in general. In the 19th century, for instance, it was not known that dirty water and shared cups could spread disease. Hand washing is still the single most powerful way to prevent the spread of communicable disease, but its value was not discovered until 1847, when Ignaz Semmelweis, a young doctor in a Viennese obstetrics ward, observed that mortality was lower among patients delivered by midwives than among those delivered by doctors, who often went from autopsy room to delivery ward without so much as a hand wipe.
Future advances in health will likely come more from changes in lifestyle, diet and exercise, than from medications. Almost all of the chronic conditions for which pills are prescribed are preventable through such changes. Other conditions like cancer are partially preventable.
It is time for Americans to rethink the role of medications and other pills in their lives relative to other actions that can be taken to maximize health, such as making changes in diet, incorporating exercise into one’s daily routine, learning and using stress reduction techniques, and changing other behaviors like quitting smoking. The most common disorders, like adult-onset diabetes and heart disease, are better treated and prevented through changes in diet, exercise, and lifestyle than they are with medication, as several scientific studies have shown. Pharmaceuticals can be life-saving for some conditions, such as insulin for Type I diabetes, thyroid hormone for hypothyroidism, or antibiotics for life-threatening infections. Before you take that pill, consider taking charge of your health by making informed decisions and smart changes in your lifestyle. In some cases, however, you may need medications for prevention or treatment of disease, or to help you with troubling symptoms or disabilities. In those cases you should know as much as you can about the risks and benefits, so that when it is time to talk to your doctor you can make an informed decision that both of you are happy with.
J. Douglas Bremner, MD, is a physician, researcher, and the author of ‘Before You Take That Pill: Why the Drug Industry May Be Bad for Your Health: Risks and Side Effects You Won’t Find on the Label of Commonly Prescribed Drugs, Vitamins and Supplements.’