Conflict of Interest: Human Testing and the Pharmaceutical Industry

Ask Greg Koski about recent critiques of the research world he policed for two years—critiques over conflicts of interest, lack of ethical common sense, the handling of people as if they were objects—and one gets more candor than one might expect from a Harvard-trained physician whose own colleagues have come under such fire.

One hears from Koski that research using human beings is, at its core, ethically flawed. That after a string of deaths involving participants in medical studies, the public is understandably skeptical of the motives of researchers. And that the world is just now seeing the consequences of a sometimes insidious partnership between the researchers and the companies that finance their work.

“I don’t think anybody is really surprised,” Koski says of discoveries that scientists had financial and professional conflicts that may have kept them from telling people about the risks of their research. “Most reasonable people will understand that when there are large financial stakes and no clear guidelines or rules that people will work around the margins.”

Surely such frankness says something about Koski, who has just left the helm of the Office for Human Research Protections (OHRP)—the relatively obscure federal agency that became somewhat less obscure after it investigated deaths at Johns Hopkins University and at Seattle’s Fred Hutchinson Cancer Research Center last year. Perhaps it says Koski is the independent-minded regulator that critics demand in an office whose chief concern should not be scientific discovery but the protection of human life. Or perhaps Koski is simply aware that in the hyperalert climate created by these deaths, he could not afford to appear secretive.

As researchers, universities, lawmakers, government agencies, and watchdog groups grapple with the deficiencies in law, regulation, and moral aptitude that led to these and other deaths, there is no shortage of second-guessing about what should have been done. Harder is the struggle to craft new guidelines that balance patient welfare against scientific breakthroughs that could ease suffering and save lives.

‘Lapses’

Koski’s is one organization on the front lines of this struggle. The OHRP was created in 2000 to succeed the Office for Protection from Research Risks, which operated under the National Institutes of Health. After the late 1990s revealed a string of flawed experiments—research in which patients were not fully informed about risks and in which researchers had private financial interests—the office was moved out of the NIH and placed directly under the Department of Health and Human Services to sever ties between the regulator and the research community it was policing.

Since the 52-year-old Koski took over in the autumn of 2000, he has tripled the office’s budget to $3.7 million and doubled its staff to 48. In the past decade, the office and its predecessor have shut down 25 major research projects because of lapses during medical experiments. For example:

  • At the Fred Hutchinson Center, researchers conducting cancer experiments withheld from patients information about the risks involved and the alternatives available, the Seattle Times reported last year. Doctors also failed to disclose their financial interests in the experimental blood-cancer treatment. In one trial, 80 of 82 people died, including at least 20 from causes directly attributable to the experimental treatment.
  • At Johns Hopkins last year, Ellen Roche, an otherwise healthy woman, died of respiratory failure after inhaling a chemical administered to help scientists study the effects of asthma. Investigators later uncovered evidence that the chemical could be unsafe, evidence the researcher had not found in his literature search. Moreover, the consent forms signed by Roche and other volunteers made the chemical seem like a benign and federally approved product.
  • At the University of Pennsylvania in 1999, 18-year-old Jesse Gelsinger died during a gene-therapy study aimed at treating an enzyme disorder. The lead researcher, James Wilson, held a 30-percent equity stake in the company that owned the rights to license the drug he was testing.
  • Two years ago, a whistle-blower alerted authorities that her boss—a doctor researching a melanoma vaccine at the St. John Medical Center in Tulsa, Oklahoma—was playing down the vaccine’s side effects, had not properly tested it before administering it to his patients, and continued to give it to a pregnant woman despite warnings that it could cause birth defects. When the chairman of the medical-center panel overseeing the research learned of the lapses, he failed to share his findings with the other panel members. Instead, he whitewashed the experiment in his annual report to protect the doctor.
  • After a man died during a gene-therapy experiment at the St. Elizabeth Medical Center in Boston in 2001, his family filed a wrongful-death suit, alleging that the man would never have entered the experiment had he been told about the 20 percent financial interest that the researcher and hospital had in the product being tested.

Such lapses are relatively rare in the world of human-research trials, and experiments involving people are a necessary part of the process by which researchers test promising new treatments. Nevertheless, these serious lapses have incited public demand for more oversight of the scientists and universities that conduct research on people.

“Obviously, some people are evil to the extent that they know they’re trading human life for some other goal,” says Virginia Sharpe, who directs the Integrity in Science Project at the Center for Science in the Public Interest. “It’s the utilitarian rationale for research—that the ends justify the means and that the overall benefit justifies sacrifice of individuals. That’s been true of all research in all times, including research done in Nazi Germany as well as research done here.”

Scientists who conduct experiments with federal money, or who apply to the Food and Drug Administration (FDA) for a license to market a drug, are already regulated by federal law. They must answer to an Institutional Review Board (IRB), a panel of at least five members, most of them the researcher’s scientific peers, charged with ensuring that experiments remain scientifically and medically sound.

But there are ways to cut corners. Motivated by the potential payoffs of their research—fame, for instance, or money from a company sponsoring an experimental drug—some doctors play down the possible side effects of their experiments. Then too, some patients are so desperate that they ignore warnings about serious side effects or possible death. But because IRBs are usually packed with the researcher’s peers, they can be more sympathetic to the doctor than to the patient.

Adil Shamoo, a member of the OHRP’s advisory committee who helped write new guidelines intended to prevent conflicts of interest in research, estimates that some 40 percent of human research—involving perhaps four or five million people—is entirely unregulated. Because these experiments are neither federally funded nor FDA licensed, the researchers answer to no one.

Shamoo also suspects that even the 60 percent of studies that are federally regulated report to the OHRP only a fraction of the “adverse” events that happen during an experiment—significant side effects, for instance, or violations of research protocol. In the past decade, studies involving some 30 million Americans reported only 386 adverse events—a number Shamoo says is statistically impossible. The true figure, he says, should be closer to 50,000.

“In the field of human research protections, you only hear about a few deaths and few shutdowns [of research], but in my view, these are only the tip of the iceberg,” says Shamoo, who believes research institutes are riddled with conflicts because so much of the policing is done by peers. “When you are an employee [of an institute], your livelihood is beholden to the institute. It’s no different from Enron employees and their lawyers deciding [alone] whether or not they’re conducting business correctly.”

One of the biggest conflicts in human research occurs when companies—typically those in the pharmaceutical industry—put their financial backing behind a drug or treatment that a researcher is testing. A recent report by the Association of University Technology Managers indicates that the pharmaceutical industry pumps some $30 billion into research every year. Increasingly, researchers and their institutions have been investing in the companies or products involved in their research. Often, the institutions where research is conducted have as much, if not more, of a conflict of interest as the scientists and doctors conducting experiments.

Theoretically, medical research should remain independent of corporate influence. But it is not uncommon for the company that puts up the money for a study to help scientists design experiments, maintain control of data, and even hire professionals to write reports that the researcher merely signs.

“There’s an extraordinary amount of naiveté about the extent to which research is swayed by other influences—like money and status,” Sharpe says. “In every debate about pharmaceutical gift-giving and its effect on prescription practices, a doctor says, ‘Oh, I would never be swayed by being taken out to lunch by a pharmaceutical representative.’ Well, the pharmaceutical industry does not spend $2 billion a year on marketing efforts because they don’t work. There are incentives to enroll patients in protocols that can blind the institution and the people doing oversight to some of the risks.”

New Guidelines, No Requirements

In the wake of the deaths and other scandals at major research institutes, many universities—as well as groups representing researchers—drafted new guidelines to govern experiments involving humans. Last year, the Association of American Universities (AAU), for instance, published recommendations to its 63 member universities—including Harvard, Yale, and Columbia—on managing conflicts of interest. “Our mantra was ‘Disclose always, manage usually, prohibit if necessary,’” says Nils Hasselmo, president of the AAU. “When humans are involved in research…special caution has to be exercised, and any financial involvement should be approved only under exceptional circumstances.”

To Hasselmo, that means “pretty much a prohibition against any financial [ties] when humans are involved.” But the association’s report offers no guidelines as to what kind of financial ties to an outside company are “reasonable.” Moreover, the recommendations are merely that. They do not have the force of regulation. “We have no enforcement mechanisms except peer pressure,” Hasselmo says. On the other hand, he says, “It’s in the interest of the universities to handle these matters effectively.”

The American Association of University Professors (AAUP) last year published a similar report. Like AAU’s guidelines, it makes broad recommendations and urges faculty to have a hand in developing standards to deal with conflicts. But again, AAUP has no enforcement power, and it offers no precise guidelines with respect to financial ties. “Plainly, we stand for the proposition that research should be…the result of faculty following the research trail wherever it may lead, and it should not reflect the prior views or agenda of the sponsoring agency,” says Jonathan Knight, associate secretary for the 45,000-member AAUP. “The problem comes when a corporation exercises veto power over whether research should go ahead at all, which we think is quite wrong.”

Sanford Chodosh is a retired researcher, a member of the OHRP’s advisory committee, and a past president of Public Responsibility in Medicine and Research. Chodosh believes these association reports were an attempt to “try to short-stop any strict regulations.” “I’m not sure they’ve actually put a lot of teeth into” their recommendations, Chodosh says. “It doesn’t help to have the wolf guarding the chicken coop.”

President George W. Bush recently disbanded the panel on which Chodosh and Shamoo served—the National Human Research Protections Advisory Committee. Some suggested the committee had angered the pharmaceutical industry and other researchers by recommending tighter conflict-of-interest rules. Others suggested it had angered religious conservatives when it failed to support an administration push to include fetuses under a federal rule pertaining to human research on newborns. The committee may be reconstituted under Mildred Jefferson, a medical doctor who helped found the National Right to Life Committee and who has often served as that organization’s president.

As for the OHRP, the new guidelines it proposed in January 2001 remain in draft form, still being reviewed by the public and board members. The office does not know when they might be formally adopted. Among the suggested regulations:

  • Accreditation and more resources for IRBs. The federal government created IRBs in 1974 to monitor federally funded research after a series of horrific experiments, one of which deliberately infected institutionalized children in New York with hepatitis. But IRBs—there are some 5,000 of them at research centers across the country—are overwhelmed by the work they must monitor, which includes not only medical experiments but studies involving humans in political science, the humanities, and other academic disciplines. In one instance, a minority of IRB members approved an experiment—one later found to be flawed—because the full board did not have time to review the research. The OHRP’s draft guidelines recommend that IRBs be given more money and staff to ease their workload and that they follow an accreditation process that might help make the panels more independent.
  • Managing conflicts of interest. The OHRP’s guidelines propose that when conflicts exist the matter be referred to a conflict-of-interest officer or committee, presumably created by the research institute. In the consent forms they sign before participating in experiments, patients should also be assured that any conflicts have been noted and taken care of.
  • Uniformity in existing regulations. Because there are some 17 federal agencies that regulate such experiments—with rules that often conflict with one another—legislation in Congress would consolidate these activities within one office, presumably under the head of the OHRP.
  • Ethics classes for researchers. “Many [researchers]…don’t understand the requirements of clinical practice,” Koski says. “There’s been an assumption that this is a no-brainer. But that’s one of the lessons we learned, and it’s why we’re putting so much emphasis on education.”

The OHRP recommendations get mixed reviews. Many researchers, universities, and sponsoring companies consider them rigorous—perhaps too rigorous. “How much of a problem is there really?” asks Michael Werner, vice president for bioethics for the Biotechnology Industry Organization, which represents 1,100 biotech companies, academic institutions, and state biotechnology centers. “I don’t think we know the answer to that. There seems to be a perception that there’s a problem, for sure. But given that conflicts have always existed, I don’t know that you can lay all this at the feet of the commercial sponsors of [experiments].”

But for several watchdogs, the OHRP recommendations don’t go far enough. Critics worry that too much of what the office proposes would be voluntary rather than required. They are also concerned because the office can impose no civil or criminal penalties for failing to adhere to those guidelines that are mandatory. The OHRP suggests but would not require that IRBs be accredited. Says Shamoo: “[The OHRP] can stop or suspend the research, and that’s a big thing, but in my book, you’ve got to have civil or criminal penalties, or there are no teeth.”

Critics also say that IRBs need more disinterested parties—that as many as half of those on the board should be people with no ties to the research institution or to the industry financing the experiment. “The office seems to be bending backwards and sideways to continue under this delusion that self-regulation has a chance,” says Vera Hassner Sharav, a spokeswoman for the Alliance for Human Research Protection, which closely monitors conflicts of interest in research. “This is a fraternity, and the fraternity is very clearly interested in promoting the interests of the research community and the [drug-making] industry.”

Skeptics also note that the OHRP is not recommending an outright ban on ties between researchers and the companies or products involved in their experiments. The office’s suggestion that research institutions disclose financial conflicts is vague; it does not specify a threshold at which such disclosure needs to be made. Instead, each institution can decide on its own threshold—or on none. (Those that apply for FDA licenses must report if a researcher has a $10,000 interest or 5-percent ownership in the company or product involved in the experiment.) “They do suggest that if an institution has a financial stake in a product being evaluated, that the research be conducted elsewhere,” Sharpe says. “But that’s not a very strong statement. What it could say is that no institution with a [financial conflict] above $10,000 can oversee human trials.”

Shamoo agrees. “Even though they recognize conflicts of interest, they really leave it to the [research] institution to manage it. So this managing is going to be done by people who may themselves have conflicts, and that is troublesome.”

While the OHRP attempts to protect patients by issuing stronger guidelines for the disclosure of conflicts of interest, those guidelines are not clear about how or when such revelations should be made.

Koski and others say it can be difficult to explain to a layperson the technicalities involved in an experiment. Koski believes researchers need to focus not on winning the patient’s consent but on giving him the information he needs to make an educated decision. Koski says that patients who are ill often do not understand that in many instances the experiment is not going to make them better, and researchers sometimes do not go out of their way to tell them so. “Many people enroll in trials believing they’re receiving therapy,” Koski says. “And there’s a tendency [for researchers] to present information in a way that may emphasize the benefits and underemphasize the risk. But a trial that studies the safety of a new drug is unproven. People need to understand that.”

Sharpe believes researchers may have more insidious motives for shying away from details during the consent process. It could slow down the trial, for one thing, and it could persuade some patients not to participate. “It’s a huge mistake, and one that’s been made repeatedly, to take a paternalistic approach that assumes that because comprehension might be difficult, that there is not justification for disclosure,” Sharpe says. “It’s true that many patients misunderstand the nature of the research being offered, but that’s no reason to assume that informing them is unnecessary.”

Hassner Sharav says that patients themselves need to question researchers more aggressively: “We recommend to people to ask flat-out: ‘Is my doctor getting a referral fee for me to enter a trial? And if so, how much?’ It’s an okay question, but people get very nervous about it.”

Hassner Sharav and others look to Congress to fill in the holes left by the OHRP guidelines. Sen. Edward Kennedy, a Massachusetts Democrat, may propose legislation that would incorporate some of the suggestions made by critics; it might even make IRB accreditation mandatory. In the House, Rep. Diana DeGette, a Democrat from Colorado, has introduced a bill applying new regulations to all research on humans, not just experiments that receive federal money or FDA licensing. She would also give the OHRP the power to impose civil monetary penalties.

How the Bush administration would respond to such legislation is unclear, though Sharpe and others expect a fight. “We all understand the politics behind this—that a conservative government does not want to infringe on the way corporations do their business,” Sharpe says, “and that [pharmaceutical] companies with vested financial interests want to make sure that human research goes forward without a lot of obstacles, because all of their product development depends on human trials being performed.”

Fundamental Flaws

Finally, there is the matter of Koski himself. During his tenure—Koski left the job November 30 to spend more time with his family—critics worried that the Harvard-trained physician, a cardiac anesthesiologist who once led Massachusetts General Hospital’s Institutional Review Board, was too close to those he was supposed to police.

“Most professors from Harvard go back to Harvard when they finish this tour, and that presents a conflict,” Hassner Sharav says. “One needs…somebody who does not have a connection to the major research centers that they are supposed to be overseeing.”

Koski, who is returning to his work at Massachusetts General Hospital, is well aware of the criticism. “There are some who believe my appointment was intended to defang the regulatory effort, but I think I made it very clear we were committed to ensuring human research was going to be done right.”

Koski ended up at the helm of the OHRP in a roundabout way: When his son, Jerod, was just two, Koski found him lying at the bottom of a swimming pool while the family was vacationing at Lake Placid. For 14 minutes, while his father performed CPR on him, Jerod remained technically dead. When Koski finally revived his son, there was—miraculously—no brain damage. After his son’s brush with death, Koski reexamined his professional priorities and resolved to use his talents and his training for something more important than professional advancement. Today, he has few illusions about the challenges the OHRP faces.

“Fundamentally, human research is ethically flawed,” Koski says. “It’s an affront to human dignity to use another person to [achieve] another end, but that’s exactly what we do. We use an individual so the rest of society can benefit—not to mention there will be shareholders who benefit and academic careers that benefit.”

Koski acknowledges that with or without reform there will be risks. But he insists that those risks be minimized—and publicly acknowledged. “Society is far more accepting of a tragedy when it knows everybody did everything right.”

Author

  • Dana Wilkie

    At the time this article was published, Dana Wilkie was a Washington correspondent covering national politics and government for Copley News Service.
