Joseph J. Amon
I want to relate your point – that the randomized controlled trial (RCT) is treated as the only form of evidence counted as legitimate – to shifts in responses to HIV/AIDS over the years. The initial emphasis was on descriptive studies, just convincing people that HIV was real, that it was being transmitted (in their country), that they needed to do something about it. There were few resources at that time, but as money increased and knowledge grew, more complex behavioral studies using varied methodologies were conducted. But increasingly, the actors changed. More economists came in, emphasizing RCTs for behavioral interventions, but not always knowing the behavioral literature. There was much more emphasis on treatment, and a loss in understanding of the populations most affected. In 2005, I attended the annual meeting of the Global AIDS Program at the Centers for Disease Control (CDC), where I was then working. When the prevention panel came up, there were no social scientists on it, just physician/epidemiologists. The entire panel agreed that they simply didn’t know what worked for prevention because there was no evidence from RCTs. They didn’t seem to have any awareness of all of the other forms of knowledge around HIV prevention that had accumulated over the past twenty years. What was striking was that following the prevention panel was a panel on treatment. They all agreed that the best approach to treatment in resource-constrained settings was not always clear, but – instead of waiting for RCTs – that it was imperative to act, to implement treatment and to learn lessons through implementation. In this case, proponents of biomedical interventions embraced broad forms of knowledge while those in charge of behavioral interventions insisted upon RCTs.
Michael A. Whyte
I work on the anthropology of food and I am reminded of Michael Pollan’s book In Defense of Food, a critique of nutritionism, in which he lays out an idea similar to what Vincanne is describing: what is measured is what can most conveniently be measured rather than what is really out there. In nutritionism, for example, what is measured is presented as what people actually consume or what is actually in, say, a real potato. In fact, what is measured is what can be extracted or abstracted chemically from the diet or the potato using a set of tests. The list of ingredients depends on the test used. This is not reality. I have worked with veterinarians on poultry in the Third World. It was very interesting to watch the direct ways in which they communicated with people about these animals. They really listened to them; not unlike anthropologists, but very much unlike RCTs. The problem was that most vets had no ‘scientific’ framework for using all the wonderful social and cultural information their field practice evoked… This is something to reckon with: the kinds of things we do with people can be a gold standard, but not reach people.
Is there anything good about evidence-based medicine (EBM) in public health? Yes, we can see it as a market-controlled, neoliberal strategy for evaluating the effectiveness of interventions that ignores the social dynamics of health, but are there any positives? A lot of people are excited about EBM as an important check on the profit motives in medicine. Also, are anthropological forms of producing evidence changing given what is happening in global health? What is the anthropologist’s role in all of this?
What about the quality of the data? There is a lot of fabrication of data and ghost-writing in pharmaceutical evidence-making. In public health, goals precede data, and I’m wondering about the politics involved when people are forced to provide public health data under this rubric?
What Vincanne is telling us seems to go against what the anthropology of development has been emphasizing for a long time—process rather than outcome. On which side is public health? On the science side (epidemiology, economics) or on the development side (policies, change)? I also have a question about levels of discourse. There is a kind of double discourse going on. One for funders and the general public (“We can measure it, we can understand it”), and one for discussing things among peers (“We know it doesn’t work, but we have to do this.”) Among peers, at least, it seems that there is some honesty about the value of RCTs.
I agree that there are probably different discourses on the value of RCTs operating in public arenas and among professionals, but there has been a real shift toward privileging the RCT with the arrival of the economists. The entrance of evidence-based medicine into global health in the mid-1990s was meant to rectify the problem of non-standard care-giving and to manage costs. Public sector investments are more and more tethered to the ideas of market utility. My concern is that all forms of non-RCT knowledge run the risk of being discredited.
I know that there are critiques of modes of data production within public health, but by and large they are not about what is being erased, but about how to do RCTs better. In my view, local knowledge needs to take precedence, even when it is construed (as in Chinese medicine) as non-theoretical. This knowledge is extremely efficacious but because it is not theorized in certain ways it seems less generalizable. We have to attend to how notions of efficacy migrate with different methodologies.
Evidence-based medicine emerged from the problem of not knowing how to figure out whether a practice was working or not and also for insurance providers to limit payments. It is a double-edged sword. It might do some good things, but it diverts attention from other things.
Also, I think the idea of process as outcome in development is an important one, but we’re still faced with the problem of accountability.
Continuing to explore the idea of the counter-model, it is interesting that some economists doing RCTs, like Michael Kremer and Edward Miguel, ended up finding that interpersonal relationships are crucial for the implementation of health policy. Kremer and Miguel initially found that treating Kenyan schoolchildren with extremely cheap deworming medication increased their school attendance by some 10 percent. And this RCT was heralded as a landmark. With just a bit of cheap medication, poor countries could increase school attendance by leaps and bounds. But when the scholars returned to the field and followed a group of families outside the original study, they found something unexpected. Families who were friendly with families in the initial treatment group were less likely to treat their children than those who were friendly with families in the control group. They were also less likely to deem the medication effective at improving health. The question of how to learn to bring local communities into the very design and implementation of feasible rather than technology-enamored interventions remains a huge challenge.
Also, business scholars like Michael Porter, who is working with Jim Kim and Paul Farmer on pathways for comprehensive health care delivery for the world’s poorest, do not want to be held hostage to the elusive objectivity of the RCT and want, rather, to highlight what might possibly work right now. According to Porter, by attending to what is working on the ground we are compelled to create alternative systems of evidence-gathering that contemplate the value that interventions have for real people and health systems over time.
Michael M. J. Fischer
We need to provide strong grounds for ethnographic approaches. This is the crisis of anthropology at the moment. What we do is discredited by field after field. We can start by identifying key events when ethnographic information was crucial to changing the outcome – for example, the case of thalidomide in the U.S. and Turkey.
We also have to restore the specific political history of the use of RCTs in specific countries. In Chile, the idea of “community” itself was seen as ideological. In order to integrate mental health into primary health care, RCTs were needed because community discourses were seen as left wing. How do we make a science of the concrete here? On the one hand, we can talk about ethnographic anecdotes, but how do we count and account for these political intangibles? There are ways of counting that are not statistical, but that also show intensities.
I am struck by this rubric of efficacy which is larger than just the market—convincing the funders requires an ethical call. Is there an ethical efficacy that the market efficacy relies on?