Finding My Way to Environmentalism

One of my earliest memories about science or politics is of enthusiastically explaining to my dad that while we should stop using CFCs, we should also fix the ozone hole by sending up some high-altitude planes to re-seed the ozone layer where it was thin. I even drew pictures. Obviously, I was too young and ignorant to understand how hard that would have been. Thankfully, today the ozone hole is not nearly the problem it was, due to a concerted international effort to restrict the use of ozone-destroying chemicals. But I can honestly say I’ve thought of myself as some kind of environmentalist for most of my life.

As I grew up, I became disillusioned with much of the modern environmental movement. Keith Kloor’s recent piece in Slate about the eco-pragmatists touches on a lot of why, as an adult, I found environmentalism so hard to swallow. Too often1 it seemed like environmentalism rejected the benefits of modern civilization. The answer to every environmental problem was for humans to leave nature alone, restrict our activities, use much less, and often even eschew technology. But, I tended to argue frustratedly to my friends, human beings are nature. If a beaver dam is natural, then so is a human dam. There seemed to be no place for me.

For many years, my only real environmental act was occasionally supporting a politician or position that seemed reasonable (and I do mean occasionally). Worse, if you search carefully you can find some very embarrassing things I’ve written. I can only plead youthful arrogance and a healthy dose of (self-maintained) ignorance2. I like to think I’m at least a little smarter now (and a bit less ignorant). I’ve realized I only have so much time to make some kind of difference in the world. The world is better than it’s ever been for human beings, but I should be helping make it better still. A few years ago, I started donating money to organizations I think will make the world better. Recently I even took the Life You Can Save pledge. But one organization I started supporting was specifically environmental — I joined the Sierra Club because I thought they were effective politically.

But in September, I had a frustrating experience with them when I was unexpectedly kicked off a mailing list. It showed the dogmatic side of some environmentalists. A fairly senior leader in the club tried to mediate the conflict (unsurprisingly, my removal was against club policy). He was very good and positive about trying to find a solution, or at least help us all see other sides. But it was clear that no one was interested in re-considering the organization’s position on biotech. The senior leader (honestly going to bat for me as much as he could) clearly felt his energies were better spent elsewhere. That’s fine, of course. We must all choose our battles. But it was clear to me in private conversation with the biotech committee that there was little point in my trying to change policy there. He did ask me pointedly several times: what did I want to accomplish?

That’s a really good question. What do I want to accomplish? I don’t know everything I want. Obviously I want to help reduce our harmful impacts on the environment. I want us to actually do more about climate change. But those aren’t things any one person can accomplish. There is one thing, though, that I can (maybe, arrogantly) help with. I want the conversation to be better.

I want our conversations about environmental questions to be science- and evidence-based, because I believe that will let us make the best decisions. Obviously our diverse values must be part of the conversation, but real information is critical. Exaggerated or mis-reported facts cloud the discussion. They make partisans more partisan and de-value all evidence. The complexity of what we know (and don’t know) needs to be taken into account. We need to see why the other side sees things the way they do. I want to see less scare-mongering3 and more respect for just how far we’ve come — cities are no longer as smoggy as they once were, water quality is better, and we’re getting better at reducing the impacts of our existence on other life. Most importantly, scaring people doesn’t make them act.

We need better conversations if we’re going to get anywhere. I try not to laugh when someone makes a particularly (to me) specious argument for some left-wing sacred cow. I need to ask myself: why do they make that argument? What are they valuing? How can I help us talk about the real problems? Can we meet somewhere in the middle? I see Kloor’s eco-pragmatists as people who are generally trying to talk about diverse values and the complexity of evidence, rather than historical alignments. I like to think most of us want to be that way, even if day to day we easily fall into tribal position-taking. That gentleman who tried to mediate between me and the biotech committee at the Sierra Club was really trying to get us to see that we can disagree and still get something done.

I think I will re-join the Sierra Club. We share some values and they do get some stuff done I want done. But I think the Nature Conservancy might get more of my money and time because they seem to be leading on putting more evidence into the conversation.

  1. Note this is partially a function of the way the media portrays any conflict and not necessarily reflective of people’s actual beliefs.
  2. I will not link to them. Suffice to say if you find something and want to know my current position: please ask and don’t assume. Everyone is allowed mistakes no matter their age and everyone can think and do better.
  3. Scary stories are rarely as evidence-based as they should be.

Your Citation Does Not Say What You Think It Does

Recently, Dr. Oz said some things that certain left-leaning folks have interpreted as condemning organic agriculture. Tom Philpott at Mother Jones wrote a piece about how Dr. Oz got it wrong, citing various recent studies about pesticides. I thought most of the citations weren’t very convincing, and he neglected to cite anything pointing out that most conventional food has fairly small or undetectable pesticide residues. But one study he cited completely contradicts the point he was trying to make.

Normally I would just comment on his post, but given how busy I’ve been this week, it’s a few days later and a comment won’t actually be read (well, it won’t be read much here either).

Updated 2012-12-09: Well, this is embarrassing. Somehow I neglected to actually link to Philpott’s piece. This is now fixed.

The second study he cites for evidence that organic food helps one avoid pesticide exposures is “Neurobehavioral problems following low-level exposure to organophosphate pesticides: a systematic and meta-analytic review” (sadly pay-walled) about which he writes:

For a paper released in November, UK researchers conducted a “meta-analysis” on the neurological effects of organophosphate pesticides at low levels—that is, they gathered all of the well-designed studies on the topic they could find and analyzed the combined results. They found a “significant association between low-level exposure to OPs [organophosphates] and impaired neurobehavioral function.” Specifically, they found that exposure to the pesticides reduced people’s memory and their ability to process information quickly. Organophosphates have been “largely withdrawn from use” in the last decade, EWG reports, but the Environmental Protection Agency has not seen fit to ban them, and they are still sprayed on some crops. According to EWG’s latest analysis of USDA data, they still turn up in bell peppers, green beans, kale, and collards—again, all foods that authorities like Oz rightly encourage people to eat more of. EWG recommends buying these foods organic if possible.

Philpott’s text, including his referencing the EWG and the detection of OP residues on certain foods, implies that this study is relevant to dietary organophosphate exposure levels. If you got the impression that eating conventionally grown kale or collards might give you neurological problems, I wouldn’t be surprised. However, when I loaded the study, I found that the first sentence of the abstract is: “Meta-analysis was carried out to determine the neurotoxic effects of long-term exposure to low levels of organophosphates (OPs) in occupational settings.” Occupational exposure levels are rarely comparable to non-occupational ones, so I was a bit skeptical that Philpott’s implication would hold up.

A nice internet person helped me look at the study, and unsurprisingly the included studies are ones of workers applying pesticides on farms, pesticide factory production workers, pest control workers, sheep dippers, and so forth. In other words, people who work with pesticides either every day or regularly as part of their job, which exposes them to much greater amounts than anyone eating food bought from Safeway. Further, this meta-study explicitly excluded studies that didn’t have a control group that wasn’t similarly exposed. One of the exclusion criteria reads: “Animal studies, studies of children, studies of human adults which did not include an unexposed control group, single case reports”. That is, if a study looked at neurological effects in farm workers who apply pesticides, it would be excluded if it didn’t include a comparison to a similar (probably non-farm-worker) control group.

This review did find that most of the included studies supported its title. Occupational exposure to organophosphate pesticides does appear to be linked to neuro-behavioral problems. But since each of those studies had to include an appropriate non-occupationally exposed control group, the very fact that a difference was found means the non-occupationally exposed people weren’t showing similar effects. Otherwise, the studies wouldn’t have been able to find a difference. Duh. So unless you’re a sheep dipper or a farm worker or a member of one of the other groups studied here, this study just doesn’t apply to you. Your kale is not going to give you mental problems.

To summarize: using this study to justify the idea that you should avoid certain conventionally grown vegetables because the organophosphate residues might cause neurological problems is wrong. It’s a good study to cite for why it’s important to closely regulate pesticide use to protect workers (including maybe banning some pesticides!). However, that’s not what Philpott wrote, despite his mentioning the issue of workers’ rights elsewhere. This study says nothing whatsoever about dietary intake of pesticides. This citation just doesn’t say what the author claims it does. Period.

Peer-Reviewed Journal Articles Aren’t Gospel

Science is never “done”. We will never have all the answers tied up in a bow with no chance of revision later. There are theories that are pretty settled: we aren’t all going to start falling away from the ground unless something big happens (you know, like a huge, Jupiter-sized planet colliding with us). The fundamentals of evolution aren’t going to be contradicted, only refined. But to discover new knowledge, a lot of modern scientific work has to happen at the edges of what we already know. Thus, it’s by definition pretty tentative. A science paper is just one team’s (or author’s) view of some question. A science paper is not gospel. Just because it’s published “in a journal” doesn’t mean it’s good research or even that it’s remotely true.

But people (rightly or wrongly) put a lot of stock in claims when they are backed up with a journal citation. Unfortunately, it’s pretty easy to get anything published in some journal, somewhere. When we consider claims that are not consistent with the broad scientific consensus on a topic, an article supporting those claims would have to be very impressive. Scientists who successfully challenge consensus do not get published solely in journals no one has heard of (or journals that many other scientists think have low standards). Good challenges to consensus on a topic are well-written, well-supported, and acknowledge the contradicting research. An article I was passed recently is a good example of low-quality work and shows traits common to low-quality articles. This example is related to biotech agriculture (aka “GMO crops”), but the signs can be applied to many types of journal articles.

The Example Article

The article I was sent is “A Review on Impacts of Genetically Modified Food on Human Health” (PDF) by Verma et al. I discussed this article on the Biofortified forum, so some of this was discovered by others there. The article is ostensibly about transgenic crops and their uses and harms, but it shows strong signs of having been written to support a pre-conceived position that isn’t consistent with the body of scientific research. It is thus a good example of what to look out for when someone sends you an article that “proves” something is true.

Broad, Strong Claims Using Slanted Language

First, credible science articles do not make over-the-top claims — at least not usually, and generally not in strong language. When I skimmed the article, I noted a section heading with the title “GMOs ARE INHERENTLY UNSAFE”. First, researchers in biotech crops don’t often call them “GMOs” in their articles, because the term carries different connotations for many people, since nearly all human foods are “genetically modified” from wild sources. It’s also a somewhat loaded term due to activism over the last decades, and some science articles I’ve read only mention the term in the keyword list. But more importantly, the phrase “inherently unsafe” is just not scientific language. There are very few substances that could be labeled that way; even chemicals that strongly cause cancer — or even nuclear materials — would probably not be called “inherently unsafe” in a good article, because there are still legitimate uses for them. Few things are “inherently unsafe” — or inherently safe, for that matter.

Detecting Cherry-Picking

You don’t necessarily need to know a field well to spot that an article is cherry-picking the research it uses to bolster its position. In this case, this is ostensibly a review or commentary on biotech agriculture in general. If you’re aware of the topic at all, you know that there are a lot of biotech crops planted worldwide. Thus, a section titled “GM DIETS CAUSE LIVER DAMAGE”, if it is going to fairly demonstrate that its claim is true, needs to cite the many research papers that did not show this as well as the few that do, and argue why the latter are more convincing than the former. Why? If GM food actually caused liver damage, we would have strong epidemiological evidence and many worried primary care doctors. There would be studies on it. And, in fact, there are numerous studies on GM foods looking for organ damage (primarily in non-human animals, since they are easier to do controlled studies with), and very, very few show any problems. A balanced review of this question thus needs to cite those articles. This section doesn’t. You can spot that the article is cherry-picking because it’s just not credible that there are only a handful of studies on this question and none of them contradict the claim. There’s almost always some paper contradicting a health or medical claim.

But sometimes when looking at an article you don’t have background cues like this to judge the likelihood of cherry-picking. I firmly believe that non-scientists should be able to look at the scientific literature themselves. You have to be careful not to fall into the trap of thinking that one article is gospel, but you can go read the literature. If you don’t know a lot about a topic, a good way to get a handle on it is to find a review article in a high-profile journal. Reviews are usually written in a less specialist way and explain more terms. I would look for one in a PLOS journal (they have several open access ones that anyone can freely read), in Science or Nature (the “prestige” general-subject journals of the world), or in a well-regarded subject-specific journal. But you’ll run into some trouble. How do you decide which journals to trust a review in? A Google Scholar search for the topic plus “review” will probably turn up a well-cited review. Unfortunately, you might not have access to it. But let’s assume you get access1. The review should give you a good idea of the field, and more importantly it will have lots of citations. If you think an article is cherry-picking and it misses a seemingly relevant citation in this list that contradicts the article’s claim, then you might ask why2.

Major Typos, Editing Mistakes or Ungrammatical Language

This one is admittedly a bit hard. Even a good-quality journal might publish something with typos or other editing mistakes. But an article that has many of them, or incredibly blatant ones, was probably not carefully edited by the authors or the journal staff. This example has one so blatant it suggests the journal is of very low quality. The abstract (the summary at the start of a scientific article, and often the only thing many people read) contains this: “?not sure what is being said here?”. Those question marks are not a typo on my part; they are actually in the PDF I downloaded. This is clearly an editor’s note that was left in the published document. The article itself is dated 2011, so there’s been ample time to correct it on the journal’s website. It hasn’t been. I’m left with the impression that it wasn’t carefully reviewed before publication and, moreover, that hardly anyone has read it after publication.

Now, this particular sign is obviously somewhat problematic. Many of us don’t really know English well enough to distinguish stilted, poor writing from actually ungrammatical writing — and a lot of scientific writing is stilted and poor even in good journals. Moreover, many scientists by necessity publish articles in English even though it isn’t their first language, or even one they are fully fluent in. However, we’re talking about articles that challenge the consensus views of a field. A good one will most likely get support to fix any language issues before publication, because it will be so note-worthy. In any case, this is only a sign, and I would err on the side of giving an article the benefit of the doubt for spelling and grammatical errors.

Predatory Open Access Journals

You’ll have noticed, if you’ve ever tried to read published scientific research, that often you just can’t get to the paper. Most journals still require subscriptions to access articles. A scientist publishing in a particular field most likely has access to the more important journals in their field (often through their university’s institutional subscriptions), but you probably don’t. Journal subscriptions are expensive, even for a university, and many researchers don’t have access to all the papers relevant to their research. Some scientists have been pushing for a model called “open access”, where every article is freely available to anyone (on the internet) once published. The premier example is the PLOS journals mentioned above, but even the high-profile closed-access journals Science and Nature provide a way for authors to make their papers open access. The main benefit of open access is that anyone can get to the research, but many open access journals also make an effort to publish more rapidly than other journals (though they are still peer reviewed).

But this leads to a darker side. Open access journals currently depend primarily on publication fees, paid by the authors of an accepted paper, to cover the costs of maintaining the journal. Accepted articles are still reviewed by other scientists for free, but editors and the staff who maintain a website do cost money. Good open access journals are transparent about their funding structure and also waive publication fees for scientists who can’t afford them (e.g. those from developing nations). But the benefits of open access have been sufficiently hyped — and researchers are sufficiently desperate to get published — that there are now numerous predatory open access journals, published by companies that aren’t prioritizing good-quality science but rather accepting papers regardless of quality so they can collect publication fees.

This particular article was published in a Bentham Science journal, part of a group that looks pretty scammy to me. One problem noted with this group is that scientists who don’t even work in the field a journal is supposed to cover are being asked to serve as editors. Editors at journals are responsible for filtering and curating appropriate work to be peer-reviewed; they usually set policy on peer review and quality, and often even have the authority to publish a paper despite peer-reviewer objections. The choice of editor for a journal is a big deal and extremely important to setting or maintaining quality. A publisher that asks inappropriate people to be editors is not a good publisher, especially if it does so repeatedly. This publisher also repeatedly spams scientists to publish in its journals. I don’t think I can trust anything coming from this group of journals (though see the addendum below for more on why this paper is particularly poor).

These are just a few signs that an article someone sent you isn’t very good. They’re not foolproof, and you would probably want to ask someone you trust for their thoughts as well. Modern social media (and heck, even email!) means you can actually find someone well-respected to ask. You can even send a tweet to some scientists, and many will answer. There are also a lot of science journalists on Twitter: they may not be scientists, but they can spot bullshit pretty well. The point here is not to nit-pick a paper to shreds, but rather to help you figure out whether a strong claim someone is making based on some paper is reasonable and supportable by the balance of scientific evidence. Big claims require strong evidence, and a poorly written paper in a crappy journal that doesn’t acknowledge contradicting research just isn’t good evidence.

Addendum: And even more wrong ….

As a small (hopefully) addendum, I want to note that no one should cite this paper ever as evidence that GMOs are harmful. Beyond the above flaws, we discovered that:

  • The journal editor and one of the authors of this paper appear to be the same RB Singh. Editors do sometimes publish their own work in their own journals, but it can be a sign that they are misusing their position to pad their publication record.
  • The RB Singh in question appears to be a researcher whose work has fairly convincingly been shown to have problems. While it’s possible that the Ram Singh on this paper is not the same one as in this noted case in the BMJ, they appear to share the same affiliation in the same city in Uttar Pradesh, so I think it likely. I’m not sure I would trust a researcher with this publication record (and this resistance to providing data).
  • Karl Haro von Mogel on Biofortified discovered that the “GMOs ARE INHERENTLY UNSAFE” section was plagiarized from a document published by the anti-GMO activist Jeffrey Smith. I hope this is obvious, but a credible scientific article is not going to plagiarize.

With all of that, I’m sad to say that this paper is cited by a publication from the Center for Food Safety. I originally learned about it from a local Democrat who replied to questions I had sent to the local group trying to get a GMO labeling law on the ballot. I’m sad to see that it is apparently commonly cited by activist groups. It’s frankly not a credible document.

Updated 2012-12-02 12:15: I fixed a few missing words and cases of unclear grammar.

  1. Ways to get a hold of an article: a local university may have a copy that you can look at; you might ask on twitter or other social media; you might even write the corresponding author directly.
  2. But be careful: because of the dates on articles (when they are published and when the authors wrote them can be more than a year apart), a relevant citation may be honestly absent. Plus, a scientist might simply have missed relevant research. This method of detecting cherry-picking needs to be balanced against that. Blatant cherry-picking, as in this example, will show numerous relevant citations missing.