Cancer studies get mixed grades on redo tests

An effort to reproduce findings of five prominent cancer studies has produced a mixed bag of results.

In a series of papers published January 19 in eLife, researchers from the Reproducibility Project: Cancer Biology report that none of the five prominent cancer studies they sought to duplicate was completely reproducible. Replicators could not confirm any of the findings of one study. In other cases, replicators saw results similar to the original study’s, but statistical analyses could not rule out that the findings were a fluke. Problems with mice or cells used in two experiments prevented the replicators from confirming the findings.

“Reproducibility is hard,” says Brian Nosek, executive director of the Center for Open Science in Charlottesville, Va., an organization that aims to increase the reliability of science. It’s too early to draw any conclusions about the overall dependability of cancer studies, Nosek says, but he hopes redo experiments will be “a process of uncertainty reduction” that may ultimately help researchers increase confidence in their results.

The cancer reproducibility project is a collaboration between Nosek’s center and Science Exchange, a network of labs that conduct replication experiments for a fee. Replicators working on the project selected 50 highly cited and downloaded papers in cancer biology published from 2010 to 2012. Teams then attempted to copy each study’s methods, often consulting with the original researchers for tips and materials. The five published in eLife are just the first batch. Eventually, all of the studies will be evaluated as a group to determine the factors that lead to failed replications.

Critics charge that the first batch of replication studies did not accurately copy the originals, producing skewed results. “They didn’t do any troubleshooting. That’s my main complaint,” says cancer biologist Erkki Ruoslahti of Sanford Burnham Prebys Medical Discovery Institute in La Jolla, Calif.

Ruoslahti and colleagues reported in 2010 in Science that a peptide called iRGD helps chemotherapy drugs penetrate tumors and increases the drugs’ efficacy. In the replication study, the researchers could not confirm those findings. “I felt that their experimental design was set up to make us look maximally bad,” Ruoslahti says.

Replicators aren’t out to make anyone look bad, says cancer biologist Tim Errington of the Center for Open Science. The teams published the experimental designs before they began the work and reported all of their findings. What Ruoslahti calls troubleshooting, Errington calls fishing for a particular result. Errington acknowledges that technical problems may have hampered replication efforts, but says such problems are themselves valuable data for determining why independent researchers often can’t reproduce published results. Identifying weaknesses will enable scientists to design better experiments and conduct research more efficiently, he argues.

Other researchers took issue with the replicators’ statistical analyses. One study sought to reproduce results from a 2011 Science Translational Medicine report. In the original study, Atul Butte, a computational biologist at the University of California, San Francisco, and colleagues developed a computer program for predicting how existing drugs might be repurposed to treat other diseases. The program predicted that an ulcer-fighting drug called cimetidine could treat a type of lung cancer. Butte and colleagues tested the drug in mice and found that it reduced the size of lung tumors. The replication attempt produced very similar results in the drug test. But after adjusting the statistical analysis to account for testing multiple variables, the replication study could no longer rule out a fluke result. “If they want a headline that says ‘It didn’t replicate,’ they just created one,” Butte says. Errington says the corrections were necessary and not designed to invalidate the original result. And when replication researchers analyzed the original and replication studies together, the results once again appeared to be statistically sound.
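
The statistical sticking point is the familiar multiple-comparisons problem: test enough outcomes and one is likely to look significant by chance, so corrected p-values set a higher bar. The Python sketch below is only a generic illustration of that effect; the p-values, the three hypothetical outcomes and the bonferroni helper are invented, not taken from either the original or the replication study.

```python
# Illustrative only: how a nominally significant result can lose significance
# once the analysis corrects for multiple comparisons. All numbers are invented.

def bonferroni(p_values, alpha=0.05):
    """Return Bonferroni-adjusted p-values and significance decisions."""
    m = len(p_values)
    adjusted = [min(p * m, 1.0) for p in p_values]
    return adjusted, [p_adj < alpha for p_adj in adjusted]

# Suppose one drug test reported three outcome measures (hypothetical values).
raw_p = [0.03, 0.20, 0.45]
adj_p, significant = bonferroni(raw_p)

for raw, adj, sig in zip(raw_p, adj_p, significant):
    print(f"raw p = {raw:.2f} -> adjusted p = {adj:.2f}, significant: {sig}")
# The 0.03 result clears the usual 0.05 bar on its own, but not after
# adjustment (0.03 * 3 = 0.09): a fluke can no longer be ruled out.
```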

A failure to replicate should not be viewed as an indication that the original finding wasn’t correct, says Oswald Steward, a neuroscientist at the University of California, Irvine, who has conducted replication studies of prominent neuroscience papers but was not involved in the cancer replication studies. “A failure to replicate is simply a call to attention,” Steward says. Especially when scientists are building a research program or trying to create new therapies, it is necessary to make sure that the original findings are rock solid, he says. “We scientists have to really own this problem.”

Climate change may boost toxic mercury levels in sea life

The muddying of coastal waters by climate change could drastically increase levels of neurotoxic mercury in sea life, contaminating food supplies.

Shifting rainfall patterns may send 10 to 40 percent more water filled with dissolved bits of organic debris into many coastal areas by 2100. The material can cloud the water, disrupting marine ecosystems by shifting the balance of microbes at the base of the food web, new laboratory experiments suggest. That disruption can at least double methylmercury concentrations in microscopic grazers called zooplankton, researchers report January 27 in Science Advances.

The extra mercury could reverberate up the food web to fish that humans eat, warns study coauthor Erik Björn, a biogeochemist at Umeå University in Sweden. Even small amounts of methylmercury, a form of the metal easily absorbed by humans and other animals, can cause birth defects and kidney damage, he notes.

Pollution from human activities such as fossil fuel burning has already tripled the amount of mercury that has settled in the surface ocean since the start of the Industrial Revolution (SN: 9/20/14, p. 17). Climate changes spurred by those same activities are washing more dark organic matter into the oceans by, for instance, boosting wintertime rainfall in some regions.

Björn and colleagues replicated this increased runoff using 5-meter-tall vats filled with marine microbes and dashes of methylmercury. Vats darkened by extra organic matter showed an ecosystem shift from light-loving phytoplankton to dark-dwelling bacteria that eat the extra material, the researchers found.

Zooplankton nosh on phytoplankton, but they don’t directly eat the bacteria. Instead the bacteria are consumed by protozoa, which zooplankton then hunt. Methylmercury accumulates with each step up the food web. So the addition of the protozoa middle step, the researchers report, resulted in zooplankton methylmercury levels two to seven times higher than in vats without the extra organic matter. Methylmercury levels will continue to increase up the food web to fish and the humans who eat them, the researchers warn.
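
A rough way to see why one extra feeding step matters: if each trophic transfer concentrates methylmercury by a roughly constant factor, inserting the bacteria-to-protozoa step multiplies the level reaching zooplankton by that factor again. The sketch below assumes a hypothetical per-step enrichment factor of 3 and an arbitrary starting concentration; real enrichment factors vary, and the study did not report its results this way.

```python
# Back-of-the-envelope biomagnification sketch. The starting concentration and
# per-step enrichment factor are hypothetical, chosen only for illustration.

def concentration(base, factor, steps):
    """Methylmercury level after a given number of trophic transfers."""
    return base * factor ** steps

water_level = 1.0     # arbitrary units of methylmercury in the water
step_factor = 3.0     # assumed enrichment per step up the food web

clear_water = concentration(water_level, step_factor, 2)  # water -> phytoplankton -> zooplankton
murky_water = concentration(water_level, step_factor, 3)  # water -> bacteria -> protozoa -> zooplankton

print(f"zooplankton level, clear water: {clear_water:.0f}")
print(f"zooplankton level, murky water: {murky_water:.0f}")
print(f"increase from the extra step: {murky_water / clear_water:.0f}x")
```

With a per-step factor of 3, the extra step triples the zooplankton level, which happens to fall inside the two- to sevenfold range the researchers report, though only because the factor was chosen for illustration.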

The results suggest that curbing mercury contamination is more complicated than simply controlling emissions, says Alexandre Poulain, an environmental microbiologist at the University of Ottawa. “First we need to control emissions, but we also need to account for climate change.”

Faint, distant galaxies may have driven early universe makeover

Two cosmic magnifying glasses are giving astronomers a glimpse of some extremely faint galaxies that existed as far back as 600 million years after the Big Bang (13.8 billion years ago). Such views suggest that tiny galaxies in the early universe played a crucial role in cosmic reionization — when ultraviolet radiation stripped electrons from hydrogen atoms in the cosmos.

“That we detected galaxies as faint as we did supports the idea that a lot of little galaxies reionized the early universe and that these galaxies may have played a bigger role in reionization than we thought,” says Rachael Livermore, an astronomer at the University of Texas at Austin. She and colleagues report the results in the Feb. 1 Astrophysical Journal.

The team identified the dim galaxies in images taken with the Hubble Space Telescope while it was pointed at two closer clusters of galaxies. Those clusters act as gravitational lenses, brightening and magnifying the light of fainter objects much farther away. Subtracting the clusters’ light revealed distant galaxies as faint as one-tenth the brightness of those spotted in previous studies (SN Online: 11/4/15).
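
The payoff of lensing can be expressed on the standard astronomical brightness scale: a magnification factor mu boosts a source’s apparent brightness by 2.5 log10(mu) magnitudes. The short sketch below runs that textbook relation for a few illustrative magnification factors; they are not the values measured for these particular clusters.

```python
# Why lensing reveals fainter galaxies: a magnification mu brightens a
# background source by 2.5*log10(mu) magnitudes (standard relation). The
# magnification factors below are illustrative, not the study's measured values.
import math

def magnitude_gain(mu):
    """Apparent-brightness gain, in magnitudes, for lensing magnification mu."""
    return 2.5 * math.log10(mu)

for mu in (2, 10, 50):
    print(f"magnification {mu:>2}x -> appears {magnitude_gain(mu):.1f} mag brighter")
# A 10x magnification is what it takes for a galaxy one-tenth as bright as
# previous detections to show up at the same apparent brightness.
```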

Finding such faint galaxies implies that stars can form in much smaller galaxies than models have predicted and that there were enough of these small galaxies to drive reionization almost entirely by themselves. Reionization radically refashioned the universe so that charged atoms instead of neutral ones pervaded space. Understanding that transition may help astronomers explain how stars and galaxies arose in the early universe.

“Such measurements are really challenging to make,” says Brant Robertson, an astronomer at the University of California, Santa Cruz, who was not involved with the study. “They’re really at the forefront of this field, so there are some questions about the techniques the team used to detect these galaxies and determine how bright they actually are.”

A team of astronomers led by Rychard Bouwens of Leiden University in the Netherlands argues in a paper submitted to the Astrophysical Journal and posted October 2 online at arXiv.org that Livermore and colleagues haven’t, in fact, detected galaxies quite as faint as they have claimed. That keeps the door open for other objects, such as black holes accreting matter and spitting out bright light, to have played a part in reionization.

Robertson says the disagreements motivate further work, noting that Livermore and colleagues used a clever approach to spot what appear to be superfaint galaxies in the early universe. Now, the teams will have to see if that technique stands the test of time.

Livermore and colleagues plan to use the technique to search for faint galaxies lensed by other clusters Hubble has observed. Both teams, along with Robertson, are also looking to the October 2018 launch of the James Webb Space Telescope, which should be able to spot even fainter and more distant galaxies, to determine what drove reionization in the early universe.

See how long Zika lasts in semen and other bodily fluids

Traces of Zika virus typically linger in semen no longer than three months after symptoms show up, a new study on the virus’ staying power in bodily fluids reveals.

Medical epidemiologist Gabriela Paz-Bailey of the U.S. Centers for Disease Control and Prevention and colleagues analyzed the bodily fluids — including blood, urine and saliva — of 150 people infected with Zika. In 95 percent of participants, Zika RNA was no longer detectable in urine after 39 days, and in blood after 54 days, researchers report February 14 in the New England Journal of Medicine. (People infected with dengue virus, in contrast, typically clear virus from the blood within 10 days, the authors note.)
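
As a rough illustration of what the 39- and 54-day figures mean, the sketch below estimates the day by which 95 percent of a group has cleared viral RNA as the 95th percentile of per-person clearance times. The clearance times here are simulated from an arbitrary distribution; the study’s own estimates came from survival analyses that also handle participants still testing positive at their last sample.

```python
# Minimal sketch: "95 percent cleared by day X" is roughly the 95th percentile
# of per-participant clearance times. The data below are simulated, not the
# study's; the real analysis used survival models that account for censoring.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical days until viral RNA was last detectable for 150 participants.
clearance_days = rng.gamma(shape=2.0, scale=8.0, size=150)

day_95 = np.percentile(clearance_days, 95)
print(f"95% of simulated participants cleared viral RNA by day {day_95:.0f}")
```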

Although the CDC recommends that men exposed to Zika wait at least six months before having sex without condoms, researchers found that, for most men in the study, Zika RNA disappeared from semen by 81 days.

Few people had traces of RNA in saliva or vaginal secretions. Most sexually transmitted Zika infections have been passed from men to women, but scientists have reported at least one female-to-male case.