
Sunday, 25 January 2015

Science of Stupid Episode 10: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!



Science of Stupid Episode 9: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!


Science of Stupid Episode 8: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!




Science of Stupid Episode 7: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!



Science of Stupid Episode 6: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!




Science of Stupid Episode 5: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!



Science of Stupid Episode 4: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!



Science of Stupid Episode 3: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!



Science of Stupid Episode 2: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!



Science of Stupid Episode 1: LEARN SCIENCE WITH FUN AND SMILE!

FOR PAKISTANI VIEWERS ( CLICK HERE TO WATCH )

LEARN SCIENCE WITH FUN! SEE THE STUPID THINGS SOME PEOPLE DID AND LEARN THE SCIENCE BEHIND THEM! THIS IS THE BEST WAY TO MAKE SOMEONE UNDERSTAND SCIENCE!


Saturday, 24 January 2015

Blood vessels in older brains break down, possibly leading to Alzheimer's

University of Southern California (USC) neuroscientists may have unlocked another piece of the puzzle of how to prevent the risks that can lead to Alzheimer's disease. Researchers at Keck Medicine of USC used high-resolution imaging of the living human brain to show for the first time that the brain's protective blood barrier becomes leaky with age, starting at the hippocampus, a critical learning and memory center that is damaged by Alzheimer's disease.
The study indicates it may be possible to use brain scans to detect changes in blood vessels in the hippocampus before they cause irreversible damage leading to dementia in neurological disorders characterized by progressive loss of memory, cognition and learning. These findings would have broad implications for conditions that will affect 16 million Americans over age 65 by 2050, according to the latest figures from the Alzheimer's Association. The research appears in the Jan. 21, 2015, edition of the peer-reviewed scientific journal Neuron.
"This is a significant step in understanding how the vascular system affects the health of our brains," said Berislav V. Zlokovic, M.D., Ph.D., director of the Zilkha Neurogenetic Institute (ZNI) at the Keck School of Medicine, the Mary Hayley and Selim Zilkha Chair for Alzheimer's Disease Research and the study's principal investigator. "To prevent dementias including Alzheimer's, we may need to come up with ways to reseal the blood-brain barrier and prevent the brain from being flooded with toxic chemicals in the blood. Pericytes are the gate-keepers of the blood-brain barrier and may be an important target for prevention of dementia."
Alzheimer's disease is the most common type of dementia, a general term for loss of memory and other mental abilities. According to the Alzheimer's Association, roughly 5.2 million people of all ages in the United States today have Alzheimer's disease, an irreversible, progressive brain disease that causes problems with memory, thinking and behavior. Post-mortem studies of brains with Alzheimer's disease show damage to the blood-brain barrier, a cellular layer that regulates entry of blood and pathogens into the brain. The reasons why and when this damage occurs, however, remain unclear.
In the Neuron study, Zlokovic's research team examined contrast-enhanced brain images from 64 human subjects of various ages and found that early vascular leakage in the normally aging human brain occurs in the hippocampus, which normally shows the highest barrier properties compared to other brain regions. The blood-brain barrier also showed more damage in the hippocampal area among people with dementia than those without dementia, when controlling for age.
To validate the research method, the USC team examined brain scans of young people with multiple sclerosis without cognitive impairment, finding no difference in barrier integrity in the hippocampus between those of the same age with and without the disease. The researchers also looked at the subjects' cerebrospinal fluid (CSF), which flows through the brain and spinal cord. Individuals who showed signs of mild dementia had 30 percent more albumin, a blood protein, in their CSF than age-matched controls, further indicating a leaky blood-brain barrier. The CSF of individuals with dementia also showed a 115 percent increase of a protein related to pericyte injury. Pericytes are cells that surround blood vessels and help maintain the blood-brain barrier; previous research has linked pericytes to dementia and aging.

Pumping carbon dioxide deep underground may trigger earthquakes

A corn processing plant in Illinois

The shaking in the nation’s midsection has been intense enough in the last few years to break chimneys and scatter dishes. Those alarming earthquakes are in places where such things have been about as common (and as welcome) as laughing hyenas. Their cause: injection of watery waste fluids deep underground as part of natural gas and oil retrieval.
This worries some scientists who have high hopes for a way to curb global warming by getting rid of carbon dioxide that comes from, among other things, combustion of coal, gas and oil. These CO2 emissions may be accelerating Earth toward a climate calamity as the land and seas warm and weather zones shift. One promising strategy for curbing climate change is to pump much of the CO2 from fossil fuel-fired power plants into deep underground storage where everybody hopes it will remain for millennia.
But in an ironic symmetry, in which a proposed solution to a problem shares one of its side effects, deep geological storage of CO2 might produce as many quakes as oil- and gas-related wastewater disposal now triggers, or even more, especially if it is performed on the vast scale many hope to see.
To study the basic mechanism involved, scientists are delving deeply into the stresses and strains that have built up over the ages in the Earth’s crust. What they have found is that it is remarkably easy to trigger earthquakes, even in regions that historically have been seismically silent or nearly so.
“We have faults that are accumulating stress over thousands to hundreds of thousands of years, even in Iowa,” says Stanford University geophysicist Mark Zoback. “So when you inject water or gas or any fluid it can set some of them off.”
In the central and eastern United States from 1970 to 2000, geologists recorded a yearly average of only 20 quakes of at least magnitude 3.0 — enough to sway a hanging lamp. Then the count rose: Between 2010 and 2013, 450 such temblors hit — a rate about five times higher than normal for those parts of the country.
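That rate comparison is simple enough to verify. Here is the arithmetic as a quick Python sketch, using only the article's own figures:

    # ~20 quakes of magnitude 3.0+ per year in the central/eastern U.S. (1970-2000)
    # versus 450 such quakes over 2010-2013, i.e. four calendar years.
    baseline_rate = 20                   # quakes per year, 1970-2000 average
    recent_rate = 450 / 4                # ~112 quakes per year, 2010-2013
    print(recent_rate / baseline_rate)   # ~5.6: the article's "about five times higher"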

“There have been hundreds of earthquakes in Oklahoma alone. That state is making magnitude 3s faster than we are in California,” says a leading expert on earthquake mechanics, William Ellsworth, in his office at the U.S. Geological Survey’s regional center in Menlo Park, a few kilometers east of the mighty San Andreas Fault. “It is unprecedented.”
If asked to explain the quake upsurge, many Americans may guess fracking. Not quite, but fracking shares a family resemblance to the prime culprit. Formally called hydraulic fracturing, fracking has allowed drillers to make money off sandstones and shales that had been considered too “tight” for the gas and oil to flow freely into wells (SN: 9/8/12, p. 20). Fracking is an old process that has only recently entered wide use. It drives networks of fissures into this type of formation, one patch at a time. It hammers brief pulses of very high-pressure water or other fluids plus grit and chemicals into the shale so that oil and gas can migrate more easily. Fracking is not seismically silent, but its quivers, mostly hovering around or below magnitude 2.0, are imperceptible at the surface.
The immediate reason for the nerve-rattling quakes of magnitude 4.0 or 5.0: New oil and gas fields have gone into production, often after fracking gets them started. Flowing up through the new wells is more than gas and oil. “Flowback” of fracking fluid often comes up too. Plus, a barrel of crude may reach the surface mixed with more than a barrel of additional undrinkable, very salty water that has accumulated down deep over eons.
Industry’s response for many decades has been to gather the foul liquid from many extraction wells and deliver it to a relatively few high-volume wastewater injection sites: More than 1 million wells nationwide send their wastewater to about 30,000 disposal sites. It is typically pumped nonstop and driven at high pressure into deep aquifers a mile or more down to mix with the saline waters already there. In most cases, the water injection works without incident. But at times, “things have gone off the rails,” Ellsworth says (see “When the Earth moves," below).
For all the huffing and chuffing of heavy equipment for months on end, the energy in these human-made earthquakes does not come from the work being done to shove fluids far underground, Ellsworth says. The potential energy for earthquakes is already down there.
In recent decades geologists have come to suspect that North America’s basement rocks are near the breaking point. Faults, many inactive for thousands to millions of years, lace much of the bedrock and strata all around the globe. In places where such deep, long-immobile faults exist, “it does not take much to set them off,” Ellsworth says.
The zone of unnaturally high pressurization extends well beyond the plume of newly pumped-in fluid. If this high pressure encounters a fault that’s close to failing, even in a layer of rock that is largely impervious to fluid entry, fluid already in the nearby formations may worm its way into the fault zone of crushed rock. It tends to unclamp the fault, lowering its resistance to slippage, perhaps enough for it to yield to the stress it had long withstood.
That is roughly how geologists explain the rise in oil and gas field quakes. They are examples of what is formally called induced seismicity, or quakes triggered by human activity.

Injection site geology

Deep storage test

But residents of Decatur, Ill., are not feeling any perceptible quakes. Decatur, a busy industrial town among the rolling hills and cornfields 180 miles southwest of Chicago, is home to the nation’s biggest test of deep carbon storage. Decatur is considered a safe test site in part because no significant fault is anywhere near it.
The North American headquarters and processing plant of the giant agribusiness company Archer Daniels Midland, or ADM, dominates Decatur’s skyline. Trainloads of corn arrive there to become animal feed, cooking oil, corn syrup, sweeteners and more.
The possible escape hatch from global warming under test in Decatur is found in a row of outdoor fermentation tanks, each more than 15 meters high. They turn 3.3 million metric tons (130 million bushels) of ADM corn into 1.3 billion liters of ethanol for blending with gasoline. Leftover mash becomes food for livestock, the used-up yeast fish food. And out the top comes about 2,700 tons per day of nearly pure CO2.
This plant’s CO2 is not part of the world’s CO2 problem in the same way that CO2 from fossil fuel combustion is. The corn as it grew took carbon from the air, so putting it right back as CO2 balances the carbon ledger. Still, to stash such “biogenic” CO2 away permanently offsets some fossil CO2 emissions and, more importantly, paves the way toward isolating CO2 arising from coal and natural gas combustion.
The ADM plant is the centerpiece of a two-phase experiment. The first, the Illinois Basin–Decatur Project, finished a three-year run in November. It sent about one-third of the fermentation CO2 about 2,100 meters underground into a formation called the Mount Simon Sandstone. Named for a hill near Eau Claire, Wis., where it has an outcrop, the formation underlies most of Illinois and rests directly on the continent’s granite shield or basement rock. Money for the recent Decatur test came largely in a $67 million grant from the Department of Energy. Additional funding is from ADM and the carbon services division of the big oil service contractor Schlumberger.
Under a second DOE grant of $141.4 million plus $66.5 million from ADM and other partners, phase two will drill a second well and triple the amount of CO2 diverted down deep. To begin in 2015, that phase should render the plant’s ethanol works nearly free of CO2 emissions.
It is the first major proof-of-concept in the United States for the “S” part of what’s called Carbon Capture and Sequestration (alternatively, Storage). CCS is a central strategy under consideration by climate policy analysts for curbing global warming. Tests of the more challenging job of taking CO2 from the hot exhausts of gas- and coal-fired power plants are under way elsewhere (SN: 9/6/14, p. 22).
Robert Finley, an Illinois State Geological Survey researcher and director of the first phase of the Decatur project, says it illustrates every step of how to bury CO2 deep underground for thousands of years or more. “This is the real deal,” he says. His agency and the USGS maintain seismographic arrays near the well.
So far, ADM has pumped 1 million tons of CO2 into the ground over three years. That may seem like a lot. But it is nothing compared with what will be needed to dent the planet’s greenhouse gas emissions. A million tons of CO2 is about a fourth what the average coal plant emits yearly. The United States has more than 400 coal plants. To reduce American emissions by a billion tons — about 20 percent of the total from all sources including combustion of coal, natural gas and oil — would take thousands of injection facilities like Decatur’s. Annual worldwide emissions from all sources now run at nearly 40 billion tons. About 5.27 billion come from the United States, and fossil fuel-fired power plants account for about 40 percent of it.
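Those magnitudes are worth a back-of-envelope check. The sketch below re-derives them in Python from the article's own rounded figures:

    # Decatur: ~1 million tons of CO2 injected over three years.
    decatur_rate = 1_000_000 / 3                 # ~333,000 tons per year

    # "A million tons ... about a fourth what the average coal plant emits yearly."
    coal_plant_rate = 4 * 1_000_000              # ~4 million tons per year
    print(f"coal plant: ~{coal_plant_rate:,} tons/year")

    # Cutting U.S. emissions by a billion tons a year with Decatur-scale sites:
    sites_needed = 1_000_000_000 / decatur_rate
    print(f"~{sites_needed:,.0f} sites needed")  # ~3,000: the article's "thousands"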
In October, just before phase one ended, Scott McDonald, ADM’s biofuels development manager, stands in a cornfield near the plant and points to a fenced enclosure. A concrete lid with some electronic boxes on top covers a monitoring well near the injection site. Inside, geophones — which detect seismic vibrations — dangle more than 1.6 kilometers down. The gear allows engineers and earth scientists to monitor the growth of the underground plume of CO2 — now about 402 meters wide. The detectors map the tiny shudders and other signals of high-pressure CO2 squeezing its way into the sandstone. Inside a nearby shed a monitor displays a series of jagged lines. Earthquakes? “No, that’s just normal noise,” McDonald says. “We get hundreds of seismic signals a month, but not so far today. We never feel them.”
The largest yet recorded was about a magnitude 1.0. “To call these events earthquakes is an overstatement,” says geologist Finley.
Inside the plant’s CO2 compression hall, ear protection is a must. Elaborate plumbing splays across the floor and walls. The place looks a little like the engine room of a large ship. Some pipes have the girth of a horse. Two 2,424-kilowatt, four-stage, six-cylinder reciprocating compressors as big as tractor-trailers dominate the room. Large dehydrators remove water vapor from the CO2. The main pumps and an array of smaller compressors and blowers boost the purified CO2 to 9.8 megapascal pressure at a temperature of 35˚ Celsius. Nearby, a larger hall houses twice as much equipment waiting quietly for the second phase expansion due sometime this year.

Such conditions put CO2 in what physicists call a supercritical state. It is neither fully liquid nor gas, but a fluid that flows like a gas with a density about 60 percent that of water. About 90 meters of pipe wind through the plant to the injection well. Topping it is a modest stack of lumpy blue valves and other plumbing about 2.7 meters high with large round handles.
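Why those particular operating numbers? CO2 turns supercritical once both its pressure and temperature pass the critical point, roughly 7.38 megapascals and 31.1° Celsius (standard reference values, not figures from the article). A minimal Python check of the plant's reported conditions:

    # Critical point of CO2 (standard reference values).
    CO2_CRITICAL_P_MPA = 7.38
    CO2_CRITICAL_T_C = 31.1

    def is_supercritical(pressure_mpa, temp_c):
        # True when both pressure and temperature exceed the critical point.
        return pressure_mpa > CO2_CRITICAL_P_MPA and temp_c > CO2_CRITICAL_T_C

    print(is_supercritical(9.8, 35.0))   # True: the compressors' 9.8 MPa at 35 C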
Below the plant, the Illinois Basin’s sediments fill a wide dimple in ancient crystalline granite-rhyolite. The basin spans most of Illinois and much of Indiana to the east and Kentucky to the southeast, plus small parts of Missouri and Tennessee.
As it dives into the basin, the CO2 travels only about 45 meters to reach the deepest freshwater aquifer in the area. Continuing down to 450 meters is a coal seam. At about 600 meters begins the New Albany layer of shale — the first of three caprock layers that geologists say is impervious to any CO2 that might try to work its way back out. A second shale cap, the Maquoketa, lies at around 800 meters. At roughly 900 meters is an oil-bearing shale layer. At 1,500 meters comes the Eau Claire Shale, 150 meters thick and the main seal on the CO2. After that, the well casing enters the 500-meter-thick Mount Simon Sandstone. The final stretch of well casing is 13 percent chrome stainless steel alloy to resist corrosion. A series of perforations let the CO2 out below 2,100 meters. Beneath that the well penetrates the continental basement bedrock of the North American craton, Precambrian granite-rhyolite battered by scars accumulated over billions of years.  
The CO2 arrives with a pressure a few hundred pounds per square inch higher than is natural so deep. It flows down the final, 10-centimeter-diameter stretch of injection pipe at about 1,100 liters per minute. To the uninitiated, such injection may seem impossible. Finley shows off pieces of drill core from the Mount Simon Sandstone. Some are crumbly and look porous enough but many are pinkish hunks of rock that look and feel like concrete sidewalk. How can fluids be forced into such sturdy material for years on end?
The reason is pore space, the interconnected gaps between irregularly shaped mineral grains. The Mount Simon formation is about 22 percent pore space by volume. By one calculation, Mount Simon Sandstone has room for at least several centuries of all CO2 emissions from the Upper Midwest. The USGS estimates that it and similar formations around the nation could handle 500 times America's present annual CO2 emissions.
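To see how porosity translates into capacity, here is a rough illustrative estimate. The porosity (22 percent) and CO2 density (about 60 percent of water's) are the article's figures; the storage-efficiency factor is our assumption, since only a fraction of the pore space can actually be filled:

    # How much supercritical CO2 might a cubic kilometer of Mount Simon hold?
    ROCK_VOLUME_M3 = 1e9      # one cubic kilometer of sandstone
    POROSITY = 0.22           # fraction of the rock that is pore space
    CO2_DENSITY = 600.0       # kg/m^3, ~60 percent of water's density
    EFFICIENCY = 0.05         # assumed fraction of pore space CO2 can occupy

    stored_tons = ROCK_VOLUME_M3 * POROSITY * EFFICIENCY * CO2_DENSITY / 1000
    print(f"~{stored_tons / 1e6:.1f} million tons per cubic km")   # ~6.6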

Hidden faults

Shear forces from the continental margins are wrenching the Illinois Basin, trying to drag the northern parts to the east northeast and the southern ones to the west southwest. Most of the tiny microseismic events seen since the Decatur sequestration pumps turned on tend to move in directions that relieve this stress — even if movements have only been a few fractions of a millimeter along cracks perhaps 10 meters long. Careful advance scrutiny by state and federal researchers revealed no significant faults near Decatur.
Microseismicity under Decatur is mostly at or below the boundary of the continental bedrock that begins 76 meters under the injection zone. More buoyant than water, supercritical CO2 has not moved downward, but the pressure clearly has.
So far, it seems, so good. However, some suspect that CO2 sequestration cannot work. A prime reason is that not every injection site, not for wastewater and presumably not for CO2, can be guaranteed to be far from hidden faults.
Stanford’s Zoback says the standard calculations that have convinced some that immense volumes of CO2 can be buried safely in the pore space of deep formations reflect “science that could be done by a fourth grader. They are leaving out one important fact,” he says. “Those pores are already filled with saline water. Where are you going to put that?”
In a paper he coauthored in the Proceedings of the National Academy of Sciences in 2012 and in testimony before Congress, Zoback offered another, larger reason for doubt. He argues that inevitable quakes from CO2 injection, while fairly small, may well open paths, or fractures, through low-porosity caprock. Movement on faults that extend through caprock, he says, could let most of the stored CO2 escape in coming centuries. That would gradually torpedo the point of injecting it.

He also believes the cost of equipment able to store billions of tons of CO2 will prevent its construction any time soon. “By the time we bury billions of tons of carbon it will be too late for the climate.”  
Jonny Rutqvist of the Lawrence Berkeley National Laboratory in California and colleagues, in a series of recent papers, have modeled multiple scenarios of how sustained, large-scale deep injections of CO2 might work out. While some earthquakes big enough to alarm local people (magnitude 3.0 to 4.5 at most) appear inevitable, very few would be large enough to damage buildings. Most CO2 for injection could be piped to reservoirs far from population centers. Plus, their studies suggest that if such reservoirs are under several independent layers of impervious caprock, very few if any quakes would open paths for significant CO2 to escape.
It will take a long time, he says, to get a full-scale global sequestration program going, plenty of time for a “learn-as-you-go” approach that could be modified or even abandoned if necessary.
If technical barriers fall and if restrictive regulations or government incentives change and give companies, including operators of gas- and coal-fired power plants, a good business reason to install carbon capture and sequestration equipment, then perhaps CO2 sequestration will become an immense industry in its own right. And perhaps the occasional rumble underfoot may, aside from rattling nerves, be a reassuring sign that humankind is sending fossil carbon in CO2 waste back underground where we found it.
Charles Petit is a freelance science writer based in Northern California.

When the Earth moves

The oil and gas industry operates more than 30,000 deep disposal wells injecting wastewater far below freshwater sources. In addition, many oil drillers increase yield by pumping in high-pressure, near-liquid CO2 that mixes with oil to make the oil flow better. Most of the time it works as planned. But problems can crop up.

Wastewater injection

Rocky Mountain Arsenal, Colo. | 1962–1966
A 3.7-kilometer-deep waste disposal well drilled under U.S. Army supervision was followed by a rising tempo of earthquakes through the mid-1960s; a magnitude 5.0 quake in 1967 broke windows at the arsenal and closed schools in Denver; a magnitude 5.3 quake in August 1967 caused $500,000 in damage in Denver. Once water injection stopped, seismic activity gradually subsided. The episode, followed by experiments at a Chevron oil field in Colorado where scientists could turn quake activity up and down by changing pressure in injection wells, established scientifically that deep fluid injection can trigger earthquakes.
Raton Basin | 2001–Present
Sixteen magnitude 3.8 and greater quakes, including a 4.6 quake followed by a 5.3 six hours later in August 2011, struck after wastewater injections began in oil and gas fields of southern Colorado and New Mexico. Wells in the area inject unusually large amounts of wastewater; two wells about two kilometers from the source of the largest quakes were pumping many thousands of barrels of wastewater per month into underlying reservoirs.
Dallas-Fort Worth, Texas | October 2008–May 2009
A storm of small quakes, mostly of magnitude 3.3 or less, quivered through the region. They all had their origin about 400 meters from a well that started pumping brine into deep formations several weeks before the quakes began.
Prague, Okla. | November 6, 2011
The largest earthquake in the state’s history, a magnitude 5.7 on the Wilzetta Fault, destroyed 14 houses and injured two people. A 2014 report from the U.S. Geological Survey pinned it on a long history of wastewater injection in the area, saying a smaller magnitude 5.0 quake very close to the injection wells transferred stress to the larger fault, causing its failure.

Youngstown, Ohio | December 31, 2011
A magnitude 4.0 earthquake, the first-ever recorded for the community, struck near an injection well shortly after it began pumping.

CO2 injection for enhanced oil recovery

Cogdell Oil Field, near Snyder, Texas | 2006–2011
This area has seen flurries of small earthquakes blamed on wastewater injection. However, a distinct series of more than 90 quakes, 18 with magnitude 3.0 or above, including a 4.4, stands out. It began within two years of the start of deep injection of large volumes of supercritical CO2 and methane to stimulate oil flow. The link to CO2 — or possibly methane — presents a geology puzzle. Similar gas injection in nearby, seemingly identical oil fields was followed by no quake increase. — Charles Petit

12 reasons research goes wrong


Barriers to research replication are based largely in a scientific culture that pits researchers against each other in competition for scarce resources. Any or all of the factors below, plus others, may combine to skew results.

Pressure to publish

Research funds are tighter than ever and good positions are hard to come by. To get grants and jobs, scientists need to publish, preferably in big-name journals. That pressure may lead researchers to publish many low-quality studies instead of aiming for a smaller number of well-done studies. To convince administrators and grant reviewers of the worthiness of their work, scientists have to be cheerleaders for their research; they may not be as critical of their results as they should be.

Impact factor mania

For scientists, publishing in a top journal — such as Nature, Science or Cell — with high citation rates or “impact factors” is like winning a medal. Universities and funding agencies award jobs and money disproportionately to researchers who publish in these journals. Many researchers say the science in those journals isn’t better than studies published elsewhere, it’s just splashier and tends not to reflect the messy reality of real-world data. Mania linked to publishing in high-impact journals may encourage researchers to do just about anything to publish there, sacrificing the quality of their science as a result.

Tainted cultures

Experiments can get contaminated and cells and animals may not be as advertised. In hundreds of instances since the 1960s, researchers misidentified cells they were working with. Contamination led to the erroneous report that the XMRV virus causes chronic fatigue syndrome, and a recent report suggests that bacterial DNA in lab reagents can interfere with microbiome studies.

Bad math

Do the wrong kinds of statistical analyses and results may be skewed. Some researchers accuse colleagues of “p-hacking,” massaging data to achieve particular statistical criteria. Small sample sizes and improper randomization of subjects or “blinding” of the researchers can also lead to statistical errors. Data-heavy studies require multiple convoluted steps to analyze, with lots of opportunity for error. Researchers can often find patterns in their mounds of data that have no biological meaning.
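One of those traps is easy to demonstrate. The Python sketch below simulates a crude form of p-hacking: run enough tests on pure noise and some will cross the p < 0.05 line. The experimental setup here is ours, purely for illustration:

    import random
    random.seed(1)

    def noise_only_test(n=50):
        # Compare two groups drawn from the SAME distribution; return True
        # if a naive z-test on the difference of means "finds" an effect.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        diff = sum(a) / n - sum(b) / n
        se = (2 / n) ** 0.5                  # std. error of the difference (sigma = 1)
        return abs(diff / se) > 1.96         # |z| > 1.96 is roughly p < 0.05

    # A "study" that quietly tries 20 noise-only comparisons and reports any hit:
    studies = 2000
    hits = sum(any(noise_only_test() for _ in range(20)) for _ in range(studies))
    print(f"{hits / studies:.0%} of studies find an 'effect'")   # ~64%, i.e. 1 - 0.95**20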

Sins of omission

To thwart their competition, some scientists may leave out important details. One study found that 54 percent of research papers fail to properly identify resources, such as the strain of animals or types of reagents or antibodies used in the experiments. Intentional or not, the result is the same: Other researchers can’t replicate the results.

Biology is messy

Variability among and between people, animals and cells means that researchers never get exactly the same answer twice. Unknown variables abound and make replicating in the life and social sciences extremely difficult.

Peer review doesn’t work

Peer reviewers are experts in their field who evaluate research manuscripts and determine whether the science is strong enough to be published in a journal. A sting conducted by Science found some journals that don’t bother with peer review, or use a rubber stamp review process. Another study found that peer reviewers aren’t very good at spotting errors in papers. A high-profile case of misconduct concerning stem cells revealed that even when reviewers do spot fatal flaws, journals sometimes ignore the recommendations and publish anyway (SN: 12/27/14, p. 25).

Some scientists don’t share

Collecting data is hard work and some scientists see a competitive advantage to not sharing their raw data. But selfishness also makes it impossible to replicate many analyses, especially those involving expensive clinical trials or massive amounts of data.

Research never reported

Journals want new findings, not repeats or second-place finishers. That gives researchers little incentive to check previously published work or to try to publish those findings if they do. False findings go unchallenged and negative results — ones that show no evidence to support the scientist’s hypothesis — are rarely published. Some people fear that scientists may leave out important, correct results that don’t fit a given hypothesis and publish only experiments that do.

Poor training produces sloppy scientists

Some researchers complain that young scientists aren’t getting proper training to conduct rigorous work and to critically evaluate their own and others’ studies.

Mistakes happen

Scientists are human, and therefore, fallible. Of 423 papers retracted due to error between 1979 and 2011, more than half were pulled because of honest mistakes, such as measuring a drug incorrectly.

Fraud

Researchers who make up data or manipulate it produce results no one can replicate. However, fraud is responsible for only a tiny fraction of results that can’t be replicated. 

Is redoing scientific research the best way to find truth?


R. Allan Mufson remembers the alarming letters from physicians. They were testing a drug intended to help cancer patients by boosting levels of oxygen-carrying hemoglobin in their blood.
In animal studies and early clinical trials, the drug known as Epo (for erythropoietin) appeared to counteract anemia caused by radiation and chemotherapy. It had the potential to spare patients from the need for blood transfusions. Researchers also had evidence that Epo might increase radiation’s tumor-killing power.
But when doctors started giving Epo or related drugs, called erythropoietic-stimulating agents, to large numbers of cancer patients in clinical trials, it looked like deaths increased. Physicians were concerned, and some stopped their studies early.
At the same time, laboratory researchers were collecting evidence that Epo might be feeding rather than fighting tumors. When other scientists, particularly researchers who worked for the company that made the drug, tried to replicate the original findings, they couldn’t.
Scientists should be able to say whether Epo is good or bad for cancer patients, but seven years later, they still can’t. The Epo debate highlights deeper trouble in the life sciences and social sciences, two fields where it appears particularly hard to replicate research findings. Replicability is a cornerstone of science, but too many studies are failing the test.
“There’s a community sense that this is a growing problem,” says Lawrence Tabak, deputy director of the National Institutes of Health. Early last year, NIH joined the chorus of researchers drawing attention to the problem, and the agency issued a plan and a call to action.
Unprecedented funding challenges have put scientists under extreme pressure to publish quickly and often. Those pressures may lead researchers to publish results before proper vetting or to keep hush about experiments that didn’t pan out. At the same time, journals have pared down the section in a published paper devoted to describing a study’s methods: “In some journals it’s really a methods tweet,” Tabak says. Scientists are less certain than ever that what they read in journals is true.
Many people say one solution to the problem is to have independent labs replicate key studies to validate their findings. The hope is to identify where and why things go wrong. Armed with that knowledge, the replicators think they can improve the reliability of published reports.
Others call that quest futile, saying it’s difficult — if not impossible — to redo a study exactly, especially when working with highly variable subjects, such as people, animals or cells. Repeating published work wastes time and money, the critics say, and it does nothing to advance knowledge. They’d prefer to see questions approached with a variety of different methods. It’s the general patterns and basic principles — the reproducibility of a finding, not the precise replication of a specific experiment — that really matter.
It seems that everyone has an opinion about the underlying causes leading to irreproducibility, and many have offered solutions. But no one really knows entirely what is wrong or if any of the proffered fixes will work.
Much of the controversy has centered on the types of statistical analyses used in most scientific studies, and hardly anyone disputes that the math is a major tripping point. An influential 2005 paper looking at the statistical weakness of scientific studies generated much of the self-reflection taking place within the medical community over the last decade. While those issues still exist, especially as more complex analyses are applied to big data studies, there remain deeper problems that may be harder to fix.

Taking sides

Epo researchers weren’t the first to find discrepancies in their results, but their experience set the stage for much of the current controversy.


Mufson, head of the National Cancer Institute’s Cancer Immunology and Hematology Etiology Branch, organized a two-day workshop in 2007 where academic, government and pharmaceutical company scientists, clinicians and patient advocates discussed the Epo findings.
A divide quickly emerged between pharmaceutical researchers and scientists from academic labs, says Charles Bennett, an oncologist at the University of South Carolina.
Bennett was part of a team that had reported in 2005 that erythropoietin reduced the need for blood transfusions and possibly improved survival among cancer patients. But he came to the meeting armed with very different data. He and colleagues found that erythropoietin and darbepoetin used to treat anemia in cancer patients raised the risk of blood clots by 57 percent and the risk of dying by about 10 percent. Others found that people with breast or head and neck cancers died sooner than other cancer patients if they took Epo.
Those who argued that Epo was harmful to patients cited cellular mechanisms: tumor cells make more Epo receptors than other cells. More receptors, the researchers feared, meant the drug was stimulating growth of the cancer cells, a finding that might explain why patients were dying.
Company scientists from Amgen, which makes Epo drugs, charged that they had tried and could not replicate the results published by the academic researchers. After listening to the researchers hash through data for two days, Bennett could see why there was conflict. The company and academic scientists couldn’t even agree on what constituted growth of tumor cells, or on the correct tools for detecting Epo receptors on tumor cells, he says. That disconnect meant neither side would be able to confirm the other’s findings, nor could they completely discount the results. The meeting ended with a list of concerns and direction for future studies, but little consensus.
“I went in thinking it was black and white,” Bennett says. “Now, I’m very much convinced it’s a gray answer and everybody’s right.”
From there, pressure continued to build. In 2012, Amgen caused shock waves by reporting that it could independently confirm only six of 53 “landmark” papers on preclinical cancer studies. Replicating results is one of the first steps companies take before investing in further development of a drug. Amgen will not disclose how it conducted the replication experiments or even which studies it tried to replicate. Bennett suspects the controversial Epo experiments were among the chosen studies, perhaps tinting the results.

Amgen’s revelation came on the heels of a similar report from the pharmaceutical company Bayer. In 2011, three Bayer researchers reported in Nature Reviews Drug Discovery that company scientists could fully replicate only about 20 to 25 percent of published preclinical cancer, cardiovascular and women’s health studies. Like Amgen, Bayer did not say which studies it attempted to replicate. But those inconsistencies could mean the company would have to drop projects or expend more resources to validate the original reports.
Scientists were already uneasy because of a well-known 2005 essay by epidemiologist John Ioannidis, now at Stanford University. He had used statistical arguments to contend that most research findings are false. Faulty statistics often indicate a finding is true when it is not. Those falsely positive results usually don’t replicate.
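The core of that argument is a short calculation about false positives (our rendering in standard notation, not a quotation from the essay). If π is the prior probability that a tested hypothesis is true, 1 − β the study's power and α the false-positive rate, then the chance that a claimed finding is actually true, its positive predictive value, is

    \[ \mathrm{PPV} = \frac{(1-\beta)\,\pi}{(1-\beta)\,\pi + \alpha\,(1-\pi)} \]

With illustrative numbers of our choosing (say, an exploratory field where π = 0.1, power 0.8 and α = 0.05), the PPV is 0.08 / (0.08 + 0.045) ≈ 0.64; factor in bias and chronically underpowered designs and it sinks below one-half, which is the sense in which most published findings can be false.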
Academic scientists have had no easier time than drug companies in replicating others’ results. Researchers at MD Anderson Cancer Center in Houston surveyed their colleagues about whether they had ever had difficulty replicating findings from published papers. More than half, 54.6 percent, of the 434 respondents said that they had, the survey team reported in PLOS ONE in 2013. Only a third of those people were able to correct the discrepancy or explain why they got different answers.
“Those kinds of studies are sort of shocking and worrying,” says Elizabeth Iorns, a biologist at the University of Miami in Florida and chief executive officer for Science Exchange, a network of labs that attempt to independently validate research results.
Over the long term, science is a self-correcting process and will sort itself out, Tabak and NIH director Francis Collins wrote last January in Nature. “In the shorter term, however, the checks and balances that once ensured scientific fidelity have been hobbled. This has compromised the ability of today’s researchers to reproduce others’ findings,” Tabak and Collins wrote. Myriad reasons for the failure to reproduce have been given, many involving the culture of science. Fixing the problem is going to require a more sophisticated understanding of what’s actually wrong, Ioannidis and others argue.
Two schools of thought

Researchers don’t even agree on whether it is necessary to duplicate studies exactly, or to validate the underlying principles, says Giovanni Parmigiani, a statistician at the Dana-Farber Cancer Institute in Boston. Scientists have two schools of thought about verifying someone else’s results: replication and reproducibility. The replication school teaches that researchers should retrace all of the steps in a study from data generation through the final analysis to see if the same answer emerges. If a study is true and right, it should.
Proponents of the other school, reproducibility, contend that complete duplication only demonstrates whether a phenomenon occurs under the exact conditions of the experiment. Obtaining consistent results across studies using different methods or groups of people or animals is a more reliable gauge of biological meaningfulness, the reproducibility school teaches. To add to the confusion, some scientists reverse the labels.
Timothy Wilson, a social psychologist at the University of Virginia in Charlottesville, is in the reproducibility camp. He would prefer that studies extend the original findings, perhaps modifying variables to learn more about the underlying principles. “Let’s try to discover something,” he says. “This is the way science marches forward. It’s slow and messy, but it works.”
But Iorns and Brian Nosek, a psychologist and one of Wilson’s colleagues at the University of Virginia, are among those who think exact duplication can move research in the right direction.
In 2013, Nosek and his former student Jeffrey Spies cofounded the Center for Open Science, with the lofty goal “to increase openness, integrity and reproducibility of scientific research.” Their approach was twofold: provide infrastructure to allow scientists to more easily and openly share data and conduct research projects to repeat studies in various disciplines in science.
Soon, Nosek and Iorns’ Science Exchange teamed up to replicate 50 of the most important (defined as highly cited) cancer studies published between 2010 and 2012. On December 10, 2014, the Reproducibility Project: Cancer Biology kicked off when three groups announced in the journal eLife their intention to replicate key experiments from previous studies and shared their plans for how to do it.
Iorns, Nosek and their collaborators hope the effort will give scientists a better idea of the reliability of these studies. If the replication efforts fail, the researchers want to know why. It’s possible that the underlying biology is sound, but that some technical glitch prevents successful replication of the results. Or the researchers may have been barking up the wrong tree. Most likely the real answer is somewhere in the middle.
Neuroscience researchers realized the value of duplicating studies with therapeutic potential early on. In 2003, the National Institute of Neurological Disorders and Stroke contracted labs to redo some important spinal cord injury studies that showed promise for helping patients. Neuroscientist Oswald Steward of the University of California, Irvine School of Medicine heads one of the contract labs.

Parmigiani says both paths toward truth are important. At any rate, researchers will never get science completely right, he contends. “We’re always in a gray area between perfect truth and complete falsehood,” he says. The best researchers can do is edge closer to truth.
Of the 12 studies Steward and colleagues tried to copy, they could fully replicate only one, and only after the researchers determined that the ability of a drug to limit hemorrhaging and nerve degeneration near an injury depended upon the exact mechanism that produced the injury. Half the studies could not be replicated at all and the rest were partially replicated, or produced mixed or less robust results than the originals, according to a 2012 report in Experimental Neurology.
Notably, the researchers cited 11 reasons that might account for why previous studies failed to replicate; only one was that the original study was wrong. Exact duplications of original studies are impossible, Steward and his colleagues contend.

Acts of aggression

Before looking at cancer studies, Nosek investigated his own field with a large collaborative research project. In a special issue of Social Psychology published last April, he and other researchers reported results of 15 replication studies testing 26 psychological phenomena. Of 26 original observations tested, they could only replicate 10. That doesn’t mean the rest failed entirely; several of the replication studies got similar or mixed results to the original, but they couldn’t qualify as a success because they didn’t pass statistical tests.
Simone Schnall conducted one of the studies that other researchers claimed they could not replicate. Schnall, a social psychologist at the University of Cambridge, studies how emotions affect judgment.
She has found that making people sit at a sticky, filthy desk or showing them revolting movie scenes not only disgusts them, it makes their moral judgment harsher. In 2008, Schnall and colleagues examined disgust’s flip side, cleanliness, and found that hand washing made people’s moral judgments less harsh.
M. Brent Donnellan, one of the researchers who attempted to replicate Schnall’s 2008 findings, blogged before the replication study was published that his group made two unsuccessful attempts to duplicate Schnall’s original findings. “We gave it our best shot and pretty much encountered an epic fail as my 10-year-old would say,” he wrote. When Schnall and others complained that the comments were unprofessional and pointed out several possible reasons the study failed to replicate, Donnellan, a psychologist at Texas A&M University in College Station, apologized for the remark, calling it “ill-advised.”
Schnall’s criticism set off a flurry of negative remarks from some researchers, while others leapt to her defense. The most vociferous of her champions have called replicators “bullies,” “second-stringers” and worse. The experience, Schnall said, has damaged her reputation and affected her ability to get funding; when decision makers hear about the failed replication they suspect she did something wrong.
“Somehow failure to replicate is viewed as more informative than the original studies,” says Wilson. In Schnall’s case, “For all we know it was an epic fail on the replicators’ part.”
The scientific community needs to realize that it is difficult to replicate a study, says Ioannidis. “People should not be shamed,” he says. Every geneticist, himself included, has published studies purporting to find genetic causes of disease that turned out to be wrong, he says.
Iorns is not out to stigmatize anyone, she says. “We don’t want people to feel like we’re policing them or coming after them.” She aims to improve the quality of science and scientists. Researchers should be rewarded for producing consistently reproducible results, she says. “Ultimately it should be the major criteria by which scientists are assessed. What could be more important?”

Variable soup

Much of the discussion of replicability has centered on social and cultural factors that contribute to publication of irreplicable results, but no one has really been discussing the mechanisms that may lead replication efforts to fail, says Benjamin Djulbegovic, a clinical researcher at the University of South Florida in Tampa. He and long-time collaborator mathematician Iztok Hozo of Indiana University Northwest in Gary have been mulling over the question for years, Djulbegovic says.
They were inspired by the “butterfly effect,” an illustration of chaos theory that one small action can have major repercussions later. The classic example holds that a butterfly flapping its wings in Brazil can brew a tornado in Texas. Djulbegovic hit on the idea that there’s chaos at work in most biology and psychology studies as well.
Changing even a few of the original conditions of an experiment can have a butterfly effect on the outcome of replication attempts, he and Hozo reported in June in Acta Informatica Medica. The two researchers considered a simplified case in which 12 factors may affect a doctor’s decision on how to treat a patient. The researchers focused on clinical decision making, but the concept is applicable to other areas of science, Djulbegovic says. Most of the factors, such as the decision maker’s time pressure (yes or no) or cultural factors (present or not important), have two possible starting places. The doctor’s individual characteristics — age (old or young), gender (male or female) — could have four combinations, and the decision maker’s personality had five different modes. All together, those dozen initial factors make up 20,480 combinations that could represent the initial conditions of the experiment.
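The combinatorics behind that 20,480 can be checked in a few lines (a quick sketch; the grouping of the factors is our reading of the article's description):

    # Ten two-way factors, the four age-gender combinations,
    # and five personality modes: 2**10 * 4 * 5 = 20,480.
    two_way = 2 ** 10      # e.g., time pressure yes/no, cultural factors present/not
    age_gender = 2 * 2     # old/young x male/female
    personality = 5        # five personality modes
    print(two_way * age_gender * personality)   # 20480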
That didn’t even include variables about where exams took place, the conditions affecting study participants (Were they tired? Had they recently fought with a loved one or indulged in alcohol?), or the handling of biological samples that might affect the diagnostic test results. Researchers have good and bad days too. “You may interview people in the morning, but nobody controls how well you slept last night or how long it took you to drive to work,” Djulbegovic says. Those invisible variables may very well influence the medical decisions made that day and therefore affect the study’s outcome.
Djulbegovic and Hozo varied some of the initial conditions in computer simulations. If initial conditions varied between experiments by 2.5 conditions or less, the results were highly consistent, or replicable. But changing 3.5 to four initial factors gave answers all over the map, indicating that very slight changes in initial conditions can render experiments irreproducible.
The study is not rigorous mathematical proof, Djulbegovic says. “We just sort of put some thoughts in writing.”

Balancing act

Failed replications may offer scientists some valuable insights, Steward says. “We need to recognize that many results won’t make it through the translational grist mill.” In other words, a therapy that shows promise under specific conditions, but can’t be replicated in other labs, is not ready to be tried in humans.
“In many cases there’s a biological story there, but it’s a fragile one,” Steward says. “If it’s fragile, it’s not translatable.”
That doesn’t negate the original findings, he says. “It changes our perspective on what it takes to get a translatable result.”
Nosek, proponent of replication that he is, admits that scientists need room for error. Requiring absolute replicability could discourage researchers from ever taking a chance, producing only the tiniest of incremental advances, he says. Science needs both crazy ideas and careful research to succeed.
“It’s totally OK that you have this outrageous attempt that fails,” Nosek says. After all, “Einstein was wrong about a lot of stuff. Newton. Thomas Edison. They had plenty of failures, too.” But science can’t survive on bold audacity alone, either. “We need a balance of innovation and verification,” Nosek says.
How best to achieve that balance is anybody’s guess. In their January 2014 paper, Collins and Tabak reviewed NIH’s plan, which includes training modules for teaching early-career scientists the proper way to do research, standards for committees reviewing research proposals and an emphasis on data sharing. But the funding agency can’t change things alone.
In November, in response to the NIH call to action, more than 30 major journals announced that they had adopted a set of guidelines for reporting results of preclinical studies. Those guidelines include calls for more rigorous statistical analyses, detailed reporting on how the studies were done, and a strong recommendation that all datasets be made available upon request.
Ioannidis offered his own suggestions in the October PLOS Medicine. “We need better science on the way science is done,” he says. He helped start the Meta-Research Innovation Center at Stanford to conduct research on research and figure out how to improve it.
In the decade since he published his assertion that most research findings are false, Ioannidis has seen change. “We’re doing better, but the challenges are even bigger than they were 10 years ago,” he says.
He is reluctant to put a number on science’s reliability as a whole, though. “If I said 55 to 65 percent [of results] are not replicable, it would not do justice to the fact that some types of scientific results are 99 percent likely to be true.”
Science is not irrevocably broken, he asserts. It just needs some improvements.
“Despite the fact that I’ve published papers with pretty depressive titles, I’m actually an optimist,” Ioannidis says. “I find no other investment of a society that is better placed than science.”