
scientificamerican.com
Einstein's Greatest Theory Validated on a Galactic Scale
Maya Miller

Three years ago astrophysicist Tom Collett set out to test a theory. Not just any
theory, but one that sets scientists’ expectations for how the universe operates at
large: Einstein’s general relativity. First published in 1915, the theory
mathematically describes how gravity emerges from the fundamental geometry of space
and time, or spacetime, as physicists call it. It postulates that dense objects,
such as Earth and the sun, create valleylike dips in spacetime that manifest as
gravity—the force that binds together a galaxy’s swirling stars, places planets
around suns and, on Earth (or any other planet), keeps your feet on the ground.

Einstein’s equations underpin a host of real-world applications such as the global
positioning satellites that make precise navigation and split-second financial
transactions possible around the planet. They also elucidate several otherwise-
inexplicable phenomena, including Mercury’s oddball orbit, as well as predict new
ones, such as gravitational waves—ripples in spacetime that were only directly
observed a century after general relativity’s debut. In test after test, whether
here on Earth or in observations of the distant universe, the theory has emerged
unscathed—a success so stunningly unshakeable it draws a certain breed of
scientists like moths to a flame—each seeking to reveal cracks in Einstein’s
edifice that could lead to the next breakthrough in physics.

Collett, a research fellow at the University of Portsmouth in England, is among
them. “General relativity is so fundamental to the assumptions we make in our
interpretation of cosmological and astrophysical data sets that we’d better be sure
it’s right,” he says. With that mind-set, in 2015 Collett partnered with nine
colleagues to perform the most sensitive experiment yet to test whether Einstein’s
famed theory holds up at the scale of an entire galaxy. Their results, published
June 21 in Science, reiterate that Einstein’s theory still reigns supreme.

Even so, it is no secret general relativity is in some respects incomplete, and
perhaps even fundamentally flawed: It cannot, for instance, explain conditions
inside a black hole or during the first instants of the big bang. The theory also
has a complicated relationship with a fundamental tenet of modern astronomy and
cosmology—the notion that the universe is suffused with dark matter, a mysterious,
invisible substance that only interacts with normal matter through gravity.

Astronomers found the first hints of dark matter in the 1930s, spying stars
whipping around other galaxies so fast that they should have spun off into
intergalactic space; some hidden gravitational hand—dark matter—must be holding
them in place. That, or general relativity’s accounting of gravity somehow breaks
down at galactic scales, leading to theory-defying stellar motions at a galaxy’s
outskirts. Similarly, in 1998 cosmologists found evidence the universe is expanding
faster than expected, driven by an even more mysterious dark energy. In 2011 that
research netted a Nobel Prize, but its validity hinges on general relativity being
the correct description of gravity at cosmological scales.

For their test, Collett and his collaborators focused on two galaxies in a
coincidental celestial alignment, with one directly in front of the other along an
Earthbound observer’s line of sight. In keeping with general relativity, the
“foreground” galaxy’s great bulk warps the surrounding fabric of spacetime, forming
a “gravitational lens” that distorts and magnifies the far-distant background
galaxy’s light. Precisely measure those distortions, and you gain a good sense of
how much mass the foreground galaxy should contain according to general relativity.
Collett’s key insight was this estimate could be readily double-checked by
monitoring the motions of stars in the foreground galaxy, yielding an independent
mass measurement. Although hundreds of galactic gravitational lenses are known,
only a few are sufficiently close by to allow their individual stars to be seen. At
just 450 million light-years away—a relative stone’s throw in cosmological terms—
this particular foreground galaxy was an ideal candidate for such observations,
Collett says. “The original Eureka! moment was when I realized, we can measure
this,” he says.

Those measurements required marshaling the combined power of the world’s two most
advanced optical instruments—NASA’s Hubble Space Telescope in low Earth orbit and
the European Southern Observatory’s Very Large Telescope (VLT) in the Chilean
Andes. Collett’s team used Hubble to measure the foreground galaxy’s mass via
gravitational lensing and the VLT to measure its mass via the speeds of stars
twirling around its edges. After carefully analyzing and comparing the data, they
found a striking agreement between these independent mass measurements. With an
error margin of just 9 percent, the experiment’s findings constitute the most
precise test of general relativity beyond our solar system to date. The results also
indirectly support the theory’s validity in the face of dark matter, dark energy
and other cosmological curveballs.

“This is one more feather in the cap for general relativity, that’s true—but it’s
not true that this really disfavors any other theory” besides a narrow subset of
alternative explanations for gravity, says Stacy McGaugh, an astrophysicist at Case
Western Reserve University who was not associated with the study. The real value of
this new result, he says, comes from its unprecedented scale and precision, which
are of relevance regardless of what one’s preferred theory might be. McGaugh
should know—he is one of the more open-minded researchers when it comes to
alternatives to dark matter and general relativity’s description of gravity. He
studies a class of dim, diffuse galaxies that appear to defy some tenets of those
theories. “This is another test that any theory you want to build has to satisfy,”
he says.

For now, says Tommaso Treu, an expert in gravitational lensing at the University of
California, Los Angeles, who is unaffiliated with Collett’s study, any scientists
struggling to overturn the unfinished revolution that Einstein began in 1915 must
remember that dismissing a time-tested, century-old theory would be an
extraordinary achievement requiring equally extraordinary evidence. “Everyone would
love to prove Einstein wrong,” Treu says. “There is no better way to be famous.”

That sentiment rings true for Collett, who says he had hoped the results would
diverge from expectations set by general relativity. To that end, he is already
working on a follow-up experiment, one using a different gravitational lens
slightly farther away from Earth to test general relativity all over again.

“Overturning the consensus is usually very, very difficult at first—but usually
pays off greatly,” Treu says.

Biases Make People Vulnerable to Misinformation Spread by Social Media
Giovanni Luca Ciampaglia, Filippo Menczer, The Conversation US

The following essay is reprinted with permission from The Conversation, an online
publication covering the latest research.

Social media are among the primary sources of news in the U.S. and across the
world. Yet users are exposed to content of questionable accuracy, including
conspiracy theories, clickbait, hyperpartisan content, pseudoscience and even
fabricated “fake news” reports.

It’s not surprising that there’s so much disinformation published: Spam and online
fraud are lucrative for criminals, and government and political propaganda yield
both partisan and financial benefits. But the fact that low-credibility content
spreads so quickly and easily suggests that people and the algorithms behind social
media platforms are vulnerable to manipulation.


Our research has identified three types of bias that make the social media
ecosystem vulnerable to both intentional and accidental misinformation. That is why
our Observatory on Social Media at Indiana University is building tools to help
people become aware of these biases and protect themselves from outside influences
designed to exploit them.
Bias in the brain

Cognitive biases originate in the way the brain processes the information that
every person encounters every day. The brain can deal with only a finite amount of
information, and too many incoming stimuli can cause information overload. That in
itself has serious implications for the quality of information on social media. We
have found that steep competition for users’ limited attention means that some
ideas go viral despite their low quality—even when people prefer to share high-
quality content.

To avoid getting overwhelmed, the brain uses a number of tricks. These methods are
usually effective, but may also become biases when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story
that appears on their social media feed. People are very affected by the emotional
connotations of a headline, even though that’s not a good indicator of an article’s
accuracy. Much more important is who wrote the piece.

To counter this bias, and help people pay more attention to the source of a claim
before sharing it, we developed Fakey, a mobile news literacy game (free on Android
and iOS) simulating a typical social media news feed, with a mix of news articles
from mainstream and low-credibility sources. Players get more points for sharing
news from reliable sources and flagging suspicious content for fact-checking. In
the process, they learn to recognize signals of source credibility, such as
hyperpartisan claims and emotionally charged headlines.
Bias in society

Another source of bias comes from society. When people connect directly with their
peers, the social biases that guide their selection of friends come to influence
the information they see.

In fact, in our research we have found that it is possible to determine the
political leanings of a Twitter user by simply looking at the partisan preferences
of their friends. Our analysis of the structure of these partisan communication
networks found social networks are particularly efficient at disseminating
information – accurate or not – when they are closely tied together and
disconnected from other parts of society.

The tendency to evaluate information more favorably if it comes from within their
own social circles creates “echo chambers” that are ripe for manipulation, either
consciously or unintentionally. This helps explain why so many online conversations
devolve into “us versus them” confrontations.

To study how the structure of online social networks makes users vulnerable to
disinformation, we built Hoaxy, a system that tracks and visualizes the spread of
content from low-credibility sources, and how it competes with fact-checking
content. Our analysis of the data collected by Hoaxy during the 2016 U.S.
presidential elections shows that Twitter accounts that shared misinformation were
almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very
dense core group of accounts retweeting each other almost exclusively – including
several bots. The only times that fact-checking organizations were ever quoted or
mentioned by the users in the misinformed group were when questioning their
legitimacy or claiming the opposite of what they wrote.
Bias in the machine

The third group of biases arises directly from the algorithms used to determine
what people see online. Both social media platforms and search engines employ them.
These personalization technologies are designed to select only the most engaging
and relevant content for each individual user. But in doing so, they may end up
reinforcing the cognitive and social biases of users, thus making them even more
vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms
let disinformation campaigners exploit confirmation bias by tailoring messages to
people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source,
Facebook will tend to show that person more of that site’s content. This so-called
“filter bubble” effect may isolate people from diverse perspectives, strengthening
confirmation bias.

Our own research shows that social media platforms expose users to a less diverse
set of sources than do non-social media sites like Wikipedia. Because this is at
the level of a whole platform, not of a single user, we call this the homogeneity
bias.

Another important ingredient of social media is information that is trending on the
platform, according to what is getting the most clicks. We call this popularity
bias, because we have found that an algorithm designed to promote popular content
may negatively affect the overall quality of information on the platform. This also
feeds into existing cognitive bias, reinforcing what appears to be popular
irrespective of its quality.

All these algorithmic biases can be manipulated by social bots, computer programs
that interact with humans through social media accounts. Most social bots, like
Twitter’s Big Ben, are harmless. However, some conceal their real nature and are
used for malicious intents, such as boosting disinformation or falsely creating the
appearance of a grassroots movement, also called “astroturfing.” We found evidence
of this type of manipulation in the run-up to the 2010 U.S. midterm election.

To study these manipulation strategies, we developed a tool to detect social bots
called Botometer. Botometer uses machine learning to detect bot accounts by
inspecting thousands of different features of Twitter accounts, such as the times of
their posts, how often they tweet, and the accounts they follow and retweet. It is not
perfect, but it has revealed that as many as 15 percent of Twitter accounts show
signs of being bots.

Using Botometer in conjunction with Hoaxy, we analyzed the core of the
misinformation network during the 2016 U.S. presidential campaign. We found many
bots exploiting both the cognitive, confirmation and popularity biases of their
victims and Twitter’s algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding
them false claims and misinformation. First, they can attract the attention of
human users who support a particular candidate by tweeting that candidate’s
hashtags or by mentioning and retweeting the person. Then the bots can amplify
false claims smearing opponents by retweeting articles from low-credibility sources
that match certain keywords. This activity also makes the algorithm highlight for
other users false stories that are being shared widely.
Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even
entire societies can be manipulated on social media, there are many questions left
to answer. It’s especially important to discover how these different biases
interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and
therefore some degree of protection from its harms. The solutions will not likely
be only technological, though there will probably be some technical aspects to
them. But they must take into account the cognitive and social aspects of the
problem.

This article was originally published on The Conversation. Read the original
article.

Giovanni Luca Ciampaglia

Assistant Research Scientist, Indiana University Network Science Institute, Indiana
University.

Filippo Menczer

Professor of Computer Science and Informatics; Director of the Center for Complex
Networks and Systems Research, Indiana University.

The Healthy Addiction? Coffee Study Finds More Health Benefits
David Noonan

It’s enough to make a tea drinker buy an espresso machine. In a new study
scientists in Germany report they were able to modify a common age-related defect
in the hearts of mice with doses of caffeine equivalent to four to five cups of
coffee a day for a human. The paper—the latest addition to a growing body of
research that supports the health benefits of drinking coffee—describes how the
molecular action of caffeine appears to enhance the function of heart cells and
protect them from damage.

It remains to be seen whether these findings will ultimately have any bearing on
humans, but Joachim Altschmied of Heinrich Heine University in Düsseldorf, who led
the study with his colleague Judith Haendeler, says “the old idea that you
shouldn’t drink coffee if you have heart problems is clearly not the case anymore.”

Previous research had suggested as much. For example, a 2017 report in the Annual
Review of Nutrition, which analyzed the results of more than 100 coffee and
caffeine studies, found coffee was associated with a probable decreased risk of
cardiovascular disease—as well as type 2 diabetes and several kinds of cancer. The
new paper, published Thursday in PLOS Biology, identifies a specific cellular
mechanism by which coffee consumption may improve heart health.

The study builds on earlier work in which the two scientists showed caffeine ramps
up the functional capacity of the cells that line blood vessels. The drug does so
by getting into cells and stoking the mitochondria, structures within the cells
that burn oxygen as they turn glucose into energy. “Mitochondria are the powerhouses
of the cells,” Haendeler says. One of the things they run on is a protein known as
p27. As Haendeler and Altschmied discovered (and describe in the current paper),
caffeine works its magic in the major types of heart cells by increasing the amount
of p27 in their mitochondria.

After the researchers induced myocardial infarction in the mice during their
experiments, the extra stores of p27 in the caffeinated cells apparently prevented
damaged heart muscle cells from dying. The paper says the mitochondrial p27 also
triggered the creation of cells armed with strong fibers to withstand mechanical
forces, and promoted repairs to the linings of blood vessels and the inner chambers
of the heart. To confirm the protein’s importance, the scientists engineered mice
with a p27 deficiency. Those mice were found to have impaired mitochondrial
function that did not improve with caffeine.

The researchers also looked at caffeine’s potential role in modifying a common
effect of aging in mice and humans: reduced respiratory capacity among
mitochondria. (In this context “respiratory” refers to a complex sequence of
biochemical events within the organelle.)

For this part of the experiment, 22-month-old mice received caffeine—the daily
equivalent of four to five cups of coffee in humans—in their drinking water for 10
days. That was sufficient to raise their mitochondrial respiration to the levels
observed in six-month-old mice, according to the study. Analysis showed the old
mice had roughly double the amount of p27 in their mitochondria after the 10 days
of caffeine.

Although this latest news about the potential health benefits of coffee involves
just a single animal study, tea drinkers might well feel they are coming out on the
wrong end of the coffee equation. According to the National Coffee Association, 64
percent of Americans 18 and over drink at least one cup of coffee a day, with an
average daily consumption of 3.2 cups. Three cups of a typical breakfast tea
contain less than 150 milligrams of caffeine, compared with the nearly 500
milligrams in the same amount of brewed coffee. So tea drinkers might wonder if
they are missing out on a potential health benefit and should start drinking the
other stuff.

“Absolutely not,” says Donald Hensrud, medical director of Mayo Clinic's Healthy
Living Program. “You have to enjoy life, and if you enjoy tea, keep on enjoying it.
It’s all good. There are health benefits to coffee, to black tea and to green tea.”
But there can also be problems associated with higher doses of caffeine, he notes.
The amount in more than two cups of coffee a day, for example, can interfere with
conception and increase the risk of miscarriage. And, he says, because individuals
metabolize caffeine at different rates, slow metabolizers may be more susceptible
to side effects such as heartburn, insomnia, heart palpitations and irritability.

Haendeler, who drinks six cups of coffee a day, says it can be part of a healthy
lifestyle—but is no miracle cure. And she is quick to point out there are no
shortcuts to good health. “If you hear about this study and decide to drink coffee
but you do nothing else—no exercise, no proper diet—then, of course, this will not
work,” she says. “You cannot simply decide, ‘Okay, I’m sitting here and drinking
four, five or six cups of coffee and everything is fine.'”
David Noonan

David Noonan is a freelance writer specializing in science and medicine.

