Disinformation

Your opinion at the start - stage 1/6



Disinformation is “the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain”. It has been described as a threat to personal freedom and to democracy, but few people are fully aware of how it works. There is disagreement on how, if at all, it should be regulated.


Researched and written by Paul Eustice. Edited by Perry Walker. We are very grateful to Maeve Walsh of the Carnegie UK Trust for her expert advice. This is a fast-moving area, so we should say that the text was finalised at the end of July 2019.

Drag these using the arrow symbol so that they are in order, most preferred at the top

  • Regulation is unnecessary
  • By better self-regulation

    This means that organisations modify their practices and processes in response to identified harms, without oversight by or accountability to an external body.

  • By relying on increased media literacy

    Social media firms might be willing to fund a levy to support media literacy in return for remaining unregulated.

  • By appointing an independent regulator

    This would be someone ‘at arm’s length’ from government but accountable to it.

What is disinformation?

The House of Commons Digital, Culture, Media and Sport Committee reported on disinformation in February 2019, and the definition on the previous screen is from its report (1). Viewed objectively by a third party, disinformation is:

  • deliberately dishonest
  • always harmful.

But the Committee carefully separates ‘disinformation’ from ‘misinformation’, which it defines as “inadvertent sharing of false information” (1:12), and from ‘fake news’.

What is ‘fake news’?

‘Fake news’ refers both to passing on false news by genuine mistake and, more dangerously, to refusing to acknowledge as true a report that is accurate but which we don’t like or find inconvenient.

It is a dangerous term because it encourages the idea that ‘mainstream media’ are corrupt or biased, so what they say should be ignored. This encourages a scepticism that, ironically, makes people more vulnerable to propaganda: they stop reading factual reports from authoritative sources and rely instead on material shared by friends and contacts over social media, with no yardstick by which to measure ‘truth’ or reliability. Disinformation can then be fed into that situation to exploit it. There is a deliberate movement by some politicians to encourage this distrust:

President Trump says reporters are among “the most dishonest people in the world.” Research firm Gallup tells us two out of every three U.S. adults don’t believe the news and that trust in mass media has sunk to an all-time low. (2)

Why people spread disinformation

Disinformation is generated for a number of reasons:

- to grab attention and, once a reader has clicked on a story…

- to then promote advertising or products to that user. For example, the Infowars website deliberately spread the lie that a massacre at a US school did not happen. With the attention thus gained, it sold products online. (38, 82)

- to undermine trust in authority for political or social disruption, for example by spreading stories that directly counteract official sources

- to target people with particular political views, with the intention of influencing their voting intention at the time of an election.

Political disinformation

This is disinformation to promote a political agenda. Demonstrably incorrect statements include the claims “that former US President Barack Obama was born in Kenya and that Pope Francis endorsed the candidacy of Donald Trump”. (3 pp12-13)

The harms of disinformation

In April 2019 the government published a White Paper on Online Harms. Disinformation was one of the problems listed because it “can threaten public safety, undermine national security, fracture community cohesion and reduce trust.” (4:1.24)

When the internet is deliberately used to spread false or misleading information, it can harm us in many different ways, encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world. It can also damage our trust in our democratic institutions, including Parliament. (ibid p70)

Why it is hard to spot disinformation

“It is hard to differentiate on social media between content that is true, that is misleading, or that is false, especially when those messages are targeted at an individual level” (1:302). In addition, when a post has been shared thousands of times, people are more likely to assume that it is verified.

Part of the problem lies in the way algorithms work. Algorithms (see the definition below if the term is new to you) decide what information an individual receives. They allow for ‘micro-targeting’, directly personal to one user, who constantly provides information, often unknowingly, that allows the process to be refined and made more effective. Bots then amplify some kinds of information and suppress others. This creates a kind of ‘filter bubble’ and ‘echo chamber’ in which a user’s prejudices are confirmed and extended and their world narrowed. (See below for definitions of these terms.)

YouTube was criticised for allowing conspiracy theories and false news to attain top rankings. In response, it altered its algorithms to control content that “comes close to — but doesn’t quite cross the line” of violating its rules. Such content will not be deleted, but will be harder to find. YouTube has also “developed software to stop conspiracy theories from going viral during breaking news events” (5).

David and Goliath

Large social media firms can be richer and more powerful than national governments (36), and even when they are not, powerful politicians can be complicit with them. In asking whether we should regulate them, we also need to ask whether we can.

Definitions of key terms

Algorithm. “An algorithm is a set of instructions telling a computer how to organize a body of data—in this case, how to choose one type of content and reject another. A user interface algorithm then determines how content is arranged on the screen.” (3 p17)

The user makes choices and shows preferences, and the computer, using algorithms, uses those to refine the process and show only what is likely to be wanted.

This can lead to a “filter bubble”, where the algorithm uses someone’s search history and preferences to deduce what they would like to see and leaves out anything they would not. This apparently innocent service tells the user what exists and what is important (top of the search list, or most often shared with you), and effectively isolates them from society by surrounding them with only a very selective view, feeding a narrow world view and never allowing in anything that would disagree with it. As users do not know what choices are being made for them, they do not know how they are being manipulated.
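As a rough illustration of that feedback loop, here is a toy Python sketch. It is not any platform’s real code (actual recommender systems are far more complex and proprietary), but it shows how clicks feed preferences, and preferences feed the ranking:

```python
# Toy model of a preference-driven feed: every click strengthens a topic
# preference, and the feed is re-ranked around it. Illustrative only.
from collections import defaultdict

def rank_feed(posts, interests):
    """Order posts by how strongly their topic matches recorded interests."""
    return sorted(posts, key=lambda p: interests[p["topic"]], reverse=True)

def record_click(interests, post):
    """Each click makes that topic more likely to be shown again."""
    interests[post["topic"]] += 1

interests = defaultdict(int)  # no preferences recorded yet
posts = [{"topic": "sport"}, {"topic": "politics"}, {"topic": "science"}]

record_click(interests, posts[1])  # the user clicks one political story...
record_click(interests, posts[1])  # ...and then another

# Political content now dominates the feed: the start of a 'filter bubble'.
print([p["topic"] for p in rank_feed(posts, interests)])
# -> ['politics', 'sport', 'science']
```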

A related term is echo chamber, describing how, because of algorithms, your own views are fed back to you and amplified, and you rarely see opinions that are different from or opposite to yours. This interacts with the human tendency toward confirmation bias, where we tend to seek out or analyse information in ways that will confirm what we already believe.

Bots is a term based on “robot”, referring to an autonomous program on a network which can interact with systems or users, often via social media accounts or profiles that purport to be real people but are just machines sending thousands of posts. They can have a major influence on opinion and behaviour without the user being aware of it. This could be short term, during an election, or long term, altering a general mood or atmosphere. Bots are able to decide whom to send their posts to for the best effect, using information about users gleaned by algorithms.
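A minimal Python sketch of the mechanics. The posting function here is a placeholder (a real bot would call a social network’s API and try to disguise its timing and identity):

```python
# Toy bot: several fake accounts repeat the same messages to simulate a crowd.
# send_post() is a stand-in for a real social-media API call.
import time

MESSAGES = [
    "Breaking: officials are hiding the truth about this story!",
    "Don't trust the mainstream coverage - share before it's deleted!",
]

def send_post(account: str, text: str) -> None:
    print(f"[{account}] {text}")  # a real bot would POST to a platform API here

def run_bot(accounts, messages, repeats=3, delay_seconds=1):
    """Blast identical messages from several fake accounts, with pauses."""
    for _ in range(repeats):
        for account in accounts:
            for text in messages:
                send_post(account, text)
        time.sleep(delay_seconds)  # spacing posts out looks slightly less robotic

run_bot(["@concerned_mum_82", "@real_patriot_uk"], MESSAGES)
```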

Normally, we would know who sent messages. An IP address (Internet Protocol address) is a unique series of numbers that identifies the computer, tablet or smartphone from which information was sent. A server is usually a computer with a large capacity that provides material to a computer network. Once you know the server or source, you know who provided the information. But a proxy server acts as an intermediary, disguising the original source, so material that appears to come from the UK could come from somewhere else.
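A short Python sketch of the idea, using the widely used requests library. The proxy address below is a placeholder from a reserved test range, not a real server:

```python
# Demonstrates how a proxy changes the apparent origin of web traffic.
import requests

# Placeholder proxy (203.0.113.0/24 is reserved for documentation);
# substitute a real proxy address to run the second request.
proxies = {
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}

# Without a proxy, the site sees this machine's own IP address.
print(requests.get("https://httpbin.org/ip", timeout=10).json())

# Through a proxy, the site sees the proxy's IP instead: the true
# origin of the traffic is disguised.
try:
    print(requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10).json())
except requests.exceptions.RequestException:
    print("Proxy unreachable (the address above is only a placeholder).")
```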

A sock puppet Twitter account is an account apparently in one name but actually operated by someone else. It might be a man pretending to be a young boy or girl, or a politician pretending to speak for their opponent.

As an individual, to ‘troll’ on social media is to make inflammatory comments, designed to create an emotional response. A troll farm is an organisation doing this on a larger scale, for example the Russian Internet Research Agency. (41)

Sources


1 House of Commons Digital, Culture, Media and Sport Committee, Disinformation and ‘fake news’: Final Report, Eighth Report of Session 2017–19, 14 February 2019. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf
2 House of Commons Digital, Culture, Media and Sport Committee, Disinformation and ‘fake news’: Interim Report: Government Response to the Committee’s Fifth Report of Session 2017–19, Fifth Special Report of Session 2017–19, 17 October 2018. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1630/1630.pdf
3 NYU Stern Center for Business and Human Rights, Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation, November 2017. https://static1.squarespace.com/static/547df270e4b0ba184dfc490e/t/59fb7efc692670f7c69b0c8d/1509654285461/Final.Harmful+Content.+The+Role+of+Internet+Platform+Companies+in+Fighting+Terrorist+Incitement+and+Politically+Motivated+Propaganda.pdf
4 https://www.gov.uk/government/consultations/online-harms-white-paper
5 https://www.washingtonpost.com/technology/2019/01/25/youtube-is-changing-its-algorithms-stop-recommending-conspiracies/?noredirect=on&utm_term=.b270a461d2e3
6 https://fullfact.org/
7 https://www.ofcom.org.uk/__data/assets/pdf_file/0020/31754/Fourth-internet-safety-report.pdf
8 https://theconversation.com/social-media-doesnt-need-new-regulations-to-make-the-internet-safer-gdpr-can-do-the-job-111438
9 https://www.oii.ox.ac.uk/blog/dont-panic-over-fake-news/
10 https://www.pnas.org/content/112/33/E4512
11 https://eu.usatoday.com/story/opinion/2018/09/13/google-big-tech-bias-hurts-democracy-not-just-conservatives-column/1265020002/
12 https://edition.independent.co.uk/editions/uk.co.independent.issue.010419/data/8847821/index.html
13 https://www.legislation.gov.uk/ukpga/1998/42/schedule/1/part/I/chapter/9
14 https://www.ohchr.org/en/issues/freedomopinion/pages/standards.aspx
15 The IACHR is a principal and autonomous organ of the Organization of American States ('OAS') http://www.oas.org/en/iachr/expression/showarticle.asp?artID=26
16 https://www.bureaubrandeis.com/justice-brandeis-on-freedom-of-speech/?lang=en
17 https://www.splcenter.org/fighting-hate
18 https://www.nature.com/articles/d41586-018-07034-4
19 NYU Stern Center for Business and Human Rights, Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation, November 2017. https://static1.squarespace.com/static/547df270e4b0ba184dfc490e/t/59fb7efc692670f7c69b0c8d/1509654285461/Final.Harmful+Content.+The+Role+of+Internet+Platform+Companies+in+Fighting+Terrorist+Incitement+and+Politically+Motivated+Propaganda.pdf
20 https://www.digitaltrends.com/social-media/social-network-should-governments-moderate/
21 https://ec.europa.eu/digital-single-market/en/news/factsheet-action-plan-against-disinformation
22 https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
23 https://www.digitaltrends.com/social-media/social-network-should-governments-moderate/
24 https://www.gov.uk/government/speeches/margot-james-speech-on-safer-internet-day
25 https://edition.independent.co.uk/editions/uk.co.independent.issue.080419/data/8859166/index.html
26 Remarks delivered at US Helsinki Commission Briefing “Lies, Bots, and Social Media,” November 29, 2018. https://medium.com/@nina.jankowicz/social-media-self-regulation-has-failed-heres-what-congress-can-do-about-it-5b38b6bf9840
27 https://www.ofcom.org.uk/__data/assets/pdf_file/0014/82112/2015_adults_media_use_and_attitudes_report.pdf
28 https://www.newsguardtech.com/
29 https://berify.com/blog/fake-social-media/
30 https://static1.squarespace.com/static/547df270e4b0ba184dfc490e/t/59fb7efc692670f7c69b0c8d/1509654285461/Final.Harmful+Content.+The+Role+of+Internet+Platform+Companies+in+Fighting+Terrorist+Incitement+and+Politically+Motivated+Propaganda.pdf
31 https://blogs.lse.ac.uk/mediapolicyproject/2018/10/25/media-literacy-what-are-the-challenges-and-how-can-we-move-towards-a-solution/
32 https://www.youtube.com/watch?feature=share&v=JgkvTRz_Alo&app=desktop
33 https://www.dlas.org.uk/
34 https://theconversation.com/government-regulation-of-social-media-would-be-a-cure-far-worse-than-the-disease-92008
35 https://www.theguardian.com/politics/2019/apr/03/grassroots-facebook-brexit-ads-secretly-run-by-staff-of-lynton-crosby-firm
36 https://theconversation.com/who-is-more-powerful-states-or-corporations-99616
37 https://www.politico.eu/article/boris-johnson-brexit-bus/
38 https://uknowit.uwgb.edu/page.php?id=30276
39 https://izea.com/2019/02/06/top-youtube-vloggers/
40 https://www.whoishostingthis.com/resources/credible-sources/
41 https://www.bbc.co.uk/news/technology-43093390