Balancing privacy and ethical rights when detecting harmful behaviors

Over the past few years, companies like Google, Twitter, and Facebook have had to walk a fine line between policing negative behaviors such as hate speech, terrorism, and fake news, and supporting basic rights of free speech online. With the increase in aggressive, terrorist, cyberbullying, hateful, and racist communication, as well as expressions of depression or suicidal thoughts on social media, there is growing pressure on, and incentive for, these organizations and researchers (like myself) to develop fully or semi-automated methods to detect these negative behaviors. However, as I pointed out in my dissertation, balancing the prevention of these harmful behaviors with privacy and ethical rights is a challenge, and it is a conversation that needs to be had.

In my PhD research, for instance, I aimed to build antisocial behavior detection models that could be incorporated into real-world systems and act as early warning systems for security enforcement organizations or institutions like schools. Although the research respected privacy rights, this type of detection always raises concerns about the access to and analysis of personal information, and about whether it is just a step away from ‘big brother’.
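
To make the “early warning” idea concrete, here is a minimal, purely illustrative sketch of what such a semi-automated filter might look like: a simple text classifier that scores posts and routes only the high-confidence cases to a human reviewer. The tiny inline dataset, the scikit-learn pipeline, and the 0.8 review threshold are assumptions made for illustration, not the models or settings from my dissertation.

```python
# Illustrative sketch only: a TF-IDF + logistic regression filter that flags
# potentially harmful posts for human review. The training examples and the
# review threshold are placeholders, not real research data or tuned values.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = potentially harmful, 0 = benign.
posts = [
    "you are worthless and everyone hates you",
    "had a great time at the park today",
    "I will hurt you if you show up again",
    "looking forward to the weekend hike",
]
labels = [1, 0, 1, 0]

# Word and bigram TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def triage(post, threshold=0.8):
    """Route a post to a human reviewer only when the model is fairly confident."""
    p_harmful = model.predict_proba([post])[0][1]
    return "send to human review" if p_harmful >= threshold else "no action"

print(triage("nobody would miss you if you were gone"))
```

The important design point, and the reason such systems are described as semi-automated, is that the model never acts on its own: it only surfaces candidates, and a person makes the final call.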

For Facebook, with its almost two billion users and its sophisticated algorithms, privacy and ethical rights are always a question mark. This year, for example, it introduced a semi-automated depression detection algorithm that analyzes posts, spots patterns that are potentially suicidal, and sends them to the Facebook team for an appropriate response [1, 2]. Previously, this identification process was entirely manual.

It is true that a lot of data is public, and that users on certain platforms give away the rights to their data. This makes it easier to develop these detection algorithms; however, a Facebook or Twitter user may not wish to have their data analyzed for purposes to which they have not given consent. This was observed, for instance, when it was reported in June 2014 [3] that Facebook had manipulated what appeared in users’ news feeds to influence their moods and then observed how that manipulation was reflected in their status updates. Even though Facebook users agree to its data policy when using the application, people are often not aware of what is being done with their data, and it is hard to draw the line between what is acceptable and what infringes on privacy rights. An argument could, of course, be made that we should simply accept that anything posted on social media platforms is not private.

Ethical and privacy issues raise real concerns that affect the whole enterprise of developing and using algorithms to detect harmful behaviors. If data is public or accessible, should researchers or organizations still ask for consent to use it for research or development purposes? If so, from whom should consent be acquired? With so much data available, it is the task of researchers and practitioners to make sure that the data acquired is only used for good. But how do we measure what is good and what might be considered invasive or an infringement of free speech? It might be argued that if data and technology are used to prevent crimes or improve quality of life, then their use should be allowed. However, the same reasoning gives organizations an excuse to automatically monitor and block the posting or sharing of messages that are deemed harmful from their perspective. With such policing, one might wonder whether uprisings like those in Egypt in 2011, which made use of social media to organize, schedule, and spread the protests, would have been possible [4, 5].

It is hard to see what a good solution to this would be, but perhaps, taking advice from Ray Kurzweil [6], the answer to these ethical and privacy concerns is a set of standards established through a broad social discussion between technologists and society, within and across different societies.

 

References:

[1] Matt Burgess, (6 March 2017), How tech giants are using AI to prevent self-harm and suicide, Wired, http://www.wired.co.uk/article/facebook-safety-self-harm-suicide-ai-instagram (visited on 2017-04-27).

[2] Natt Garun, (1 March 2017), Facebook leverages artificial intelligence for suicide prevention, The Verge, http://www.theverge.com/2017/3/1/14779120/facebook-suicide-prevention-tool-artificial-intelligence-live-messenger (visited on 2017-04-27).

[3] Kashmir Hill, (28 June 2014), Facebook manipulated 689,003 users’ emotions for science, Forbes, https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook-manipulated-689003-users-emotions-for-science/#4736d97197c5 (visited on 2017-04-27).

[4] Erick Schonfeld, (16 February 2011), The Egyptian behind #Jan25: “Twitter is a very important tool for protesters”, TechCrunch, https://techcrunch.com/2011/02/16/jan25-twitter-egypt/ (visited on 2017-04-29).

[5] Sam Gustin, (11 February 2011), Social media sparked, accelerated Egypt’s revolutionary fire, Wired, https://www.wired.com/2011/02/egypts-revolutionary-fire/ (visited on 2017-04-29).

[6] Bill Joy and Ray Kurzweil, (12 July 2001), Future shock: High technology and the human prospect, Hoover Institution, http://www.hoover.org/research/future-shock-high-technology-and-human-prospect (visited on 2017-04-27).


Also published on Medium.