Wednesday, September 9, 2015

Seminar 4 Position Statement: Prof Ben O'Loughlin, Royal Holloway, Univ. of London

The Right Not to Know

Royal Holloway, Univ. of London
Andrew Hoskins and I argue that the convergence of two shifts, one in the connective politics of conflict and catastrophe, the other in the connectivity of the self, together generates the impossibility of claiming ignorance. A few weeks ago an open letter from a Syrian village was posted on Twitter in English: "[The] Assad regime is killing us and destroying our city. You are all responsible for our death. Your silence is keeping him strong." Crisis mapping, satellite surveillance, citizen and professional news reporting, NGO reporting: the Syrian villagers assume we see and we know. Surveillance and sousveillance are conditions of this impossibility of ignorance. This impossibility is a defining challenge of the digital age, partly because it manifests itself across the levels of real-world politics, culture, technology and self – an entire ecology of knowing – that are often seen and treated as disconnected (and thus hived off for abstract enquiry). This is not the same as the right to be forgotten or the right to connect. A response requires something greater than the sum of these parts, hence we are asking what a right not to know – notably a right that was not required of earlier media ecologies – would look like.

Tuesday, September 8, 2015

Seminar 4 Position Statement: Zach Blas, Goldsmiths, Univ. of London



Informatic Opacity
 By Zach Blas

Artist and writer whose work engages technology, queerness, and politics. 
Department of Visual Cultures, Goldsmiths, Univ. of London


Confronting the rapidly increasing, worldwide reliance on biometric technologies to surveil, manage, and police human beings, queer, feminist, and anti-racist practices of opacity have gained in popularity as a means of political struggle against surveillance and capture technologies in the 21st century. Utilizing biometric facial recognition as a paradigmatic example, I argue that today’s surveillance requires persons to be informatically visible in order to control – or capture – them, and such informatic visibility relies upon the production of technical standardizations of identification to operate globally, which most vehemently impact non-normative, minoritarian populations. Thus, as biometric technologies turn exposures of the face into a site of governmentality, activists and artists strive to make the face biometrically opaque and refuse the political recognition biometrics promises through acts of masking, escape, and imperceptibility.

I broadly theorize such refusals to visually cohere to digital surveillance and capture technologies’ gaze as “informatic opacity,” an aesthetic-political theory and practice of anti-normativity at a global, technical scale whose goal is maintaining the autonomous determination of alterity and difference by evading the quantification, standardization, and regulation of identity imposed by biometrics and the state.

During this talk, I will also discuss two art projects I have recently completed: Facial Weaponization Suite, a series of masks and public actions, and Face Cages, a critical, dystopic installation that investigates the abstract violence of biometric facial diagramming and analysis. 

Seminar 4 Position Statement: Dr Ben Worthy, Birkbeck, Univ. of London



 By Dr Ben Worthy, Birkbeck, Univ. of London
 Through what media, cultural, activist and commercial forms do people learn about transparency issues? What are the dominant messages on transparency?
There are a number of ways through which the public learn about transparency issues. First, transparency often appears in media and political discourse as a solution to crises. Scandals, exposés or shocks, from political corruption to financial crashes, either create a demand for greater openness or the lack of it is defined as a cause (Roberts 2012). Increased openness is also frequently offered by governments or organisations as a symbol of their ‘difference’ from predecessors/competitors or their commitment to certain values and ways of working. Second, as Fenster (2015) points out, transparency has also ‘captured the popular imagination’ through narratives about whistleblowing or ‘heroic leaks’ such as the MPs’ Expenses or Snowden. Third, consequently, transparency over the last decade has entrenched itself within political discourse as a kind of universal good that is both an instrumental means to a number of positive outcomes (such as improved trust or accountability) and an end in itself (Heald 2006; Meijer 2013). It is, moreover, an idea that is universally supported across the political spectrum (Birchall 2014). The existence of mechanisms such as Freedom of Information laws provides daily reminders in the media of the role and value of openness.
Underneath this universal veneer, transparency can be many things. Indeed, it is in some senses an ‘empty signifier’ that can be ‘filled’ by very different interpretations or emphases (Stubbs and Snell 2014). Below are just three examples:
·      Transparency as Political Empowerment: it is a highly politicised instrument of empowerment, embodying different democratic norms and values (Fenster 2012)
·      Transparency as Policy Solution: it can be a ‘dramatically satisfying answer to every crisis and question about the state’ (Fenster 2015).
·      Transparency as Economic Improvement: it is a means of increasing efficiency and even wealth, connected to a ‘consumer-citizen’ idea of delivery and performance measurement.

Its dominant message is fundamentally contested. There is a constant, highly politicised struggle to define which of these (or many other) meanings transparency has and what it can and should do (Yu and Robinson 2012; Fenster 2015). For governments it is often imbued with very particular, often neo-liberal, conceptions of state-society relations. More radical conceptions see it as a weapon against exactly these ideas (Birchall 2014). The question of what sort of transparency is created, of whom and by whom, exposes the complex politics underneath (Berliner 2014). Julian Assange and David Cameron are both vocal supporters of transparency, but it is unlikely they agree on what it means and whom it should affect. On a symbolic level, transparency can be a radical weapon of empowerment, a tool of modernisation and a means of demonstrating that an organisation is more ethical, more honest or more trustworthy.
There is rarely a clear distinction over how transparency is produced: is it through an FOI request, a leak or whistleblowing? Transparency can be seen as a continuum or spectrum, with government press releases at one end and Snowden at the other. It is most often government that delineates what it sees as the legal ‘limits’ around openness on the borders, for example, of FOI laws or secrecy legislation. It frames the narrative over where transparency begins and ends.
Yet the exact limitations are constantly moving. Disclosures through leaks, semi-authorised disclosures and ‘plants’, innovations such as Open Data, and ‘radical’ actions like WikiLeaks can all kick-start transparency and gradually shift where the border lies between ‘open’ and ‘closed’ or ‘legal’ and ‘illegal’ (Posen 2013). Meaning is greatly complicated by the closing off of certain issues, not least the transparency of citizens through government surveillance, a rarely mentioned aspect of the wider transparency debate that is frequently disconnected or separated (Birchall 2014).

Do people care about liberal transparency (holding power-holders to account)? Do people care about ubiquitous transparency (where their own private lives are open for inspection)?
What evidence can be gleaned of how the public views transparency points to a rather mixed and nuanced understanding. There is a broad public awareness of some formal means of transparency, e.g. Freedom of Information laws, and a general (if vague) support for them. In terms of leaks and ‘radical transparency’ such as Snowden and WikiLeaks, public opinion is unclear: while there exists a powerful supportive ‘folklore’ about whistleblowing, expectations and concerns over, for example, national security can divide opinion as to the ethics and effects (Roberts 2012a; Fenster 2012). Some fascinating experiments indicate that the public support and are reassured by the presence of transparency mechanisms but have little desire to use them, instead preferring to rely on other citizens to operate them and unleash the benefits (see De Fine Licht 2012 and De Fine Licht et al. 2014).
Similarly with privacy, there is an awareness of rights and a sense that it is an important issue: surveys register a continual hum of concern over confidential information, data protection and privacy. But this does not appear to generate a general concern or ‘push’ for particular things to happen. Instead there appear to be reactions to sudden ‘punctuated’ privacy ‘scandals’, e.g. as seen in the UK over the care of data and the security of personal health information. In some ways, public opinion probably reflects the nuance of an issue that does not really have an obvious or permanent solution, the basis of which is continually challenged and outstripped by technology.

Is there a disconnect between transparency representations and public opinion, and if so, how should it be addressed? Do we have a healthy public debate on transparency issues? What would improve its quality?
There are numerous disconnects between public opinion and transparency:
·      Context is key: Although transparency is seen as a ‘good thing’, the battle over what it means and its limits undoubtedly raises a series of competing and contradictory issues. Transparency overlaps with the ethics of leaks, privacy and national security. The view held by the public of any kind of transparency at any one time is highly context-dependent. A leaker of classified information like Snowden may be viewed very differently from the anonymous leaker of MPs’ expenses.

·      Flawed assumptions: The underlying idea of transparency, that information empowers citizens as rational calculators, is misplaced, though politicians continue to press it. All receivers of information have biases, heuristics and assumptions that shape ideas and views and may interrupt the flow or change the meaning of disclosed information. All transparency systems and instruments are shaped by the environment in which they are created and their political context (Meijer 2013).

·      Competing visions and meanings: The debate over transparency is ongoing but may further complicate discussion rather than resolve it as different sides pull against each other. Governments seek a de-politicised (or re-directed) transparency focused on efficiency or improving services while activists seek greater openness of different parts of the state (and increasingly the private sector). The different language and aims may push discussion in divergent directions.

Select Bibliography

Berliner, Daniel (2014). ‘The Political Origins of Transparency’. The Journal of Politics, 76(2), 479-491.

Birchall, C. (2014). ‘Radical Transparency?’. Cultural Studies ↔ Critical Methodologies, 14(1), 77-88.

De Fine Licht, J., Naurin, D., Esaiasson, P. and Gilljam, M. (2014). ‘When Does Transparency Generate Legitimacy? Experimenting on a Context-Bound Relationship’. Governance, 27(1), 111-134.

Fenster, M. (2015). ‘Transparency in Search of a Theory’. European Journal of Social Theory, 18(2), 150-167.

Heald, D. (2006). ‘Transparency as an Instrumental Value’. In C. Hood and D. Heald (eds), Transparency: The Key to Better Governance? Oxford: Oxford University Press for the British Academy.

Meijer, A. (2013). ‘Understanding the Complex Dynamics of Transparency’. Public Administration Review, 73(3), 429-439.

Stubbs, Rhys and Snell, Rick (2014). ‘Pluralism in FOI Law Reform: Comparative Analysis of China, Mexico and India’. University of Tasmania Law Review, 33(1), 141-164.

Yu, Harlan and Robinson, David G. (2012). ‘The New Ambiguity of “Open Government”’. UCLA Law Review Discourse, 59, 178.

Roberts, Alasdair S. (2012). ‘Transparency in Troubled Times’. Tenth World Conference of the International Ombudsman Institute, November 2012; Suffolk University Law School Research Paper 12-35.


Seminar 4 Position Statement: Dr Andrew McStay, Bangor Univ.


The Case of Empathic Media in Advertising
By Dr. Andrew McStay, Bangor Univ

My take on this seminar topic stems from what I term ‘empathic media’. Developed in my recent book Privacy and Philosophy: New Media and Affective Protocol (2014), this odd-sounding expression has less to do with sympathy than with technologies able to interpret people and their environments by means of text, images, facial recognition, speech, behaviour, gesture, skin responses, respiration and bodily movement. Each of these involves mediation of emotional transparency by means of arousal, social-semiotic practices and behaviour.

This is a relatively new dimension to the transparency surveillance question that will become more pronounced as smart cities discourses are increasingly realised. For a tangible example, this year M&C Saatchi has tested advertising billboards with hidden Microsoft Kinect cameras that read viewers’ emotions and react according to whether a person’s facial expression is happy, sad or neutral. This is the first example of artificial intelligence (albeit a limited sort) being used in urban environments.

At this stage very little data is being collected, but this information will be very useful to media owners in charting the performance of media sites across cities. This information will surely be irresistible to authorities.

Bioreactive empathy was also evident at Wimbledon this year. In partnership with Wimbledon, Maido and Lightwave, Mindshare launched a campaign called Feel Wimbledon. This captured moods and emotions of the Wimbledon crowd by means of heart rate variability, localized audio, motion and skin temperature of 20 fans in the crowd, via sensor-equipped wristbands. This allowed Jaguar to create ‘living ads’ by means of visualising fluctuating emotions.

This provides us with some foresight into the implications of wearables. Feel Wimbledon received full consent from participants, but if (and I admit it’s a big if) wearables become embedded in everyday life, emotionally sensitive empathic media will grant advertising greater insight into our emotions through how we speak to our mobile devices, more granular facial recognition, and emotional insights derived from our heart rates, respiration patterns and how our skin responds to stimuli. A bit weird I know, but we’re already a good part of the way there.

Most notably in the M&C Saatchi campaign, the artificial intelligence part comes in as soft biometric feedback from viewers provides data by which ads improve themselves (for example, by using elements that win smiles rather than grimaces).
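
To make that feedback loop concrete, the sketch below is a minimal, hypothetical Python illustration of an ad rotation that gradually favours whichever creative variant has drawn the most smiles. It is not M&C Saatchi’s system: the variant names, the expression-detector stub and all numbers are invented for illustration only.

import random

# Hypothetical sketch: rotate between ad variants and favour the one with the
# best historical smile rate (a simple epsilon-greedy feedback loop).
VARIANTS = ["variant_a", "variant_b", "variant_c"]
stats = {v: {"shows": 0, "smiles": 0} for v in VARIANTS}
EPSILON = 0.1  # fraction of impressions spent exploring a random variant

def detect_expression():
    # Stand-in for a camera-based classifier returning 'happy', 'sad' or 'neutral'.
    return random.choice(["happy", "sad", "neutral"])

def choose_variant():
    # Explore occasionally; otherwise exploit the variant with the best smile rate.
    if random.random() < EPSILON or all(s["shows"] == 0 for s in stats.values()):
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: stats[v]["smiles"] / max(stats[v]["shows"], 1))

for _ in range(1000):  # one loop iteration per passer-by "impression"
    shown = choose_variant()
    stats[shown]["shows"] += 1
    if detect_expression() == "happy":
        stats[shown]["smiles"] += 1

print({v: round(s["smiles"] / max(s["shows"], 1), 2) for v, s in stats.items()})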

As it stands, empathic media do not require personal information. This fact means that data can be more easily collected, processed and shared. Although there are right and proper questions to be asked about re-identification and whether it can truly be separated from personally-identifiable information, the industry is betting big on the fact that it can be bundled as ‘non-spooky’ because it is legally compliant. This presents an interesting conundrum because data protection and privacy concerns are typically based on the principle of identification, not intimacy.



Seminar 4 Position Statement: Simona Levi, Xnet



Xnet’s Fight against Corruption through Transparency Tools
By Simona Levi

Xnet (formerly EXGAE) is a group of activists who have worked since 2008 in different fields related to online democracy, the creation of mechanisms for organised citizen participation, and holding the seats of power and institutions to account. We defend a free and neutral Internet; the free circulation of culture, knowledge and information; citizen journalism and the right to know, to report and to be informed; the legal, technical and communications struggle against corruption; and technopolitics, understood as the practice of networking and taking action for empowerment, for justice and for social transformation.
Transparency and open access to information are the best strategy against corruption. We are driving initiatives behind the leaks in some of the major corruption cases shaking Spanish politics today. These cases have emerged thanks to the fundamental participation of organised citizens.

The 15MpaRato project is a citizen initiative that filed the initial lawsuit and that drives the Bankia Case in the National Court, uncovering the scams behind Bankia's listing on the stock exchange, the preferred shares and the 'Black' Credit Cards. Xnet has built a Mailbox for Citizen Leaks Against Corruption, responsible for the discovery of Blesa's Emails and the Black Credit Cards scandal, and has recently launched a news blog with selected information that citizens have securely and anonymously provided through the Mailbox: a collaborative space for open-source journalism in the struggle for transparency and against corruption.

In a context where governments and institutions are accomplices in the abuses imposed, the most important part of Xnet's work is citizen empowerment, so that people can be the active agents of change.

Seminar 4 Position Statement: Dr Simon Rice, Information Commissioner’s Office



Obtaining Information
By Dr Simon Rice
Group Manager (Technology)

Being transparent about how you process personal data is a legal requirement under the Data Protection Act 1998. The processing of personal data must be fair and fairness generally requires you to be transparent. If you are asking for an individual’s consent then providing clear and comprehensive information is also a fundamental part of ensuring that the consent is valid.

Transparency is important when individuals have a choice about whether they wish for their personal data to be processed in a particular manner but also in situations where that choice may be limited.

The ICO is also the regulator of the Freedom of Information Act 2000, the Environmental Information Regulations 2004 and the Re-use of Public Sector Information Regulations 2015, each of which requires public sector organisations to make certain information available either proactively or on request (unless a valid exemption exists).

Whilst these laws give individuals strong opportunities to obtain information, this doesn’t lessen the need (or legal requirement) for organisations to proactively inform and educate about their data processing activities, especially when these impact on the individual’s private life.





Seminar 4 Position Statement: Paul Bradshaw, Birmingham City Univ.



Mediating Transparency through Data Journalism and Data Visualisation
Associate Prof., Birmingham City Univ.

Data journalism and data visualisation are essential techniques in mediating transparency initiatives. Data visualisation and interactivity have proven to be particularly successful ways to bring previously dry topics to a much wider audience, while data journalism allows journalists to make transparency data intelligible to the wider public in the first place. However, with both there is the danger of data ‘churnalism’ and misrepresentation. Transparency initiatives themselves are a form of power which needs to be held to account: the selection and collection of data is itself an exercise of power. And data visualisation can give information a patina of credibility which the underlying data does not always possess. Journalists not only need to be more able to interpret and communicate data – they also need to be more critical in interpreting the same work when done by others.

Seminar 4 Position Statement: Dr. Madeline Carr, Aberystwyth University



Invisible Challenges of the Information Age
International Politics and the Cyber Dimension,
Aberystwyth University

One of the problems with generating an engaged public debate about data security, privacy and transparency is that the practice of collecting and sharing data is largely invisible and inaccessible to most people. Debate tends to focus, therefore, on key events: public ‘leaks’ like WikiLeaks and Edward Snowden, large-scale data breaches like the Target credentials theft and the Sony hack, and the everyday experience of search queries generating correlated advertising results in our browsers and social media.

 These tangible and somewhat more visible examples can facilitate discussion but they fail to engage with some of the significant technological, political and commercial changes that face us in the very near future. Migration to IPv6 and the related development of the ‘Internet of Things’ both raise serious questions about informed consent, about accountability and about the legitimate control of personal data. 

Ensuring that civil society interests and human rights are protected as we transition to this next phase of the Information Age is essential, but it is not being driven by an informed public debate. The focus of this workshop, which seeks to explore the cultural resources that inform public approaches to norms and practices of surveillance, is an important contribution to the DATA-PSST project.


Seminar 4 Position Statement: Dr Gilad L. Rosner, Internet of Things Privacy Forum




The Distraction of Transparency & Consent when Understanding Privacy
By Dr Gilad L. Rosner
Founder, Internet of Things Privacy Forum
Visiting Researcher, Horizon Digital Economy Research Institute
Member, Cabinet Office Privacy and Consumer Advocacy Group
bit.ly/grosner  |  gilad@giladrosner.com
@GiladRosner  |  @IoTPrivacyForum

 
Transparency does not achieve privacy goals. It is part of what are commonly known as 'fair information principles.' As the name states, these principles are about achieving fairness, not privacy. Moreover, the principle of transparency has its roots in the concept of autonomy: if you do not know how data about you is being collected or used, then you cannot be a fully autonomous human being. However, there is a tremendous emphasis on transparency and consent as vital goals in the treatment of personal data; goals that serve the aims of privacy. I argue that transparency and consent are de minimis considerations that do not serve privacy goals. In their absence, data collectors are merely spying on people, but in their presence, they serve only to inform, not to protect. Commercial entities are very proud of their transparency efforts, but the danger is that ballyhooing transparency (and consent) distracts from more important privacy considerations: user control, rights, informational self-determination, the bias towards opt-out, and others.

Seminar 4 Position Statement: Dr. Yuwei Lin, University for the Creative Arts




Grassroots Online Activism for Shaping the Public’s Views on Privacy and Surveillance
By Dr. Yuwei Lin
University for the Creative Arts

My long-term research interests in free/open source software communities have allowed me to observe grassroots, community-driven approaches to addressing contemporary issues such as privacy, surveillance, censorship and transparency. A number of free/open source software tools have been developed for protecting user anonymity on the Internet. A popular tool for journalists and activists is 'Tor' – “free software and an open network that protects users' privacy and helps users defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security”. The success of the Tor project relies on a network of volunteers who operate servers to form a series of virtual tunnels that users around the world can access. Croeser (2012) has coined the term 'the digital liberties movement' to describe how communities and social movements use the Internet to build a sense of collective identity and a master frame that ties together issues around online censorship and surveillance, free/libre and open source software, and intellectual property. Activists perform various activities on the Internet, including blogging, campaigning, distributing messages within or beyond their social circles, and strategically gathering collective force to tackle landmark issues at a specific time. Along this line, I am interested in understanding how human actors from different backgrounds (free/libre open source software developers, journalists, NGO workers) work together to tackle their shared concerns (privacy, surveillance) through forming networks of activism, and how internet activism serves as a cultural resource for informing the public’s views on privacy and surveillance.
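
By way of illustration, the sketch below shows the onion-routing idea behind those virtual tunnels: the client wraps a message in one layer of encryption per relay, and each relay can strip only its own layer, learning just the next hop. It is a simplified sketch, not Tor's actual implementation (real Tor negotiates keys per circuit rather than sharing a key table); the relay names are invented, and the third-party Python 'cryptography' package is assumed.

# Minimal onion-routing sketch (illustration only, not the Tor implementation).
from cryptography.fernet import Fernet

relays = ["entry", "middle", "exit"]                      # a three-hop circuit
keys = {name: Fernet.generate_key() for name in relays}  # one key per relay

def wrap(message, path):
    # Client side: encrypt for the exit relay first, then wrap outwards.
    for name in reversed(path):
        message = Fernet(keys[name]).encrypt(message)
    return message

def unwrap(onion, path):
    # Each relay in turn removes exactly one layer with its own key.
    for name in path:
        onion = Fernet(keys[name]).decrypt(onion)
    return onion

onion = wrap(b"request to example.org", relays)
print(unwrap(onion, relays))  # b'request to example.org'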

Reference:
Croeser, Sky (2012). 'Contested technologies: The emergence of the digital liberties movement'. First Monday 17(8). URL: http://firstmonday.org/ojs/index.php/fm/article/view/4162/3282

Seminar 4 Position Statement: Dr. Dan McQuillan, Goldsmiths, Univ. of London



Towards Algorithmic Solidarity
Goldsmiths, Univ. of London


1. There is no transparency, only ways of seeing;
2. The new way of seeing is algorithmic and predictive;
3. Machine learning moves us from 'seeing like a state' to 'seeing like a secret state';
4. Cultural representation of this has hardly begun (it's not 1984);
5. There is as yet no counter-seeing or algorithmic solidarity, only cryptographic obfuscation.

Seminar 4 Position Statement: Wifak Gueddana, LSE




My user, my data! When my IP was sold ten thousand times
LSE


Saying that the digital economy has taken over the global economy can only mean that the Internet has grown into a big, global market, where people are situated on a one-way spectrum ranging from internet service providers, or ISPs (producers), to users (consumers).

On that basis, the process that connects what ISPs do (services), how they assess and rate their performance (the rhetoric of supply) and the marginal value they add is increasingly determined by users’ data (a market resource). The scale of this digital data exploitation (collection and processing) is also bound to become more invasive of people’s rights and privacy. Finally, while many of the industry practices and technological advances in this field are still understudied and obscure, the decisions that are taken today will shape this industry and our future.

By way of simplification, let’s start by agreeing that most ISPs are, at the same time, advertising companies. I define advertising companies as entities whose core business involves, among other things, surveilling the activity of their services’ users in order to increase the visibility of third parties and help them sell their products. For example, Google was first known by users as a web search company; now we know that Google is in reality an ad company. Similarly to social media companies such as Facebook, Google processes the data of its services’ users and sells it. From this perspective, how such companies influence standards and technological advances in the performance (or quality) of social networks and search algorithms is now conditioned by their internal strategy as advertising agents. This was not necessarily the case before.

IPs, browsers and individuals’ browsing histories (clicks on links) have been stored by ISPs and websites from the beginning of the web’s history; nothing is new there. This kind of data was kept in cookie files and stored on users’ computers so as not to overload the servers of websites and ISPs. A user requires this kind of information for her personal use, in case she wants to go back to previous search results or remember some information. This kind of data is also positively rated by websites because it speeds up their performance. The same logic applies in the case of companies producing search and recommendation algorithms. These have designed search solutions which query users’ recorded information because this process helps speed up, or improve the relevance of, search results for the user. This is to say that much of the users’ data that was collected before was collected for the benefit of the users, i.e. to provide them with options and improve the design of services and features.

A turn is marked when this information is no longer stored only on one’s hard disk but also on the servers of the websites and ISPs. Why? First, this is explained by storage capacity, which is now very cheap. Second, the nature of this information (typed-in personal data), its volume (number of users) and the potential for connectivity (one key type for many datasets) are also very alluring. More importantly, users’ information is increasingly recorded by the servers because it is continuously processed. Most ISPs have now established standards and algorithms to mine, filter, visualise, re-order and categorise their users’ data for sale to third parties – advertising companies or companies in other industries. For example, tracking cookies, clicks and search histories are common surveillance technologies which are now used to compile long-term records of individuals’ browsing behaviour. These datasets are mined and filtered by theme, industry and keyword, then sold accordingly as an unnamed category – that is, without disclosing the personal identity of the owner. Thus, all the rows in a dataset represent IPs, or browsers, or some sort of site ids referring to users who have searched or shopped or clicked on, let’s say, a ‘travel’ product. These are put together in a category named ‘travel’ and valued at a certain price. Those ISPs who track names or emails may today be considered illegal and have their activity restricted.
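
To make that categorisation step concrete, the sketch below is a hypothetical Python illustration of how clickstream records keyed only by a pseudonymous browser id might be grouped by keyword into saleable interest segments such as ‘travel’, with no names attached. All ids, keywords and records here are invented for illustration; this is not any particular company’s pipeline.

# Hypothetical sketch: group pseudonymous browsing records into interest segments.
from collections import defaultdict

CATEGORY_KEYWORDS = {
    "travel": {"flight", "hotel", "holiday"},
    "finance": {"loan", "mortgage", "insurance"},
}

clickstream = [
    {"browser_id": "browser-a91f", "query": "cheap flight to rome"},
    {"browser_id": "browser-a91f", "query": "hotel near termini"},
    {"browser_id": "browser-07bc", "query": "fixed rate mortgage"},
]

segments = defaultdict(set)  # category -> set of pseudonymous ids
for record in clickstream:
    words = set(record["query"].split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            segments[category].add(record["browser_id"])

# Each segment can then be priced and sold as an "unnamed" audience.
print({category: sorted(ids) for category, ids in segments.items()})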

While we are undoubtedly dealing with a techno-cultural condition of increased and normalized tracking, the law has so far acted as a firewall, trying to keep up with industry standards and technological advances. In 2011, European and US laws prompted companies who use tracking cookies to take action towards getting the 'informed consent' of users. There will probably be a similar law on search records, or on all of the individuals' records that are collected by ISPs to feed customization and recommendation algorithms. While current advances in digital law bring increased public awareness and protection for users, it is somewhat too late to stop the massive scale of digital data exploitation and its possible negative repercussions on users’ freedom.

Indeed, to derive meaningful information about individuals and groups from huge datasets that do not necessarily talk to each other, it is to be expected that aggregation procedures and reductionism will be brought to unprecedented scales. In other words, big data analytics amplifies the scope and extent of statistical analysis and the probabilistic laws of large numbers. Such a situation means that people will increasingly be less unique and more quantifiable. They will also slowly lose control over their data, which will continue to circulate between datasets unbounded. Data are regularly and systematically re-purposed to be sold. Their transactional value, as defined by the potential for re-combinability and circulation, outweighs their semantics or utility. Such a situation can lead to cases where the transactional value of data about someone or something is more significant than the ethical value of that someone or something. In a society where the question of ethics is underscored in the political discourse, I think that social science academics and scholars have a responsibility to bring forward a revised framework for digital ethics.