
Committee of Infrastructure Part 2

Committee of Infrastructure Part 2 interrogates issues of agency, representation, and intention within the domain of machine learning and artificial intelligence. As part of Driverless Government, the project considers how humans and AI systems interact and negotiate with each other in a local government setting. In this virtual council meeting in Los Angeles, human representatives and representatives of AI systems negotiate issues pertaining to a community. The meeting is set at city hall five years in the future, on June 1, 2022. Committee of Infrastructure focuses on the digital, non-physical dimensions of the council meeting: specifically, language and moving image.

Find out more about Committee of Infrastructure Part 1 here, and to read about the whole project, Driverless Government, click here.

Year
2017–2018

Discipline & Research Area
Design Research, Speculative Design, Civics, Industrial IoT, Artificial Intelligence, and Machine Learning

Responsibilities
Research, Concept Development, World-building, and Data Collection

Ballot Measure: Proposal to Create an Autonomous Intersection and to Remove Traffic Lights

Although a number of different agenda items are to be discussed, the project focuses on one issue: the speculative scenario of removing traffic lights to create a fully autonomous intersection. Autonomous cars will have the ability to sense all types of objects through machine vision and proximity detection. These autonomous cars will communicate with different Industrial Internet of Things (IoT) devices, such as smart streetlights and speed sensors, to avoid collisions and move efficiently through traffic. These cars will also need to detect pedestrians and cyclists. Pedestrians will have their own collection of sensors to communicate with autonomous vehicles and other moving entities: their presence will be registered not only by machine vision detection from the smart streetlights but also by sensors embedded on the body. Our smartphones already carry sensors such as a magnetometer, GPS, gyroscope, accelerometer, and proximity sensor, and I imagine that some of these sensors will be embedded in clothing. Lastly, groups that have not previously been given agency can now have their interests represented by AI systems and humans. For example, machine vision allows for the detection of non-human living beings such as birds, insects, rodents, and dogs. These animals are part of the urban environment, and their livelihood is also important. Machine vision could be used to understand the behavior of animals in the urban environment and protect them from being injured or harmed by autonomous cars.
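As a rough illustration of how such an intersection might arbitrate among the entities it senses, the sketch below fuses hypothetical streetlight-vision detections and proximity-sensor readings into a simple right-of-way decision. This is a minimal sketch of the idea, not the project's actual system; the Detection structure, class names, priority order, and thresholds are my own assumptions.

```python
from dataclasses import dataclass

# Hypothetical detection record fused from streetlight cameras,
# proximity sensors, and vehicle-to-infrastructure messages.
@dataclass
class Detection:
    kind: str          # "vehicle", "pedestrian", "cyclist", or "animal"
    distance_m: float  # distance from the center of the intersection
    speed_mps: float   # current speed of the detected entity

# Assumed priority order: the most vulnerable road users go first.
PRIORITY = {"pedestrian": 0, "animal": 1, "cyclist": 2, "vehicle": 3}

def grant_right_of_way(detections: list[Detection]) -> Detection | None:
    """Pick which entity may enter the intersection next.

    Only entities inside an assumed 10 m conflict zone are considered;
    ties are broken by the priority order above, then by distance.
    """
    in_zone = [d for d in detections if d.distance_m <= 10.0]
    if not in_zone:
        return None
    return min(in_zone, key=lambda d: (PRIORITY[d.kind], d.distance_m))

if __name__ == "__main__":
    snapshot = [
        Detection("vehicle", 8.0, 6.5),
        Detection("pedestrian", 4.0, 1.2),
        Detection("animal", 9.5, 0.8),
    ]
    print(grant_right_of_way(snapshot))  # -> the pedestrian record
```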


Figure 1: The diagram illustrates how the intersection will work after the removal of traffic lights at Sunset Blvd. and N. Alvarado St.; proximity sensors, speed sensors, and computer vision will work together to account for and govern the different types of traffic (vehicular, pedestrian, cyclist, and animal).

Stakeholders: AI & Human Representatives

As introduced in Driverless Government, data can be used by machine learning to represent constituents, issues, and non-humans. In Committee of Infrastructure, different stakeholders, both human and AI, are present at the council meeting. I will focus on three groups, each advocating for the issues that matter to its agency.

The first group represented is the LA Department of Transportation. Their focus is on optimizing the intersection to work as quickly and efficiently as possible. Autonomous cars will become central to traffic conditions in the future. Therefore, traffic engineers and AI systems will prioritize personal vehicles as the most important factor in traffic optimization. 

The second group is LA Street Smarts, which advocates for pedestrian safety with a specific focus on child safety.

Lastly, People for the Ethical Treatment of Animals (PETA) represents the interests of animals in the urban environment.


Figure 2: This meeting stakeholder chart for the LA City Council proposes a mix of human advocates, AI representatives, human engineers, and AI experts all negotiating with each other on a ballot measure.

Virtual Council Meeting

In order to give form to the idea of AI representatives, I started with language. City council meeting transcripts provided the space to work in, and each agency would construct its arguments within the confines of the meeting. I used machine learning, specifically Andrej Karpathy's char-rnn algorithm, to generate the transcript and the arguments of the different groups; a minimal sketch of the character-level approach follows the list of training sources below.

Figure 3: Human and AI representatives arguing for their respective organizations in the vernacular learned by training the ML algorithm on seminal texts important to the ethos of each organization.

Specifically, I made the transcript by training the neural net on City Council meeting transcripts from February 10, 2017, to March 8, 2017.

For the LA DOT, I trained the neural net on two sources:
• LADOT Transportation Impact Study Guidelines / December 2016
• Cesar Chavez’s speeches from the book An Organizer’s Tale / 2008

I decided to include speeches by Cesar Chavez to give the organization a voice that models the labor union of engineers who will be performing incredibly complex work, both in the engineering problem space and in the politics of labor.

In order to represent the interests of pedestrians, I trained the neural net on these two sources:
• Jan Gehl’s Cities for People / 2010
• Pedestrian Safety: A Road Safety Manual for Decision Makers and Practitioners / World Health Organization

Finally, to represent PETA, I trained the char-rnn model on the book that radicalized PETA's founder, Ingrid Newkirk:
• Peter Singer’s Animal Liberation / 2009
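For readers unfamiliar with char-rnn, the sketch below shows the basic idea in PyTorch terms: a character-level recurrent network is trained to predict the next character of a corpus, then sampled to generate new text in that corpus's vernacular. This is a minimal stand-in written for illustration, not Karpathy's original Torch implementation and not the project's actual code; the file name, hyperparameters, and training budget are assumptions.

```python
import torch
import torch.nn as nn

# One organization's training texts; "ladot_corpus.txt" is a placeholder name.
text = open("ladot_corpus.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharRNN(nn.Module):
    """Character-level language model in the spirit of char-rnn."""
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.rnn(self.embed(x), state)
        return self.head(out), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = nn.CrossEntropyLoss()
data = torch.tensor([stoi[c] for c in text])

# Train on random 100-character windows, predicting the next character.
seq_len = 100
for step in range(1000):  # assumed training budget
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample new "meeting dialogue" one character at a time.
idx = torch.tensor([[stoi[text[0]]]])
state, generated = None, []
with torch.no_grad():
    for _ in range(500):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        idx = torch.multinomial(probs, 1).unsqueeze(0)
        generated.append(itos[idx.item()])
print("".join(generated))
```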

Machine Vision

As a final illustration of the argument between the three stakeholders present at the meeting, I created a set of videos. Each organization uses the same data set but changes how it is analyzed to promote its respective agenda; the videos argue for what is important to each organization. Object classification and proximity sensors are used to create the different forms of analysis shown in the videos. LADOT is concerned only with traffic optimization. LA Street Smarts is interested only in displaying pedestrian injury risk factors while crossing the street. PETA uses object classification to draw attention to all the animals present in the urban environment.
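The sketch below, my own illustration rather than the project's actual video pipeline, shows how one shared detector could serve three different agendas simply by filtering its output classes differently. It assumes a COCO-pretrained detector from torchvision; the per-stakeholder class groupings, category ids, threshold, and frame file name are hypothetical.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# One shared, COCO-pretrained detector; each stakeholder filters its output.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Hypothetical groupings of COCO category ids per stakeholder agenda.
AGENDAS = {
    "LADOT": {3, 6, 8},                   # car, bus, truck
    "LA Street Smarts": {1, 2},           # person, bicycle
    "PETA": {16, 17, 18, 19, 20},         # bird, cat, dog, horse, sheep
}

def analyze(frame_path: str, stakeholder: str, threshold: float = 0.6):
    """Run the shared detector, keep only the classes this stakeholder cares about."""
    img = convert_image_dtype(read_image(frame_path), torch.float)
    with torch.no_grad():
        out = model([img])[0]
    return [
        (label.item(), score.item(), box.tolist())
        for label, score, box in zip(out["labels"], out["scores"], out["boxes"])
        if score >= threshold and label.item() in AGENDAS[stakeholder]
    ]

# Example: the same intersection frame, read three different ways.
# for who in AGENDAS:
#     print(who, analyze("sunset_alvarado_frame.jpg", who))
```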

The videos also show how machine learning can be incredibly efficient and yet at times make mistakes a human would rarely make; in effect, its extreme sensitivity to all stimuli creates errors. It is important to consider that these systems are imperfect and subject to human bias. Although each organization works from similar footage, their motivations are revealed by how they classify and analyze the data to their own benefit.

Figure 4: PETA’s image classification algorithm over-optimizes its computer vision analysis, leading to the incorrect classification of vehicles and people as animals.

Figure 5: Los Angeles Street Smarts image classification and misclassification; computer vision’s edge detection is easily fooled by shadows.


At Ars Electronica

I had the privilege of exhibiting Committee of Infrastructure as part of a group show at Ars Electronica in Linz, Austria in September 2017.

Photos courtesy of Hao Zhang and Kiana Bahramian

Reflection & Development

This project helped me consider the implications of the growing pervasiveness of AI and machine learning. I believe that a new set of principles needs to be created to deal with this technology as its impact continues to grow and influence our day-to-day interactions. Tech leaders have floated ideas about how to deal with AI; Bill Gates, for example, has proposed taxing robots that replace human jobs. I believe it is critical to continue developing guidelines and principles that consider the cultural and humane contexts of these systems. Creating a set of ethical guidelines is just as valuable as the code used to make machine learning and AI work.

For my own work, imagining a way to talk about AI by borrowing from the government was a valuable way to create a dialogue about the ethics of AI systems.

Data Provenance
Explicitly stating what the data is and what is missing could provide a working document that helps disclose bias and helps computer scientists improve their algorithms to include a more varied set of data, while also giving the public insight into how these systems are made.
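As a sketch of what such a working document could look like for this project, the structure below records each training corpus, its coverage, and its known gaps. The fields and the example entry are my own assumptions, not an established standard.

```python
from dataclasses import dataclass, field

# A minimal provenance record for one training corpus.
@dataclass
class ProvenanceRecord:
    corpus: str                  # what the data is
    source: str                  # where it came from
    date_range: str              # period the data covers
    represents: str              # whose voice or interest it encodes
    known_gaps: list[str] = field(default_factory=list)  # what is missing

RECORDS = [
    ProvenanceRecord(
        corpus="LA City Council meeting transcripts",
        source="City of Los Angeles public records",
        date_range="2017-02-10 to 2017-03-08",
        represents="the procedural language of council meetings",
        known_gaps=["only one month of meetings", "English-language speakers only"],
    ),
]

for record in RECORDS:
    print(f"{record.corpus}: missing -> {', '.join(record.known_gaps)}")
```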


© Jason Wong. All Rights Reserved.