Ayush Khare

Coded Color | Design Project | 2020


Coded Color


The advancements in AI technology have brought back to the fore several questions about the philosophical foundations of identity and self, motivated predominantly by ethical concerns around automated decision-making. Emerging developments in artificial intelligence make it possible for technologies to take part in identity construction. We are constantly exposed to machines and screens where we invent our digital selves, create avatars, and augment ourselves with digital data. These digital imprints depart from reality, and our awareness of self adjusts to digital technologies that blur the boundary between us and our devices.

In this age of technology, machine-learning algorithms are used every day by people around the world, influencing interactions, opinions, and thoughts and, in turn, shaping society. Human thinking is characterised by categorisation, and categorisation is a built-in byproduct of human-designed AI. These embedded categorisations can echo and amplify problematic social perceptions, such as racial stereotypes.
This work looks at the algorithmic bias of artificial intelligence-based image-generating software by digitally constructing a portrait of the self, then using that portrait as a tool to engender self-awareness and to dive deeper into these biases through a collaborative performance between human and machine. Automated systems are not inherently neutral: they reflect the priorities, preferences, and prejudices of those who have the power to mould artificial intelligence.
This gradual change in the nature of how we interact with the world puts forth certain questions:
Who designs the algorithms, and how are they deployed? 
Who decides what level of accuracy is acceptable? 
Who decides which applications of the technology are ethical? 
Are these systems inclusive? 
This series of portraits was generated using www.artbreeder.com, a website built on the StyleGAN2 neural network. ArtBreeder categorises faces into six races: Arabic, Asian, Black, Indian, Latino-Hispanic, and White, and lets users alter these features with sliders. By shifting a dedicated slider within the range set by the algorithm, the user can make a face look "more Asian" or "more Black." If a slider is extended beyond its limit, however, the system starts falling apart, and in this breakdown one gets a closer look at the algorithm.
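Conceptually, such sliders tend to correspond to directions in the generator's latent space. The sketch below is a minimal illustration of that idea in plain Python with placeholder vectors; Artbreeder's internals are not public, so the exact mechanism, the 512-dimensional latent size, and the slider values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(512)               # base latent code for the constructed face (placeholder)
race_direction = rng.standard_normal(512)  # learned direction for one "race" attribute (placeholder)

def apply_slider(latent, direction, alpha):
    """Shift the latent code along an attribute direction by the slider value alpha."""
    return latent + alpha * direction

# Within the calibrated range, the edited code still decodes to a plausible face.
edited = apply_slider(w, race_direction, alpha=0.8)
# Far past the intended limit, the code drifts off the face manifold,
# which is where the generated portrait visibly falls apart.
broken = apply_slider(w, race_direction, alpha=25.0)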

For this project, a still image of a face was constructed with the intention of creating a self-portrait; the values of the racial sliders were then changed in even, gradual increments.
The process was repeated for all six sliders, resulting in a total of 540 portraits generated from the same base image. The resulting portraits were arranged in six typological grids, which show the gradual increments, the breakdown point, and the subsequent falling apart of the system until the face is reduced to nothing but a splash of colour.
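The sweep itself is simple to describe in code. The sketch below assumes the 540 portraits were split evenly across the six sliders (90 steps each); generate_portrait() is a hypothetical stand-in for the manual Artbreeder step, and the maximum slider value, tile size, and grid width are illustrative only.

from PIL import Image

SLIDERS = ["Arabic", "Asian", "Black", "Indian", "Latino-Hispanic", "White"]
STEPS = 90        # even increments per slider (540 portraits / 6 sliders, assuming an even split)
ALPHA_MAX = 9.0   # deliberately past the slider's intended limit (illustrative value)
TILE = 128        # thumbnail edge length in pixels

def generate_portrait(slider: str, alpha: float) -> Image.Image:
    """Hypothetical stand-in for one Artbreeder render; here just a flat colour swatch."""
    shade = int(min(alpha / ALPHA_MAX, 1.0) * 255)
    return Image.new("RGB", (TILE, TILE), (shade, shade // 2, 255 - shade))

def build_grid(frames, cols=10):
    """Tile equally sized frames, left to right and top to bottom, into one typological grid."""
    rows = (len(frames) + cols - 1) // cols
    grid = Image.new("RGB", (cols * TILE, rows * TILE))
    for i, frame in enumerate(frames):
        grid.paste(frame, ((i % cols) * TILE, (i // cols) * TILE))
    return grid

for slider in SLIDERS:
    frames = [generate_portrait(slider, ALPHA_MAX * step / (STEPS - 1)) for step in range(STEPS)]
    build_grid(frames).save(f"grid_{slider}.png")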

The outcome was a series of typological grids of self-portraits that prompts introspection on self-identity in the age of artificial systems, while also addressing the limitations of today's algorithmic decision-making systems.

This project was featured in Lines of Sight, 2021. 


Thank you.