After analysing the texts, I made some installations that reflect the way I saw and
received the generated stories. They were mainly focused on the way the AI worked
and how I could translate this process from a digital work into a physical one.
With the experiments, it became clear that gesture played a part in those translations.
It affected how people looked at the AIs,
and I became more aware of the way I see them as well. I had first seen them as tools
that would help me trick or arrange my mind, but at this stage, I see them more as
different characters that have different ways of handling the task at hand.
Their individuality became more pressing, and I started seeing them as colleagues.
Does this mean that they became human to me? How does their humanity come across?
And do I trust that they are the ones choosing to write in a specific way,
or is it all up to their algorithms?
I know that the AIs’ input is probably text written by humans, but the way they
construct their sentences suggests that they are making a choice in how they
portray themselves.
What would make AI be seen as beings more than tools?
Would revealing the characteristics of the AIs make us perceive them more as beings,
as selves?
I found out that the AI was modelled on the human brain and nervous system.
It processes information in a way loosely analogous to how our brains do.
Maybe it is closer to us than we would like to admit, and if the neural network is
built like our brains, then where is the separation?
I found researchers agreeing on the similarities between us and the AI, but with
one difference: creativity.
People agree that the process is similar, but as humans, we like
to think that creativity comes from different factors and places, not only from the brain.
One of my first trials, called Pixelated Vision, was a video of a blurry human figure reciting the text generated by one of the AIs. AI1 was the first AI whose voice I wanted to hear, or whose human presence I wanted to see, because working with it was very comfortable and interesting. I guess here you can already start seeing my biases towards AI1: on some level I wanted to meet it, not the person or persons who wrote the code, but the result of this code that is continuously learning from itself, the internet, and recently my stories.
The Pixelated Vision trial was shown at FMI to students and tutors during a feedback session, and even though it was successful in giving more impact and legibility to the AI’s words, it became clear to me that giving a human voice to an AI is not as easy as I had hoped, since voices carry cultural indications that can be perceived unintentionally and need to be thought through: is it a native English speaker, or does it have an accent? What tone does it use, and which words are accentuated?
This also relates to a question that I had early on in the research: what is the linguistic impact on the transformation of these stories, given that the starting stories were memories or dreams mostly experienced in Arabic, then written in English, given to an AI that uses binary code as its logic language, let’s say, and now spoken by a non-native English speaker? Different languages tend to emphasise different parts of a text or assign different visuals to things. //This has not become part of my thesis; it remains a question that every now and then could be seen in the thesis, but handling it will take more research in a different direction than where I’m headed//
This trial also showed that it is easier to trust a story when it comes from a human figure. But what does it mean to trust? Trusting that the story is real, or trusting that the AI is doing its assigned job?