Trust
Similar to us in structure but still somewhat different.
I find myself intrigued by them and trusting them to generate an added meaning to my stories.
But what is trust? And when I trust AI am I trusting the algorithm? Or do I trust that it has similar ethics and logic, trusting its humanity?

To trust an algorithm, do we need to know that it is doing what it was intended to do, without mistakes? Or is it enough simply to know how it works?

Algorithms are being trained by the British Metropolitan Police to pick
up images of child pornography online. At the moment, they are getting very confused by images of sand dunes. The contours of the dunes seem to correspond to shapes the algorithms pick up as curvaceous, naked body parts.

For a short period in the summer of 2018, if you Googled “idiot,” the first image that appeared was one of Donald Trump. Users on the website Reddit understood the powerful position their forum holds on the internet and how they could exploit it: Google’s algorithm ranks the most upvoted and most linked-to posts first. By getting people to upvote a post consisting of the word “idiot” next to an image of Trump, they fooled the algorithm into assigning top ranking to that post for any idiot-searchers.

LabSix, an independent, student-run AI research group composed of MIT graduates and undergraduates, managed to confuse image-recognition algorithms into thinking that a model of a turtle was a gun. The way they tricked the algorithm was by layering a texture on top of the turtle that to the human eye appeared to be turtle shell and skin but was cleverly built out of images of rifles. The images of the rifle were gradually changed over and over again until a human couldn’t see the rifle anymore. The computer, however, still discerned the information about the rifle even when it had been perturbed and that information
ranked higher in its attempts to classify the object than the turtle on which it was printed.
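The trick LabSix used can be sketched in miniature. The toy below is not their method or their model: it uses an invented linear “classifier” and random pixels purely to show the principle, namely that nudging every pixel a tiny amount in the direction that raises the classifier’s score can change the verdict while the image barely changes to a human eye.

```python
import numpy as np

# Hypothetical stand-in for an image classifier: a logistic scorer
# over a flattened 64-pixel "image". The weights are random -- a
# real attack would target a trained deep network instead.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def rifle_score(x):
    """How 'rifle-like' this toy model thinks the image is (0..1)."""
    return 1 / (1 + np.exp(-(x @ w)))

turtle = rng.uniform(0, 1, size=64)  # invented "turtle" image
x = turtle.copy()

# Repeatedly move each pixel a tiny step in the direction that
# increases the score (the gradient's sign), keeping pixels valid.
# This is the gradient-following idea behind such perturbations.
eps, steps = 0.005, 20
for _ in range(steps):
    x = np.clip(x + eps * np.sign(w), 0, 1)

print(rifle_score(turtle), rifle_score(x))
print(np.abs(x - turtle).max())  # no pixel moved by more than eps*steps
```

The perturbed image differs from the original by at most 0.1 per pixel, yet the score climbs steadily, because every step pushes along the direction the model cares about most.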

Researchers at Google went one step further and created images that were so interesting
to an algorithm that it would ignore whatever else was in the picture, exploiting the fact that algorithms prioritize pixels they regard as important to classifying the image.
The Google team created psychedelic patches of colour that totally took over and hijacked the algorithm, so that, while it had previously always been able to recognize a picture of a banana, once the psychedelic patch was introduced any banana disappeared from its sight.
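The patch idea can also be illustrated in a toy form. The sketch below is invented for illustration (a made-up linear scorer, an 8×8 “image”): it shows only the underlying principle, that a small region filled with maximally “important” pixel values can dominate the classifier’s decision and override everything else in the picture.

```python
import numpy as np

# Invented linear scorer over a flattened 8x8 "image":
# a negative score means "banana" to this toy model.
rng = np.random.default_rng(1)
w = rng.normal(size=64)

def score(img):
    return float(img.reshape(-1) @ w)

# A faint image the toy model confidently calls a banana:
# every pixel leans slightly against the weights.
banana = -np.sign(w).reshape(8, 8) * 0.1

# Build a 4x4 "patch": each patch pixel is set to the extreme
# value that pushes the score upward. These saturated, high-
# salience pixels outweigh the faint banana signal elsewhere.
patch = np.sign(w.reshape(8, 8)[:4, :4])

patched = banana.copy()
patched[:4, :4] = patch

print(score(banana), score(patched))  # negative, then positive
```

Only a sixteenth of the image changed, but because the patch occupies exactly the pixel directions the model weighs most heavily, the banana vanishes from the model’s verdict.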

Humans are not as easily fooled by these tricks, but that’s not to say we’re immune from similar effects. Magicians rely on our brain’s tendency to be distracted by one thing in our visual field and to completely overlook something else they’re doing at the same time.


This is where the power of an algorithm that can continue to learn, mutate, and adapt to new data comes into its own. Machine learning has opened the prospect of algorithms that change and mature as we do.