Unit 2: Future of the Mind – Telepathy context / research / my work

One of the things that captivated me following our initial group ideation session around Futures was the notion that technology could progress to the point where not only human-computer mental interfacing becomes possible, but human-human interfacing too, i.e. telepathy (the communication of ideas and thoughts by means other than the senses).

The idea of telepathy first caught on in Western culture in the 19th century, following the spread of spiritualism (communing with the spirit realm and the dead) and animal magnetism or mesmerism (whereby healing supposedly occurs via induced trances and hypnotism). These pseudosciences flourished in response to the fantastical advances in science that were making the world at once more understood and more mysterious. Why should we trust only our senses if we are made up of microscopic cells (cell theory was only formulated in 1839), and if time can be relative (the theory of relativity arrived in 1905)?

Hilma af Klint was interested in spiritualism, and can also be credited with some of the earliest abstract art, exploring automatic drawing in an attempt to visualise this non-visible reality. I encountered some of her sketches (a few are shown below) at the Moderna Museet in Stockholm, and enjoyed how naive and free they seemed. The nesting and interaction of colourful organic forms, and the looping, swooping lines, also appeal.

Magicians, meanwhile, used physiological cues to perform ‘thought reading’ stunts, which continue to this day. Indeed, many of the trends seen in the 19th century reared their heads again during the New Age movement of the 1970s, e.g. the idea of ‘channelling’ spirits or the collective unconscious via trances to gain new information. This took a strange turn, as documented in the book and film adaptation of The Men Who Stare at Goats, with the American military hoping to harness psychic agents for intelligence-gathering (and also, bizarrely, attempting to harm or kill psychically).

This idea of psychic ability being used as a weapon or military advantage is also explored in fiction, e.g. in Star Trek, with the Vulcan mind meld appearing in its first season, and later with empaths such as the Betazoids and the hive mind of the Borg. The notion of ‘hacking’ or mind control of another person by means of such interfacing is a central theme of fiction such as Ghost in the Shell. This relies, too, on the notion of interconnected technological knowledge and AI systems within a ‘cyberspace’ – a concept William Gibson popularised in his novel Neuromancer before the web existed, and a term now used to refer to the internet itself.

For me, though, the interest lies in the consequences of such technology. If we were able to communicate telepathically, would this make language redundant? Would we lose language, particularly in our more intimate relationships? Telepathy could be a force for good here, answering a central human desire to be understood by enabling us to fully and intimately know our romantic partners and significant others. But what might be lost from our current relationships, and would this change be destructive or positive?

There is already a unique mode of communication between romantic partners: a secret language used only within that context, formed of in-jokes, pet names, and particular phrases or patterns of speech that you build together. In a world where we could communicate without language, these would become defunct. I explored some ideas for how we could record these future dead languages, to be housed in future museums.

Below is a work I encountered in an exhibition at the Whitechapel Gallery, curated to explore a post-language society (here conceived as a post-apocalyptic eventuality). It explores communication through a personal visual language; the forms are curious, but I’m unsure whether they truly communicate (though perhaps they are recognisable to a native Spanish speaker!). It’s intriguing to see colourful, somewhat organic forms appearing here again.

Returning to my idea… The thought of sharing these languages, which have previously been intimate and private between two people, is intriguing, but it left me unsure whether I could in good conscience ask people to share them with me openly. That hesitation is interesting in itself. Making open and shareable something entirely private mirrors the notion of sharing our innermost thoughts with others via telepathy. It is certainly an uneasy future being imagined.

What, then, could be more private than our own sense of self, our inner eye? What might it mean for our perceptions of self if we could be fully aware of how others perceive us, and view ourselves through their eyes? Would this exacerbate or destroy the current situation of ‘selfie culture’, whereby we feel pressured to curate our online image to the point that our bodies and our lives appear perfectly manicured (whether doctored through Photoshop or filters or not), and comparing ourselves to the online images of others damages our mental health? The obsession with picturing ourselves in any and all situations can be seen as narcissistic and superficial, but it reveals our humanity too: our desire to understand ourselves, to fit in and be understood by others, to mark our place in the world and confirm that yes, I do exist. But this conflict between our inner world and how we appear externally is hard to process – and body dysmorphia and eating disorders are on the rise.

Experiment with a selfie feedback loop using the webcam on my computer and my phone

I very rarely take selfies. My Facebook profile picture for several years has shown me in sunglasses that obscure much of my face. This is not out of any particular desire to be unknown or undocumented. I admittedly do see flaws in my appearance, and suffer that horror when the camera is accidentally facing the wrong way as you turn your phone on. So I never spontaneously feel the urge to take one – a selfie feels contrived to me, though I understand it can be different for others! I was interested, then, to explore this possibility of others having complete knowledge of my appearance, and to engage in a process that exposed me more than I would usually be comfortable with. To invade my own privacy.

To do so, I recorded my appearance during a typical evening at home with my fiancé. By attaching a head-mounted GoPro to him, I hoped to approximate his point of view and capture this notion of a self-image seen through someone else’s eyes. Below, I have edited together only the moments when I was in frame. It provides a disjointed account of the time spent making dinner, and the conversation appears surreal.

The camera angle makes it feel as though I am floating above myself – as in an out-of-body experience, which I suppose this is! It is disorienting how the frame jerks around with his head movements.

Future Self-Portrait

It is uncomfortable to see so much of myself on video, with its less than flattering angles and lighting. Much like the confusion of hearing your own voice on a recording (it never sounds quite as it does in your head), it felt strange to see my idiosyncrasies – mannerisms and facial expressions – played out in front of me. I feel particularly vulnerable seeing how my eyes sometimes remain closed while I am talking – something I am unaware of doing in the moment. Also – I seem so short! All in all, I think this was a successful experiment.

Unit 2: Future of the Mind – AI

For our latest brief, we are to explore the Future of one of the following: the Mind, Body, Work, Play, Home or Travel. I have chosen to pursue the Future of the Mind, since it relates closely to the Philosophy of Mind I studied during my degree and to the consumer behaviour I have researched in my career. One avenue I wanted to explore is the notion that a future mind might be one of the artificial intelligences (AI) that are becoming ever more sophisticated today.

Predictive text is probably the most common AI interaction we have – and it becomes more sophisticated as technology progresses. What started as ‘autocorrecting’ typing on numeric keypads using word disambiguation has grown into keyboards that predict likely words, emojis or actions you may want to take based on the context of your and your messaging partners’ recent messages: they will offer a shortcut to add a diary entry if a time or date is suggested for a meet-up, or a cake emoji if you are wishing someone a happy birthday. In fact, the system no longer requires you to enter any words yourself. Drawing on a body of knowledge built from the interactions of all users as well as your own, it can predict, to a certain extent, likely phrases and sentences that might come up.
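Out of curiosity about the mechanism, here is a minimal sketch of the idea behind next-word suggestion: a toy bigram model trained on a handful of invented messages. This is not how any particular keyboard actually implements it (real systems use far larger models and much richer context), and the corpus and function names are my own, but the principle of ranking likely next words by observed frequency is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word, k=3):
    """Return the k most frequent followers of prev_word, like a keyboard's suggestion bar."""
    return [word for word, _ in counts[prev_word].most_common(k)]

# A tiny invented 'message history' standing in for the keyboard's training data.
history = [
    "I hope you have a good day",
    "I hope you are well and have a good day",
    "I hope the meeting goes well",
]

model = train_bigrams(history)
print(suggest(model, "have"))  # ['a'] - the only word observed after 'have'
print(suggest(model, "hope"))  # ['you', 'the'] - 'you' has followed 'hope' more often
```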

XKCD webcomic https://xkcd.com/1427/ – the predictive text model has not learned the context of these movie references… yet.

Since the increasing sophistication of AI is often touted as leading us towards fully sentient, self-aware AI in the future (which might then be considered persons or minds in their own right), I was interested in exploring generative art, using these models to create works that might reveal the state of such proto-minds.

First I experimented by continually pressing the left-hand suggested word until I had a complete text (below), which I then sent to my boyfriend. I especially like how the model has fallen into a loop on the phrase ‘and I hope you have a good day’. That phrase is a cursory sort of nicety that serves more as a signal that a conversation is coming to a close than as an indication of genuine feeling – i.e. I hope you have a good day, since I don’t anticipate us interacting again for the remainder of it. The fact that the sign-off gets repeated undermines that function, stripping it of even this meaning. The model also fails to give us a full sentence; the breaking of syntactic convention makes this read more like a meditative poem or song lyric, and the repetition of that phrase reinforces the effect. It’s interesting, too, that the arrangement of the repeated words appears to create diagonals across the block of text (wanna, day, you, have, good, day), almost like the creation of a pattern.

And I hope you have a good day – the first predictive text generated using my phone

It’s interesting, too, to think about what the ‘I’ in this poem might signify – is it me? Is the model adopting my voice? Or does the ‘I’ refer instead to the predictive model itself?
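As a side note on why the model loops: here is a sketch of the dynamic, assuming (purely for illustration, with an invented transition table rather than my phone’s actual model) that the first suggestion for each word is fixed. Always accepting it traces a single path through the model’s word statistics, and as soon as that path revisits a word it has already produced, it must repeat forever.

```python
# An invented most-likely-next-word table, standing in for the keyboard's learned statistics.
most_likely_next = {
    "i": "hope", "hope": "you", "you": "have", "have": "a",
    "a": "good", "good": "day", "day": "and", "and": "i",
}

def press_first_suggestion(start, presses=20):
    """Repeatedly accept the first suggestion, as in the experiment above."""
    words = [start]
    for _ in range(presses):
        nxt = most_likely_next.get(words[-1])
        if nxt is None:
            break  # no suggestion learned for this word
        words.append(nxt)
    return " ".join(words)

print(press_first_suggestion("i"))
# -> "i hope you have a good day and i hope you have a good day and ..."
```

Real keyboards draw on much more context than the previous word, so the loop is looser in practice, but the same greedy dynamic seems to be what produces the repeated sign-off.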

Later in the day, my boyfriend and I experimented with sending these predictive messages back and forth between our phones, to see how this shared context might affect the conversation between the predictive models. The text in the green bubbles originated from my phone, and the white from his.

It’s intriguing that here again, now from my boyfriend’s predictive model, we see a repeated phrase: ‘and then I will have to go to pick up the kids tomorrow’. We do not have any kids, so this context must come from the phrase having been observed among other messaging users – though it’s interesting that it first appeared in my text (‘let us get the kids’) and that my boyfriend’s model then repeated it. Generally the conversation between the models seemed mostly to revolve around arranging a meeting time and planning for tomorrow. It’s interesting, too, that they both made slight use of emoji, though these could hardly be signifying emotion in themselves.

Intriguingly, many of these messages are related to future events/future planning.
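To illustrate the kind of feedback at play when two predictive models reply to one another, here is a toy sketch in the same spirit: two invented transition tables (labelled phone_a and phone_b, in no way the real models on our phones), each replying greedily from the last word of the message it receives. With deterministic tables, the exchange quickly settles into a fixed cycle of two messages bouncing back and forth.

```python
# Two toy 'phones', each with its own invented most-likely-next-word table.
phone_a = {"tomorrow": "i", "i": "will", "will": "pick", "pick": "up",
           "up": "the", "the": "kids"}
phone_b = {"kids": "are", "are": "busy", "busy": "so", "so": "see",
           "see": "you", "you": "tomorrow"}

def reply(table, seed, max_words=10):
    """Greedy continuation seeded by the last word of the incoming message."""
    words = [seed]
    while len(words) < max_words:
        nxt = table.get(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

message = "see you tomorrow"
for turn in range(4):
    table, sender = (phone_a, "A") if turn % 2 == 0 else (phone_b, "B")
    message = reply(table, message.split()[-1])
    print(sender + ":", message)
# A: tomorrow i will pick up the kids
# B: kids are busy so see you tomorrow
# A: tomorrow i will pick up the kids
# B: kids are busy so see you tomorrow
```

The real exchange is far less rigid, since the phones draw on whole conversations and personal history, but the sketch captures why phrases can echo back and forth between the two models.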