Does Elon’s Neuralink have our minds fooled?

Anna Ptasińska
4 min read · Jun 7, 2023


Synthetic telepathy, neuro-imaging, brain-reading, surveillance… whatever you call it, the age of AI is here. And the roads we take, all too often dictated by the algorithms of these new technologies, can be fun and spark our curious minds, yet they can also lead somewhere very dark.

On Monday, Reuters reported that Musk's startup Neuralink is now worth about $5 billion, based on private stock trades. America's FDA has given the sign-off for human clinical trials, with consumer availability still perhaps a decade away. For now, the product is in the human-testing phase. And even though legal questions surrounding the company's animal testing remain unresolved, these were set aside, along with concerns about exclusionary algorithmic biases…

Love him or hate him, Elon Musk has propelled the tech industry into uncharted territory. Dark, unseen, and redefining how information is disseminated, with parts of it left unchecked, this new territory can leave a problematic mark on humanity.

The ethics behind Neuralink

Synthetic telepathy is not new in many ways. Back in 2008, US Army funding was allocated to research a form of synthetic telepathy through brain-computer interfaces. There is no denying that such inventions can help humankind in medicine and science. Imagine a person operating a prosthetic limb through brain activity alone, with signals fed back to the brain so the prosthetic can even convey a sense of touch. An excellent achievement for helping people, especially those with neurological disorders or those who have experienced severe trauma. Neuralink aims to help with paralysis, blindness, mental illness, and diseases like dementia.

But as with all good inventions, what if it falls into the wrong hands? Neuralink aims to create an invasive BCI chip implanted in the brain (no news there), but this could have dire consequences for targeted people who have not given consent. Imagine someone intruding on your brain and manipulating your thoughts, surveilling your dreams without permission, 24/7, with no way to stop it. I know this sounds like sci-fi horror, but this little robot is already peeking around the corner, watching you, and soon, perhaps, inserting itself into you.

What does this mean for information?

We are in an information war. Elon Musk's Twitter takeover was a punch in the face for many, leftists and media houses included. Over the years, Twitter served as an arena where marginalized and misrepresented voices could gather and form a collective, as we saw through #MeToo, #RhodesMustFall and #BLM. Before, mainstream media held little space for these voices; Twitter's algorithms helped surface new narratives in a space that eventually became mainstream, and these arenas created social change at the grassroots level.

Now let that sink in…

Elon's Twitter takeover left the company culture in complete turmoil. Its head of trust and safety resigned just days ago. Elon's right-leaning politics have cast a deafening silence over the once politically charged platform, with media houses like NPR exiting the scene earlier this year. All in the name of freedom of speech!

[Image: Elon Musk with sink at Twitter HQ]

Twitter's employee demographic has been predominantly male and white since 2020. African American and Latinx employees remained underrepresented, making up only 6.5 and 5.4 percent of the workforce respectively, according to Statista (Nov 28, 2022). Gender representation is no better: non-binary people account for just 1% of the company. And Elon has made his views on the gender spectrum, including the rights of trans people, very clear. As for leadership, women hold a dismal 39% of leadership roles at the company and 30.9% of tech roles, compared with 59.9% and 67.1% for men respectively (data from March 2022).

So what does this mean for companies like Neuralink?

Parvathi A. Subbiah wrote in an article that algorithms are not inherently biased; after all, they just follow code, right? But who writes this code? What is the coder's agenda? Have they considered cases that they themselves may never have experienced? Subbiah argues that the bias stems from the datasets, collated and interpreted by human beings, on which algorithms are trained, so the resulting models inherit interpretations that are skewed, misrepresentative and biased, as the sketch below illustrates. So if meeting rooms and arenas like Twitter have already been compromised, what does this mean for innovations like Neuralink and for consent? How will these algorithms help anyone once they are widely accessible and can fall into the wrong hands? And how do we help those already misrepresented by algorithms and media?
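
To make Subbiah's point concrete, here is a minimal Python sketch with entirely hypothetical data and group names (this is not Neuralink's or Twitter's code). A toy "hiring" model learns nothing but the rates in a skewed historical record, then faithfully reproduces the discrimination it was trained on:

```python
# A hypothetical illustration of dataset bias propagating into a model.
from collections import defaultdict

# Invented historical records: (group, was_hired).
# Group B candidates were hired far less often, regardless of merit.
training_data = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 20 + [("B", False)] * 80
)

def train_base_rates(records):
    """Learn per-group hire rates, the only 'signal' in this toy dataset."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group):
    """Recommend a candidate if their group's historical rate exceeds 50%."""
    return model[group] > 0.5

model = train_base_rates(training_data)
print(model)                # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))  # True  -- group A is recommended
print(predict(model, "B"))  # False -- group B is rejected wholesale
```

The toy code is deliberately crude, but the pattern is the one Subbiah describes: the model is not "biased code" at all; it simply optimizes against a dataset shaped by biased human decisions, and so reproduces them at scale.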

Algorithms are embedded in our everyday lives. Each time you pick up your mobile phone, they lead you into pockets of carefully curated and selected information. From Google to Instagram, what you see, and how you see it, has already been processed and predicted. Welcome to the wild wild web.

Written by Anna Ptasińska

Freelance Journalist, Video Editor & UX Designer | Living in Berlin, Germany | Interests in culture, art, society and the politics of it all.