People program AI, so what happens when they get hacked?

When it comes to AI, do we live in a hack-or-be-hacked world?

Artificial intelligence (AI) is new, but it’s made in the image of something hundreds of thousands of years old: the human brain. And if you ask one psychologist, building AI in our own image might actually be doing the technology, and ultimately people, a disservice.

“Humans are messy, complex and we are very nonlinear. Our brains are a very primal technology, but it’s the technology that we use as a model and use to create many other types of digital technology,” said Patrycja Slawuta, a researcher, social psychologist and founder of Selfhackathon.

Slawuta’s research focuses on the bridge between human behaviour and technology, something she calls applied technology, or psyche-tech.

“When you think about human behaviour, you can think about it as an equation: human times environment or context,” she said.
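
Her formulation echoes Kurt Lewin’s classic equation from social psychology, usually written as B = f(P, E): behaviour (B) is a function of the person (P) and their environment (E).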

AI: An optimist or a pessimist?

When it comes to AI, people usually fall into two camps: those who say the technology will save us, and those who say it will destroy us.

“I think the truth lies somewhere in between. Technology will multiply what our human technology is. As someone told me once, ‘We’ll get the AI we deserve’,” Slawuta said.

“AI is a great enabler, but it’s always been a double-edged sword. It can be used for good and it can be used for bad.”

It’s easy to think of AI as unbiased, but the reality is that humans create machines, which means human biases inevitably creep into AI.

“So I think it’s really crucial that we understand ourselves and how we are built. Our behaviours and psychology will only be multiplied by technology.”

Human behaviour gets exploited and used against us all the time. One of the most high-profile instances is the alleged interference in the 2016 US presidential election. But the way Slawuta sees it, “it’s hack or be hacked.” Influencing human behaviour in this way has always happened, but technology has accelerated the process and raised the stakes.

The tech stack

To better merge technology with human behaviour, she said, people need to understand the four components of the human operating system. First is the head, the software component, and the place where our biases live.

“Psychologists have identified about 220 human biases, and more get added all the time. Different biases will affect different people based on their current context – where they work, who they routinely come into contact with. It’s about observing and understanding an individual’s thought architecture,” Slawuta said.

[Photo: Patrycja Slawuta (far left).]

The second is the heart, which is the emotional operating system. Then there is the body, which “is very interesting and sometimes the forgotten part of the human operating system.”

“It sends information to ourselves and to others. It has five senses, and those senses affect us profoundly. Even touch is such a strong conveyor of information,” she said.

And the final building block is others, because humans are hardwired to be social. As digital technologies continue to rise, keeping a human face on the technology we create and use will matter more than ever.

The human operating system

This debate is matched by a growing appetite among those working in the space to collaborate with others on the human element of AI. Slawuta said she routinely fields questions about the human operating system and how it works.

According to her, the machine world mirrors the human world in many ways, but particularly in how we learn. We download apps in the form of learning new skills; we have redundancies and bugs in our wiring; and we run on a form of code that has to be continually upgraded.

When Slawuta presents this theory at conferences, the question of who programmes the programmer inevitably comes up. This is the space she likes to play in: what she calls “what happens when human technology meets digital technology.”

“The bigger issue is who hacks the hacker? For people who write code, they run on a sort of code themselves, and if you’re not aware of your own code something or someone will hack you, and that’s what we’re experiencing right now,” she said.

As the debate about the good and bad of AI technology rages on, more companies working in this space are investigating the ethical implications of humans’ close relationship with the technology. Questions abound, though. Who gets to decide what is the right or wrong behaviour for a machine? What would AI with a conscience look like?

One thing that is clear, said Slawuta, is that it’s time for the conversation to move past focussing on what AI does and towards how it might ripple through society.

“I think there is a growing opportunity for historians, psychologists, philosophers and more to step in and contribute to this discussion.”

Full STEAM ahead

One trend she highlights as a possible solution is the pivot from STEM to STEAM programmes. Incorporating the arts and humanities into science, technology, engineering and maths education is, according to her, a crucial step in mitigating the effects of bad AI while multiplying the effects of good AI.

“If those developing AI and software don’t understand the human element, it can easily be misused. Most cyber security threats sneak into a system because of human error. Understanding that and incorporating it into how we educate engineers or others will only grow in importance.”

Where this field goes from here is anybody’s guess, she said, but collective human behaviours will continue to influence how technology is built and used. And the risk of doing nothing to bridge the gap is too high.

“Forgetting how powerful those negative forces are, and how quickly things can turn around, I think we’re playing a dangerous game there.”
