When an AI converses like a human but operates like a machine, the biggest risk isn’t the technology—it’s our assumptions.

There is much talk about human-machine teams, but what does that really mean, and what are the risks and benefits?

One thing research has identified is that we attach emotions and behaviours to machines through anthropomorphism, and this can become a risk in how we design our robotic/AI teammates to look. Dr Helen Dudfield is interested in whether they should be clearly alien, to avoid triggering this tendency, or whether there is an advantage in the familiar when expectations and behaviours are deliberately designed in. And what does this mean when a robotic teammate or AI agent converses naturally with us but operates differently from our human assumptions?

Trust, teamwork and decision-making are all at play here, but so is solution design!