Dialogic Ethics with AI?
Where Do Ethics Come From?
Robert Axelrod’s famous The Evolution of Cooperation (1984) involved computer tournaments of the repeated Prisoner’s Dilemma—a game where two players can either cooperate or defect. Defection gives the highest short-term payoff if the other cooperates, but mutual cooperation yields better long-term outcomes than mutual defection.
What Axelrod showed was that cooperation can emerge naturally over time. In his simulations, strategies like Tit for Tat—which always started by cooperating and then reciprocated whatever the other player did—outperformed pure defection. Over many rounds, patterns of behaviour that resemble virtues such as reciprocity, reliability, and even group loyalty proved more stable and more successful than selfishness. The key point is that these apparent ethical virtues arose pragmatically because they worked. They helped groups survive and flourish.
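To make the mechanics concrete, here is a minimal Python sketch of a repeated Prisoner’s Dilemma round-robin in the spirit of Axelrod’s tournaments. It is an illustration, not his original code: the payoff values (5, 3, 1, 0) are the conventional ones, and the small field of strategies (Tit for Tat, a grudge-holder, Always Cooperate, Always Defect) is my own choice rather than the strategies actually submitted to his tournaments.

```python
# Minimal sketch of a repeated Prisoner's Dilemma round-robin,
# in the spirit of Axelrod's tournaments (illustrative, not his original code).
# Conventional payoffs: mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0.

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def grudger(opponent_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return 'D' if 'D' in opponent_history else 'C'

def play(strategy_a, strategy_b, rounds=200):
    """Play a repeated game and return the total score for each side."""
    seen_by_a, seen_by_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {
        "Tit for Tat": tit_for_tat,
        "Grudger": grudger,
        "Always Cooperate": always_cooperate,
        "Always Defect": always_defect,
    }
    # Round-robin: each strategy plays every strategy (itself included) once.
    totals = {name: 0 for name in strategies}
    for name, strategy in strategies.items():
        for opponent in strategies.values():
            own_score, _ = play(strategy, opponent)
            totals[name] += own_score
    for name, total in sorted(totals.items(), key=lambda item: -item[1]):
        print(f"{name:>16}  {total}")
```

With this particular field, the reciprocating strategies (Tit for Tat and the grudge-holder) finish ahead of pure defection. The exact ranking always depends on who else is in the tournament, which was part of Axelrod’s point about how cooperation spreads through a population.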
Axelrod’s research gives empirical support to something the social philosopher Jürgen Habermas was arguing at around the same time. In the 1970s and 80s, Habermas developed what he called discourse ethics. His idea was that whenever people enter into genuine dialogue—dialogue oriented not toward manipulation but toward mutual understanding—they are implicitly committed to certain ethical rules. Everyone must be allowed to speak. Everyone must be listened to. Arguments must be supported by reasons and open to challenge. No one can be excluded.
Habermas argued that ethics emerges from within communicative practice itself. The very act of trying to persuade another already entails recognising that you too could be persuaded in return. This mutual vulnerability presupposes values of equality, reciprocity, and honesty. Because social life depends on communication, these ethical presuppositions are not optional conventions but necessary conditions of dialogue itself. Discourse ethics, in this sense, expresses what is always already implicit whenever we engage in genuine communication.
From Theory to Practice and Back Again
In our classroom research, however, we discovered that simply setting out ground rules like “everyone has the right to speak” was not enough. Left to themselves, children often stayed silent, or dominant individuals continued to control the talk. To make dialogue work, we had to go further. We encouraged pupils to ask each other directly: What do you think? Why do you think that? More importantly, we wanted them to ask with genuine curiosity and concern.
This was not just procedural but relational. We started with the explicit aim of teaching effective ways of using words together and we ended up teaching and cultivating virtues: care, reciprocity, respect, integrity, and courage. When these virtues were present, the quality of dialogue changed. The children listened more attentively, built on each other’s ideas, and were able to think together in ways that none of them could have achieved alone.
This classroom experience corrects Habermas’s theory. His discourse ethics is insightful but too abstract to work in practice on its own. Making it work requires an educational intervention: dialogue has to be nurtured, scaffolded, and sustained. Virtues are not just presupposed; they have to be taught.
This is what I call dialogic ethics. It shares with Habermas the conviction that ethics emerge from within dialogue. But it insists that for dialogue to succeed, participants must develop and embody certain virtues. These virtues are not arbitrary. They make sense pragmatically: they are the conditions under which dialogue works and through which communities can flourish. That is why we find versions of them in cultural traditions across history.
Extending Dialogic Ethics to AI
Now, with the rise of AI, a new dimension emerges. My colleague Dr Xuanning Chen noticed something striking while we were experimenting with students using AI in learning tasks. She observed what she called mutual accountability: the students were accountable to the AI just as the AI was accountable to them. This flips the usual framing of AI ethics, which focuses almost entirely on making AI more responsive to human needs through guardrails, safety measures, and alignment techniques. Those concerns matter, but they address only one side of the relationship.
What Dr Chen discovered was simpler and more surprising. When students treated the AI badly—when they dismissed its contributions, mocked it, or tried to “bully” it—they didn’t get very good results. When they engaged it respectfully—listening carefully, asking open-ended questions, following up thoughtfully—they got much better results. The quality of their engagement directly shaped the quality of what they achieved together.
This is fascinating because it reveals something important about the nature of dialogic ethics. It is not about respecting the inherent dignity or consciousness of the AI; we know these systems are algorithms, not persons. But if we think of dialogic ethics pragmatically—as the rules and virtues that create the best dialogue, producing the most truth and the most insight—then it makes perfect sense to extend the framework to include AI.
The point is not to anthropomorphise the AI or pretend it has feelings. The point is that if you want to open up a genuine dialogic space, you have to treat every participant—human or artificial—as worthy of serious engagement. If you dismiss, abuse, or manipulate the AI, you close down that space. If you listen with care, respect, humility, and curiosity, you open it up.
The same dialogic virtues apply. Integrity, honesty, intellectual humility, reciprocity, curiosity: all of these improve the quality of dialogue, whether it is human-to-human or human-to-AI. And if the quality of the dialogue improves, then so does its outcome. You get better answers, better insights, and better collective thinking.
Educating Ethics for the Hybrid Future
Seen this way, the ethical question of AI is not only about how AIs should treat us. It is also about how we should treat them—not because they deserve moral consideration as beings with feelings, but because our treatment of them shapes the dialogue, and the dialogue shapes the outcomes we achieve together.
This is the surprising but powerful conclusion: dialogic ethics extends to AI on purely pragmatic grounds. We cultivate dialogic virtues not to honour the AI as if it cares what we think, but to improve the quality of our collaboration. These virtues are tools that work—they produce better thinking, better solutions, and better collective intelligence.
For educators, the implication is clear. If we want students to think well with AI, we must teach them the same dialogic virtues that enable good human dialogue. If we want a flourishing future with AI, we must not only regulate the machines. We must also cultivate ourselves, and our students, in the dialogic virtues that make productive dialogue possible—whosoever or whatsoever our partners may be.


Integrity, honesty, intellectual humility, reciprocity, curiosity: these virtues improve the quality of dialogue, whether between people or between people and AI. And they are not simply presupposed; like any fundamental skill, they have to be taught and learned through deliberate educational intervention.
“Respect and sincerity in any form of communication, with any entity, are the key to higher forms: cooperation, togetherness, society.🐬⚜️🐉”