Make your chatbot likeable with the right voice and tone
Author: Ineke van Gelder
Published: 26 June 2017
Reading time: 7 minutes
Anyone who contacts an organization via chat quickly notices whether there is a person or a bot on the other end. Does the other party misunderstand your question and ask you to repeat it (several times)? Do you get a joke or an apologetic response? Then we often conclude that we are dealing with a bot.
The way organizations make their bots respond to questions determines how users experience the dialogue. And although more and more bot platforms come equipped with artificial intelligence, only a few bots can properly answer the widely varying questions users ask. Nothing is more annoying than a chat in which the other party does not understand your question and, instead of asking follow-up questions, keeps indicating that it does not understand you.
Bots that are not equipped with artificial intelligence will therefore have to appeal to their users' sense of forgiveness through their language use. Forgiveness that keeps the user from dropping out, so that they continue the chat because they still find the dialogue effective and fun. But how do you achieve forgiveness with language?
Tone and voice
Recently we experimented with a team on a simple chatbot for making an appointment. Beforehand, we had worked out the bot's personality (voice), the constant factor in everything we would make the bot do, as a persona with goals, character, needs, and aspirations. In the same way, we had developed personas for the target group.
The bot's personality is reflected in, among other things:
- The name;
- The avatar;
- The flexibility the bot shows to adjust previous answers, stop or resume the conversation.
In the elaboration of the dialogue, we gave substance to the corresponding tone in language use, choice of words, and intention. The tone is noticeable in all messages from the bot, but it is tailored to the phase of the dialogue: introduction, question, detailing, solution options, result, and conclusion. It is also tailored to the user's emotion. Responses such as "I don't want to say that", "Never mind", and "Why do you want to know?" call for a different tone than "Next Wednesday at 10:00, please." The tone is reflected in:
- The opening sentence with which the bot addresses you;
- The placeholder text in the input field during the dialogue;
- How the bot indicates what it expects from you (fixed answer options or open questions);
- How the bot reacts when it recognizes questions or answers, and especially how it behaves when it does not understand what you mean.
Inspired by Mailchimp's Voice & Tone Guide, we chose responses that match the emotion of both the bot and the user.
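To make this tailoring concrete, a tone mapping like the one described above can be sketched as a simple lookup from dialogue phase and detected user emotion to a response template. This is a minimal illustration, not the setup we built; the phase names, emotions, messages, and the `pick_response` helper are all hypothetical.

```python
# Sketch of tone selection: pick a response template based on the
# dialogue phase and the user's (detected) emotion. All phases,
# emotions, and messages here are hypothetical examples.

RESPONSES = {
    ("question", "neutral"): "On which day would you like to meet?",
    ("question", "reluctant"): "No problem. A rough indication, like 'next week', is fine too.",
    ("conclusion", "neutral"): "Great, your appointment is set. See you then!",
}

def pick_response(phase, emotion):
    # Fall back to the neutral variant if no tone-specific message exists
    return RESPONSES.get((phase, emotion), RESPONSES.get((phase, "neutral")))

print(pick_response("question", "reluctant"))
print(pick_response("conclusion", "annoyed"))  # falls back to the neutral variant
```

The point of the lookup-with-fallback is that tone-specific messages are an overlay: you only write an extra variant where the emotion genuinely calls for one.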
Consciously incompetent
Because our bot was still young and not that intelligent, we chose a youthful, somewhat simple personality that camouflages its ignorance with jokes. This allowed the bot to apologize for input it didn't understand without breaking character. The assumptions:
- A consciously incompetent chatbot can jokingly compensate for its incompetence.
- A consciously incompetent chatbot encourages forgiveness in users.
You see this with many chatbots that are based on open question input.
The first findings
In our usage study, participants consistently described the bot's personality as young, tongue-in-cheek, and casual. The bot's tone appeared to be in line with the personality, but at times it worked and at other times it didn't work at all. A number of things stood out:
- The bot's youthful language and jokes made users more forgiving in case the bot didn't understand what they meant.
- Informal language is appropriate for a bot that doesn't understand everything.
- Humor is a delicate matter (in bot language too); sometimes it works and sometimes it doesn't. Several funny reactions in succession ("Have you turned your watch upside down?", "That sounds like another leap day.") quickly evoke irritation.
- Certain language ("My GPS can't find that location", "Baarn it is.", "Hi Robin, nice to have you here") is quickly associated with someone, or a bot, in a certain age category.
- Basic things such as the format in which the bot presents a date, time, period, or address also partly shape the personality. A raw system date and time in a response, as in "Okay, you have an appointment on 2017-06-14 at 01:00:00.", doesn't match the bot's tone in the rest of the dialogue.
- The bot's reaction speed also determines the experience. If the bot reacts very quickly, it appears less human.
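The last two findings are straightforward to address in code. The sketch below is a hypothetical illustration, not our implementation: it renders the appointment in a human-friendly format instead of the raw system format, and pauses briefly before sending so the bot doesn't answer inhumanly fast. The `friendly_datetime` and `typing_delay` names are invented for this example, and the weekday/month names assume an English locale.

```python
import time
from datetime import datetime

def friendly_datetime(dt: datetime) -> str:
    # "Wednesday 14 June at 10:00" instead of the raw "2017-06-14 01:00:00"
    return f"{dt.strftime('%A')} {dt.day} {dt.strftime('%B')} at {dt.strftime('%H:%M')}"

def typing_delay(message: str, chars_per_second: float = 30.0, cap: float = 2.5) -> float:
    # Pause before sending, as if the bot were typing; longer messages
    # get a longer delay, capped so the user never waits too long.
    delay = min(len(message) / chars_per_second, cap)
    time.sleep(delay)
    return delay

appointment = datetime(2017, 6, 14, 10, 0)
message = f"Okay, you have an appointment on {friendly_datetime(appointment)}."
typing_delay(message)
print(message)  # Okay, you have an appointment on Wednesday 14 June at 10:00.
```

Both tweaks are cheap, and both push the same lever: they keep the bot's surface behavior consistent with the personality the rest of the dialogue projects.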
Clone the tone
To obtain a better understanding of the effect of the bot's tone on the user's forgiveness, we experimented with two tone variants:
- Youthful, tongue in cheek
- Serious, neutral
Following are four typical situations: the customer emotion, the two variants of the tone used, and the conclusion from our user research. This concerns situations in which:
- The user sends an unexpected message to the bot.
- The user makes a typo or uses a format that the bot does not recognize.
- The user gives an answer that is not recognized and the bot continues to ask until an answer comes that is recognized.
- In its response, the bot both confirms the user's answer and asks a follow-up question.
For each situation we have put the emotion and the tonal variations of the bot side by side.
Situation 1: An unexpected message from user
This situation occurs if the user does not answer the question posed by the bot, the user brings up an unexpected topic, or confronts the bot with comments or questions.
Dialogue A
- User intention (experimental, challenging): "I want to meet on the moon."
- Chatbot (youthful, tongue in cheek): "I'm only a bot, and this concept is really new to me."
- Chatbot (seriously neutral): "Sorry, I don't quite understand what you mean. If you greet me one more time, we'll start again."
Dialogue B
- User intention (experimental, challenging): "How are you?"
- Chatbot (youthful, tongue in cheek): "If you greet me one more time, we can try again."
- Chatbot (seriously neutral): "I'm sorry, but I don't know where that is. Please choose a Dutch municipality, or we can meet at Informaat. Which is best?"
Dialogue C
- User intention (experimental, challenging): "How did you get those green ears?"
- Chatbot (youthful, tongue in cheek): "That's not on the map. We can meet in a Dutch municipality, or else at Informaat?"
A little humor works well in situations like these. The 'youthful' variant fits better with the challenging, investigative question of the user.
Situation 2: Typo or unknown format
This situation occurs if the user accidentally makes a typo in a sentence or uses a notation format that the bot does not recognize.
Dialogue A
- User intent (sincere, accidentally made a typo): "Wendsday."
- Chatbot (youthful, tongue in cheek): "Sorry, but that sounds like another leap day. Would you like to try again?"
- Chatbot (seriously neutral): "Sorry, I don't understand this date. Would you like to try again?"
Dialogue B
- User intent (sincere, accidentally makes a typo): "Tuesday two half of the day."
- Chatbot (youthful, tongue in cheek): "I think you have your watch upside down. Would you like to try another time?"
- Chatbot (seriously neutral): "Sorry, it's just me. Would you like to indicate a day or date?"
Here the youthful, informal element quickly arouses irritation. The user does something accidentally. The bot may then indicate that the input is not recognized, but not too often and with an explanation of what does work.
Situation 3: Recurrence
This situation occurs if the user's answer is not recognized after several tries and the bot persistently repeats the question (in slightly different ways).
Dialogue A
- User intention (sincere, convinced of the clarity of their own question/answer): "In 8.5 weeks."
- Chatbot (youthful, tongue in cheek): "I'm just a bot. Would you please name a date?"
- Chatbot (seriously neutral): "Sorry, I do not understand that. Would you please specify a date?"
Dialogue B
- User intention (sincere, convinced of the clarity of their own question/answer): "I don't care."
- Chatbot (youthful, tongue in cheek): "Sorry, what time please?"
- Chatbot (seriously neutral): "I'm sorry, but you really have to specify a day and month."
Repeating the same question in slightly different wording cannot hide the fact that the bot can handle only a limited number of answers. In these situations, even if the bot can't do anything with the answer, it is better to opt for a neutral response, or to accept what the user says and continue to the next step in the dialogue.
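One way to implement that advice is an escalation counter: ask a clarifying question once or twice, then stop repeating yourself and move on. A minimal sketch, with hypothetical messages and a made-up `handle_unrecognized` helper, not the mechanism we actually used:

```python
# Sketch of escalating fallbacks for unrecognized answers. After a
# limited number of retries the bot accepts the situation and moves on
# instead of repeating the same question.

MAX_RETRIES = 2

def handle_unrecognized(attempt: int):
    # Returns (message, give_up) for the given retry attempt (0-based)
    if attempt == 0:
        return ("Sorry, I don't understand that. Could you give a date, like '14 June'?", False)
    if attempt < MAX_RETRIES:
        return ("I still can't place that. A weekday such as 'Wednesday' also works.", False)
    # Stop asking: continue the dialogue rather than loop forever
    return ("Alright, let's move on. We can fine-tune the date later.", True)

for attempt in range(3):
    message, give_up = handle_unrecognized(attempt)
    print(attempt, give_up, message)
```

Note that each retry message also explains what input *does* work, which the study suggested matters more for forgiveness than the apology itself.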
Situation 4: Confirmation and follow-up question
Once the conversation between bot and user has started, the bot's messages are often a combination of confirming the last choice and asking a follow-up question. This makes the bot's messages quickly become long.
Dialogue A
- User intent (can adjust answers and achieve the end goal): "Oops"
- Chatbot (youthful, tongue in cheek): "Nice, we know Mengvoeders United well. Which account manager would you like to make an appointment with?"
- Chatbot (seriously neutral): "We would like to make an appointment with you, Herman. To be prepared for that appointment, I would like to know a little more about your background. Which organization do you work for?"
Dialogue B
- User intention (can adjust answers and achieve the end goal): "I would like to choose another date."
- Chatbot (youthful, tongue in cheek): "You certainly have something to tell each other about customer journeys, Herman. Your appointment is on May 13, 2017 at 11:00 in Baarn. Before you go, what did you think of this conversation?"
- Chatbot (seriously neutral): "Herman, thank you for making an appointment with us. Which organization do you work for?"
Allow the user to edit a previously given answer at any point in the dialogue. Or summarize the user's choices after a few questions rather than after each one. That way you don't need to check after every step whether the bot and the user still understand each other.
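This pattern can be sketched as a small answer store: every answer is kept in a dictionary keyed by question, so editing an earlier answer is simply overwriting its value, and a summary can cover several choices at once. The `AnswerStore` class and its field names are invented for this illustration.

```python
# Sketch: keep all answers in one place so the user can revise any of
# them later, and summarize several choices at once instead of
# confirming after every single question.

class AnswerStore:
    def __init__(self):
        self.answers = {}

    def record(self, question: str, answer: str):
        # Overwriting an existing key *is* the edit operation
        self.answers[question] = answer

    def summary(self) -> str:
        return ", ".join(f"{q}: {a}" for q, a in self.answers.items())

store = AnswerStore()
store.record("date", "13 May")
store.record("location", "Baarn")
store.record("date", "20 May")  # the user changed their mind
print(store.summary())  # date: 20 May, location: Baarn
```

Because the store is the single source of truth, the confirmation message can be generated once, near the end, instead of after every question.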
Continue to positively surprise your users
The informal, funny tone of a consciously incompetent bot leads to forgiveness especially at moments in the dialogue where the user is clearly pushing the bot to its limits. With a funny bot you can camouflage its limitations well at these moments, as the user is more inclined to accept them.
In the more task-oriented phases of the dialogue, this language use of a consciously incompetent bot fits less well and even provokes irritation. Limitation problems are better solved in these phases by applying neutral apologies. Forgiveness is achieved here rather with a clear explanation of what the bot can and cannot handle.
In addition, keep in mind that many chatbot users enjoy exploring the limits of a chatbot with unexpected questions and responses. Make sure that your bot also continues to positively surprise that group. Perhaps in the same way a good customer service representative does.
This post originally appeared on Frankwatching (June 24, 2016).