Perhaps the most visible applications of artificial intelligence today are “conversational agents” — chatbots or intelligent assistants that interact with people via voice or text.

Think consumer-facing applications like Google Assistant, Siri, Microsoft Cortana, and Amazon Echo. However, companies are fast becoming interested in how similar technologies can be used to enhance internal operations, notes a new report from KPMG, “How may A.I. assist you?”

Conversational agents can provide value in a number of different ways. For one, they can be trained to assist teams by observing and capturing team interactions. “As the bot learns more about the group’s patterns and activities, it becomes a virtual team member, with a photographic memory for past discussions, action items, tasks, and reminders,” the report states.

Such agents are most commonly used on collaborative platforms like Slack for tasks like scheduling meetings, translating text, and ordering lunch.

Second, conversational agents can streamline office operations. For instance, IT help-desk bots might lead employees through common multistep procedures such as password resets, meeting-room bookings, and content searches.
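Such multistep procedures are often modeled as a simple ordered dialog flow. The sketch below is a minimal illustration, assuming a hypothetical password-reset flow; the step names and prompts are invented for the example, not drawn from the KPMG report.

```python
# Minimal sketch of a help-desk bot guiding a user through a multistep
# procedure (a hypothetical password-reset flow; names are illustrative).

RESET_STEPS = [
    ("verify_identity", "Please enter your employee ID."),
    ("send_code", "A verification code has been sent to your email."),
    ("confirm_code", "Enter the code you received."),
    ("set_password", "Choose a new password."),
]

class HelpDeskFlow:
    """Walks a user through an ordered list of (state, prompt) steps."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def current_prompt(self):
        return self.steps[self.index][1]

    def advance(self):
        """Move to the next step; returns False when the flow is complete."""
        self.index += 1
        return self.index < len(self.steps)

flow = HelpDeskFlow(RESET_STEPS)
prompts = [flow.current_prompt()]
while flow.advance():
    prompts.append(flow.current_prompt())
```

A production bot would layer intent recognition and error recovery on top of a flow like this, but the ordered-steps structure is the core of most guided procedures.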

Third, these agents can be employed to deliver information. “For example, a ‘financial reporting bot’ can provide granular fiscal figures at month-close, reducing the demand on financial analysts to craft custom reports,” says KPMG. This type of bot also allows resources to be ramped up and repurposed faster, reducing the need for retraining.

The report sets forth four foundational capabilities for conversational agents deployed in the workplace:

They must learn. “A conversational agent would be useless if it could not grow alongside its employee counterparts. Without some signs of learning, employees will build mental walls around what the agent can and can’t do, potentially shrinking its user base and overall impact.”

They must fail usefully. The agent must know when a conversation is verging past the limits of its understanding and continually capture and catalog such events to ensure capability gaps are closed in the next round of training, KPMG notes.
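Failing usefully typically means two things in practice: declining gracefully when confidence is low, and cataloging the miss so the gap can be closed in retraining. The sketch below illustrates that pattern; the toy classifier, confidence threshold, and log are assumptions for the example, not details from the report.

```python
# Minimal sketch of "failing usefully": when intent confidence falls below
# a threshold, the bot declines gracefully and logs the utterance so the
# capability gap can be reviewed before the next round of training.
# (The classifier, threshold, and phrases here are illustrative assumptions.)

FALLBACK_REPLY = "I'm not sure I understand that yet; I've noted it for my trainers."
CONFIDENCE_THRESHOLD = 0.6
unrecognized_log = []  # stands in for a real review queue or database

def classify(utterance):
    """Toy intent classifier; a real bot would use a trained NLU model."""
    known = {
        "reset my password": ("password_reset", 0.95),
        "book a room": ("room_booking", 0.90),
    }
    return known.get(utterance.lower(), ("unknown", 0.2))

def respond(utterance):
    intent, confidence = classify(utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Catalog the miss so capability gaps are closed in retraining.
        unrecognized_log.append(utterance)
        return FALLBACK_REPLY
    return f"Handling intent: {intent}"

reply_ok = respond("reset my password")
reply_miss = respond("expense my lunch")
```

The logged utterances become the raw material for the next training round, which is what turns individual failures into capability improvements.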

They must be personalized. Employees are most likely to reuse a conversational agent that is accommodating — using appropriate language and tone, identifying the same priorities when making decisions, showing empathy for users when it’s needed, and performing efficiently when it’s demanded.

They must prioritize user experience. “At one time or another, conversational agents will be exposed to unexpected kinds of situations. In such events, the agent’s response must go beyond simple exception handling in order to preserve user rapport. Even if its response is unsatisfactory, it must be able to communicate the limits of its understanding.”

Aside from the need to incorporate those foundational capabilities into conversational assistants, companies face some other adoption challenges, KPMG points out:

  • Scarce training data: Even in a very large organization, an internal workforce will have a smaller user base than a popular consumer-facing application.
  • Risk concerns: In domains where conversational agents provide guidance, such as human resources or compliance, the company may want to review and vet all responses before bot deployment.
  • Accessibility: “The company should use inclusive design practices to ensure the bot interface is accessible to all potential users, regardless of working situation, capability, or language proficiency.”
  • Privacy: Conversational agents in the workplace learn by collecting feedback and behavioral data. Companies should be transparent about their use of data, give employees an opportunity to choose to participate, and ensure that employees can access their conversation history and temporarily turn off the bot’s listening capabilities, according to KPMG.

The report offers some additional pro-employee tips for bot development.

First, if a bot will replace humans in completing some tasks, “tell the humans first,” KPMG advises. Second, tell people that a bot is a bot: “It is counterproductive to hide from employees the fact that they are talking to a bot. Working with a bot should be viewed as a value add, not a dark secret.”

Third, show users when the bot has learned. “Employees are far more accepting of a bot’s failures to understand if they are able to see that the bot is improving,” the report says. “Incrementally adding new capabilities instills a sense of trust that the bot will be able to assist with their future requests.”

