Written by Amanda Turnbull, PhD Candidate, Osgoode Hall Law School
Artificial intelligence (AI) systems are transforming just about every aspect of our lives: smartphones keep us connected socially; GPS navigation systems get us places safely and on time; and other smart devices, like Nest learning thermostats, help save energy and reduce energy costs in our homes. AI is also redefining how we work, what our workforce looks like, and where our workplaces will be. All of this creates a complicated "future of work" discourse that is often replete with worry: we worry that AI will displace workers and fret that increased automation will affect some jobs more than others. It is not uncommon to hear the conclusion that AI is simply disruptive and that we need to quickly find ways to mitigate the disruption. But amidst all of this worry, we tend to overlook that AI is already here.
Take, for example, the auto industry: industrial robotic automation was a milestone in the industry's development in the 1960s, reducing error rates and enhancing the management of repeatable tasks.[1] In health care, advanced scanners and ultrasound have been helping doctors diagnose disease more accurately for some time. The human resources (HR) sector uses decision-support applications to help with the onboarding of new employees and with employee training and career advancement. These systems are designed to make people more efficient and better at what they do.
As the workforce continues to draw on more sophisticated AI, such as chatbots like "AMI" designed to streamline HR processes like recruiting, and virtual assistants like Alexa for Business, the question arises of how we integrate these workmates that emulate humans without creating other workplace problems.
The two terms, chatbot and virtual assistant, are often used interchangeably, but there is a difference, and in the spirit of getting to know our future colleagues, it is worth clearing it up. A chatbot, short for "chatterbot," is an AI application that simulates the conversation or "chatter" of a human being through text or voice. Its main function is to facilitate a useful conversation with the user.
Chatbots have arguably existed since the 1960s, but early versions were relatively unsophisticated, using pattern-matching techniques to simulate conversation through scripted responses.[2] Chatbots today, however, make use of much more sophisticated virtual conversation technology, known as natural language processing (NLP). NLP allows them to understand and interpret instructions, and permits users to speak to chatbots as they would to another human being. Chatbots also benefit from the learning capabilities of machine learning (ML), which uses algorithms to find patterns in data without explicit instruction. Through deep learning, a subset of ML modelled on the operation of neurons in the human brain, chatbots teach themselves how to have a conversation as they interact with users.
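To make the early pattern-matching approach concrete, here is a minimal ELIZA-style sketch in Python. The rules and replies are invented for illustration, not drawn from any historical system:

```python
import re

# Each rule pairs a regular-expression pattern with a scripted reply template.
# Historical systems such as ELIZA used far larger rule sets, but the
# principle is the same: no understanding, just matching and canned responses.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reply(user_input: str) -> str:
    """Return the first scripted reply whose pattern matches the input."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(reply("I feel overlooked at work"))
# -> Why do you feel overlooked at work?
```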
Yet despite all of these improvements, recent chatbot experiments have shown that they are still prone to producing offensive speech. Microsoft's chatbot "Tay," for instance, was released on Twitter in 2016. Modelled after a teenage girl's language patterns, Tay gained more than 50,000 followers and produced over 100,000 tweets. Her early tweets, as she greeted the world, were innocent, but within 24 hours of imitating her followers she had learned to be a racist, sexist and extremely distasteful chatbot, and Microsoft was forced to take her offline.
Virtual assistants, sometimes known as virtual agents, are essentially more advanced chatbots. Although they are similar in programming to chatbots, they can do more than hold a conversation. Virtual assistants serve as "assistants," emulating human interaction while carrying out a variety of tasks, such as resetting passwords or playing music from a streaming service. Examples include Siri on Apple devices and Amazon's Alexa.
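The distinction can be sketched in a few lines of code. In this minimal, invented example, the assistant does not just reply; it routes the user's request to an action (the two handlers are hypothetical placeholders, not any real system's API):

```python
# An invented sketch of the chatbot/virtual-assistant distinction:
# the assistant converses, but it also dispatches requests to tasks.
def reset_password() -> str:
    # Hypothetical placeholder for a call into a real HR/IT system.
    return "I've sent a password-reset link to your work email."

def play_music() -> str:
    # Hypothetical placeholder for a call to a streaming-service API.
    return "Playing your playlist."

def handle(utterance: str) -> str:
    text = utterance.lower()
    if "password" in text:
        return reset_password()  # performs a task, not just talk
    if "play" in text or "music" in text:
        return play_music()
    return "Sorry, I can only reset passwords or play music."  # chatbot-style fallback

print(handle("Can you reset my password?"))
# -> I've sent a password-reset link to your work email.
```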
Part of my research involves looking at how emergent technologies like chatbots and virtual assistants use language, and the implications of their speech for society. Language is natural for human beings; in fact, we are "hardwired" for language, which we speak, sign, write, read, and understand.[3] And when we use language, it does more than describe or simply transmit information; it is a form of action and effect.[4] More specifically, when we use language, it has literal meaning, social meaning, and consequences.
When we use conversational AI technologies like chatbots and virtual assistants, they have the potential to amplify our capabilities. Chatbots, for instance, may be ready and able to help HR departments with the task of recruiting employees. But chatbots also amplify our incapabilities, or our faults, such as producing offensive comments or hate speech and perpetuating gender bias. They also have trouble understanding nuances such as sarcasm. If I lived in an area that receives a lot of snow every year and responded to the statement "It's snowing in May" with "Great," a chatbot would not recognize the sarcasm in my response. Another component that chatbots have yet to perfect is empathy. To be truly conversational, chatbots must also contemplate the consequences of their speech; language is so much more than words strung together.
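The sarcasm problem is easy to reproduce with off-the-shelf sentiment tooling. Here is a minimal sketch using NLTK's VADER sentiment analyzer (my choice of tool for illustration; the article names none), in which the literal words score as positive while the sarcasm goes undetected:

```python
# pip install nltk
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()
reply_text = "It's snowing in May. Great."

# VADER scores the literal words; "Great" reads as positive, so the
# compound score comes out positive and the sarcasm is invisible to the model.
print(analyzer.polarity_scores(reply_text))
# e.g. {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': > 0}
```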
Chatbots like Tay and her successors are entirely dependent on training data from users. Researchers believe that exposing chatbots and virtual assistants to more data will enhance their social interactions and thus resolve their flaws. This solution, however, still depends on the data being fed in and on those who create that data in the first place. The tech industry, for example, is well known for its continuing struggle with a lack of diversity among the companies that create AI.
As we navigate this complex future of work discourse, we should not overlook the manner in which we integrate AI technologies like chatbots and virtual assistants. We need to take care that we continue to strive for a respectful and inclusive workplace, and that we do not exacerbate existing problems or create new ones. The price of making workplace communication more efficient through AI should not be a harmful communicator.
Amanda Turnbull, "The Future of Work: Integrating Bots and Humans in the Workforce" Canadian Law of Work Forum (May 17, 2020): https://lawofwork.ca/the-future-of-work/
[1] UNIMATE, the first mass-produced industrial robot, began work consisting mainly of spot welding at General Motors in 1961. The Stanford Arm, developed in 1969, had six degrees of freedom and could perform tasks that its predecessor could not. It was followed by the more versatile Silver Arm in 1974. See https://www.computerhistory.org/timeline/ai-robotics/.
[2] ELIZA was an early NLP computer program implemented by Joseph Weizenbaum in the mid-1960s at MIT. He designed the program as a way of showing the superficiality of communication between human and machine. Weizenbaum was, however, surprised by the number of individuals who attributed human-like feelings to the machine. See Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: W. H. Freeman, 1976).
[3] See generally Iris Berent et al, "Language Universals Engage Broca's Area" (2014) 9:4 PLoS ONE, DOI: <10.1371/journal.pone.0095155> (Broca's area is located in the frontal lobe of the brain and is involved in speech articulation).
[4] See J. L. Austin, How to Do Things with Words, 2nd ed (Cambridge: Harvard University Press, 1975).