We spoke with Dr. Brian Subirana, director of the MIT Auto-ID Laboratory. His Ph.D. is in artificial intelligence, and he teaches at both MIT and Harvard. At the MIT 2018 Crossroads event, Subirana spoke about conversational commerce. NextGen caught up with him to explore in more detail where conversational commerce is today and where it’s headed in the supply chain.
NextGen Supply Chain: We probably need to start at the beginning here. What exactly is conversational commerce?
Subirana: Without getting too deep too quickly, conversational commerce is the process of engaging with digital infrastructures through human-like conversations. If you have ever used Amazon’s Echo to order an item, you know what conversational commerce is but may not have called it by its name.
Instead, you think of it as just Alexa. And that’s only one common example. Apple’s Siri is another. Google has one too. Garmin does too. In fact, I was quite surprised at Crossroads that no one had heard of the Garmin Speak device when I talked about it. One may argue that it is only a rudimentary combination of conversational commerce with self-driving features, but that’s because there hasn’t been time yet to fully unfold the potential of the voice ecosystem.
NextGen Supply Chain: Part of the challenge here is that few people call conversational commerce by that name. Instead it’s just Alexa, as you said. And to most, it probably seems like a bit of a toy rather than what it is: an important supply chain technology of the future.
Subirana: Oh, it’s no toy. It’s a lot more than asking Alexa to play Frank Sinatra. In fact, conversational commerce has the potential to be a key player in the first, middle and last miles in the supply chain. Can anything be easier to use than your voice to order white beans, for instance? Then think about the use of conversational commerce to manage the picking and packing and shipping of those white beans. It’s hands free. It’s fast. It’s intuitive. That’s a pretty good start.
Then think about the consumer receiving that shipment of white beans and using conversational commerce to confirm delivery. There’s no handheld device for the delivery person to power up, hand to you, ask you to sign, and take back. With voice, you will hear “Did you get the white beans?” and you simply respond “Yes.” Can’t get much easier than that. Combine this with computer vision and we can get rid of telephones and handheld devices altogether.
And it’s not just for processing orders. Smart glasses with conversational commerce have long been used in closed-garden equipment maintenance in facilities worldwide. This is what we can call conversational commerce version 1.0. The supply chain is all about transactional efficiency among different players.
With conversational commerce, it’s easy to save at least an order of magnitude of time while increasing transactional efficiency and reducing costs. When interactions are done between the digital infrastructure and an open set of conversants, we have conversational commerce version 2.0 with Amazon Echo, Google Home and others.
NextGen Supply Chain: How close are we to all this being common rather than a novelty?
Subirana: We are definitely in the early stages. I expect that conversational commerce will be the first major application of artificial intelligence. And it will be an explosion. It just has too much going for it to be anything but major. But there’s a lot that has to happen first. To begin, all the devices out there now are pushing their own isolated ecosystems.
They are siloed applications. Siri and Alexa can’t talk to each other. For conversational commerce to reach its potential, that has to change. For a while, Alexa would not say “ok google” even if you asked it to repeat what you said. It was about the only thing it would not repeat – fortunately Amazon took it to school.
I feel we will need devices from different suppliers that can converse with each other as well as with people, and can talk to people in a consistent way. And the first step will be the development of standards.
Right now, I’m exploring with Target, OTTO Group, Intel, Capgemini, Cole and others to establish a standards approach. We’ve been at it for about a year now. We feel that a standardized language that enables engaging openly with various conversational commerce devices will bring what we call conversational commerce 3.0.
NextGen Supply Chain: Is there a version 4.0?
Subirana: Yes, when machines start conversing with humans without humans knowing they are talking to a machine. For a preview, check out the recently demonstrated Google Duplex, which makes calls on your behalf. We may eventually need to standardize if and when machines tell us they are machines.
NextGen Supply Chain: What’s a good example of something that needs to be standardized in the short term?
Subirana: Let’s start with the “wake command.” How do you start a conversation? Is it with your voice? With a button? Through a security device? And most importantly, can we have a “Target” wake word on any device? If so, how is such a word registered? How is it interpreted? This is not possible today. For example, with the Amazon Echo, one needs to ask Alexa to route commands to Target every time. And you really get tired of saying “Alexa” if you have eight Echo devices spread through your house, car and office like I do.
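To make the idea concrete, here is a minimal sketch of what a standardized wake-word registry might look like: a brand registers its wake word once, and any compliant device routes matching utterances to that brand’s handler without going through the platform’s own assistant first. All names and the API shape here are illustrative assumptions, not any real vendor’s interface.

```python
from typing import Callable, Dict, Optional

# A handler takes the spoken command (wake word stripped) and returns a reply.
Handler = Callable[[str], str]

class WakeWordRegistry:
    """Hypothetical cross-vendor registry mapping wake words to brand handlers."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Handler] = {}

    def register(self, wake_word: str, handler: Handler) -> None:
        # In a real standard, registration would also cover conflict
        # resolution and verification of who may claim a wake word.
        self._handlers[wake_word.lower()] = handler

    def route(self, utterance: str) -> Optional[str]:
        """If the utterance begins with a registered wake word,
        strip it and hand the rest to that brand's handler."""
        words = utterance.lower().split(maxsplit=1)
        if not words:
            return None
        wake = words[0]
        if wake in self._handlers:
            command = words[1] if len(words) > 1 else ""
            return self._handlers[wake](command)
        return None  # no registered wake word: the device stays silent

# A brand registers once; any device sharing the registry can then respond.
registry = WakeWordRegistry()
registry.register("target", lambda cmd: f"Target handling: {cmd}")

print(registry.route("target reorder white beans"))
# → Target handling: reorder white beans
```

Under a scheme like this, the user would say the brand’s wake word directly instead of asking the platform assistant to forward the command every time.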
NextGen Supply Chain: I want to go back to being able to talk to devices from different manufacturers and to having devices talk to each other. What are the challenges there?
Subirana: There are many. But consider just this one. Right now, Alexa is a closed system. Amazon is still building out applications for it. I get an e-mail weekly on Alexa’s new conversational capabilities. Basically, it’s all pre-determined tasks. Conversational commerce 3.0 will not be a closed-garden solution.
That’s where artificial intelligence will take over. The AI on these devices will determine what happens next, not some pre-set list of tasks. Just think about all the ramifications of 3.0 in a warehouse or distribution center. The right equipment needs to do the right things to get the transactional efficiency from the technology.
NextGen Supply Chain: Just to wrap up, where do privacy issues fit in?
Subirana: That’s a big one. Voice can reveal so many things. To begin, voice is very emotional. It can also identify someone’s gender. Based on an accent, it can tell where a person is from. There are plenty of other personal traits that voice can identify, even now. Privacy makes standards even more important. And they will stretch all the way from protecting an individual’s personal information to managing legal contracting. We have a big job ahead of us, but it’s going to be fun and more human.
Gary Forger is the special projects editor for Supply Chain Management Review. He can be reached at email@example.com.