In the past 30 years, I have interacted with all sorts of attempts at machine conversation, starting with the classic adventure games that I would get bored talking to (by keyboard) after 10 minutes.
I've had some conversations like that with humans too.
The worst is software that requires 10 clicks in the same exact order every time you try to do something. Voice recognition and bots are better, though they get most frustrating when they can't learn.
The promise of all this is AI. Someday the machine will learn well enough to retain things.
Wouldn't it be cool if your software recognized that you tried to get to something with the same 10 clicks in a row and suggested that it help you skip all that in the future? Today, that requires a lot of programming by humans. Soon, it will be part of the software.
Siri rival can understand the messy nature of our conversations
By Niall Firth
We take digital personal assistants for granted these days. Whether it’s looking for the nearest Mexican restaurant, sending a message or just checking the weather, we’re getting pretty comfortable with Siri and Alexa.
But these systems are still limited: they only deal with one task at a time, and more complicated interactions can leave them confused.
Iris, a chatbot system developed by a team at Stanford University, is different. It can handle more complex forms of conversation – and could pave the way for personal assistants that understand how we really speak to one another.
When we talk, we use all sorts of linguistic tricks and techniques to make ourselves understood. One of the most common is the way we nest sub-conversations within an overarching discussion. You do this, for example, when you answer the question “when shall we meet at the pub?” by asking a further question about when that person finishes work.
Alexa or Siri struggle with such nested conversations unless they have been preprogrammed – or hard-coded – to react to specific examples.
Iris does it by turning language commands into blocks of text that can be flexibly combined with other ones. This design allows every user command (such as “make a reservation”) to be tagged with instructions that tell Iris how it can be stitched together with further commands.
This also narrows the range of other types of command that the tool can act on in the context of the conversational strand. Thus armed, Iris can thread a series of commands together to make sense of them.
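The idea of tagging each command with the types it produces and accepts can be sketched in a few lines. This is a toy illustration only, loosely inspired by the article's description; the command names, the type tags, and the `compose` helper are all hypothetical, not Iris's actual API.

```python
# Hypothetical sketch: commands as composable blocks with type tags.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Command:
    name: str
    produces: str                                  # type tag of this command's output
    run: Callable
    accepts: dict = field(default_factory=dict)    # argument name -> required type tag

# Two toy commands: one produces a "time", the other requires one.
def finish_work():
    return "17:30"

def make_reservation(time):
    return f"Table booked for {time}"

ask_finish = Command("when do you finish work", produces="time", run=finish_work)
reserve = Command("make a reservation", produces="confirmation",
                  run=make_reservation, accepts={"time": "time"})

def compose(outer: Command, inner: Command, arg: str):
    """Nest one command inside another, but only if the type tags line up."""
    if outer.accepts.get(arg) != inner.produces:
        raise TypeError(f"{inner.name!r} cannot fill {arg!r} of {outer.name!r}")
    return outer.run(**{arg: inner.run()})

print(compose(reserve, ask_finish, "time"))  # Table booked for 17:30
```

The type tags are what "narrow the range" of follow-up commands: only an inner command whose output tag matches the slot's required tag can be stitched in at that point.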
Reading the context
Furthermore, Iris understands another conversational quirk called anaphora: a phrase that depends on an earlier part of the conversation, such as saying “he” when you earlier mentioned your brother. Again, the top digital assistants have this ability, but only when hard-coded.
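A minimal way to handle the brother/"he" example is to track the most recently mentioned entity and substitute it for later pronouns. The sketch below is purely illustrative: the fixed entity set, pronoun table, and `resolve` function are assumptions for the demo, not how Iris or any commercial assistant actually does it.

```python
# Toy anaphora resolution: replace pronouns with the last entity mentioned.
PRONOUNS = {"he", "she", "it", "him", "her", "they", "them"}
ENTITIES = {"Alex"}  # entities already known from the conversation (illustrative)

def resolve(utterance, last_entity=None):
    """Return the utterance with pronouns replaced, plus the updated context."""
    out = []
    for word in utterance.split():
        core = word.strip(".,?!")
        if core.lower() in PRONOUNS and last_entity:
            out.append(word.replace(core, last_entity))
        else:
            if core in ENTITIES:
                last_entity = core   # remember the most recent named entity
            out.append(word)
    return " ".join(out), last_entity

first, who = resolve("My brother Alex called earlier.")
second, _ = resolve("Tell him I said hi.", last_entity=who)
print(second)  # Tell Alex I said hi.
```

Even this crude version shows why hard-coding is brittle: the resolver only works for entities and pronouns it was given in advance, whereas a general assistant has to infer them from open-ended conversation.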
Iris is still a bit limited, which means that for now it’s only being used as a bespoke data science tool. It lacks the natural language ability that Apple, Google and Amazon have baked into their assistants. But in the future, these could integrate Iris’s underlying architecture, providing “a scaffolding of context” for a future generation of chatbots, says Ethan Fast, part of the team behind Iris.