Just glancing through the teams, it looks like UCD was the only one with a member (Chun-Yen C) who has industry experience. Everyone else pretty much stayed inside academia. I wonder how much of a factor that was in their success.
I wonder if a team heavy on industry experience would have been selected to compete.
Also, I wonder what Amazon gains from this? My cynical hat wants to think Amazon would have made the competitors sign over IP rights etc, but I haven't looked into the conditions of the competition, so I'm speculating.
Does anyone know how this is likely to pan out for the students, and not only those who won? Each team was given US$250,000, so it’s sort of paid work?
I don’t particularly have a problem with a company fielding skills in that way, just pondering.
>Amazon would have made the competitors sign over IP rights etc,
I would like to know this too. I think that since it is an academic competition, by default everything is open source and the method must be fully outlined in the system description paper, so I don't think IP would be an issue.
> Some of the distinguishing features of the bot included incorporating language disfluencies — pauses such as “hm” or “ah.”
Note that Google recently faced backlash over this very feature demoed in their Virtual Assistant. I'm guessing that public sentiment won't last long; disfluency is a natural part of human language, and it feels funny when it's missing.
Each confrontation of the uncanny valley is an uncomfortable experience that gives us pause and causes us to consider the dangers of interacting with AI. It's a healthy learning experience that will linger with any part of the population that already distrusts AI. Since you probably want to sell to those people too, why not make allowances?
Disfluencies and breath sounds are peculiar when they're missing from human speech, so not injecting them into AI speech would be an easy design choice to mark a speaker as artificial, perfect for Virtual Assistants. For theatrical uses, in video games and movies, it makes sense to add as many human vocal quirks as possible, even occasional resonance obstructions to simulate a cold, or a whistle for gapped teeth.
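Tangent, but the injection itself is mechanically simple. Here's a toy Python sketch of what adding fillers and pauses to synthesized speech might look like, using standard SSML break tags; the filler-word strategy is my own guess for illustration, not anything from Google's or Amazon's actual stacks:

```python
import random

# Toy sketch: decorating plain text with SSML pauses and filler words
# before it goes to a TTS engine. <break> is standard SSML; the filler
# strategy below is hypothetical, purely for illustration.
FILLERS = ["hm", "ah", "well"]

def add_disfluencies(sentences, filler_prob=0.3):
    """Join sentences into one SSML document, occasionally prepending
    a filler word and a short pause to sound less robotic."""
    parts = ["<speak>"]
    for sentence in sentences:
        if random.random() < filler_prob:
            parts.append(f'{random.choice(FILLERS)}, <break time="300ms"/>')
        parts.append(sentence)
        parts.append('<break time="200ms"/>')  # natural inter-sentence pause
    parts.append("</speak>")
    return " ".join(parts)

print(add_disfluencies(["I saw that movie last week.", "The ending surprised me."]))
```

Setting filler_prob to 0 would be the "obviously artificial" design choice I described above.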
I think you're projecting, and seeing more than is really there.
For 99.9% of the people who have an issue with such things, it's not about "oh, it's hiding very well as a human and could take over, so I distrust AI more."
It's about the uncanny valley problem: our brains are pattern matchers, and they have difficulty classifying an AI that uses these new features as either the "human pattern" or the "AI pattern", so we don't like it / feel weirded out by it. Give it a few years and suddenly it will have become part of the AI pattern, and all will be well.
I'm not saying distrust toward AI is growing or not growing, but that it doesn't factor into the public reaction to such AI features at all.
I hope I'm not projecting a distrust of simulacra. I distrust other humans, not our tools. But I'm not oblivious to my bubble, or to the people outside it who have no idea how these tools work or how others might use them to their direct/indirect disadvantage. People create stories to explain their losses, and sometimes they blame tools.
I understand the uncanny valley as a product of a pattern matching exercise that humans developed well before we encountered AI. A part of our "friend or foe", "like me or unlike me" mechanism. If a user considers the tool a "potential foe" (for instance, a customer service resolution bot), every shallow attempt by the tool to approximate "friend" could further alienate the user. I don't think the "weird feeling" just goes away at that point.
I'm also pretty sure that we're going to be in the uncanny valley for long enough to make use of that "weird feeling" for effect outside of fiction.
I have a challenge among my friends to order a sub at Subway without saying "um," "uhh," etc. It's fun, and it always results in sounding like a robot or Shakespeare ordering a sandwich.
Weird that a university press release wouldn't make it easy to find actual details for the work in question, but a few links deep you can find a paper on the winner:
> Gunrock: Building A Human-Like Social Bot By Leveraging Large Scale Real User Data
Abstract:
> Gunrock is a social bot designed to engage users in open ___domain conversations. We improved our bot iteratively using large scale user interaction data to be more capable and human-like. Our system engaged in over 40,000 conversations during the semi-finals period of the 2018 Alexa Prize. We developed a context-aware hierarchical dialog manager to handle a wide variety of user behaviors, such as topic switching and question answering. In addition, we designed a robust three-step natural language understanding module, which includes techniques such as sentence segmentation and automatic speech recognition (ASR) error correction. Furthermore, we improve the human-likeness of the system by adding prosodic speech synthesis. As a result of our many contributions and large scale user interactions analysis, we achieved an average score of 3.62 on a 1–5 Likert scale on Oct 14th. Additionally, we achieved an average of 22.14 turns and a 5.22-minute conversation duration.
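For anyone trying to picture how the "three-step NLU module feeding a hierarchical dialog manager" might hang together, here's a toy Python sketch based only on the abstract above. Every rule, name, and correction table in it is made up for illustration; it is not Gunrock's actual code:

```python
# Hypothetical sketch of the pipeline the abstract describes:
# ASR error correction -> sentence segmentation -> intent tagging,
# feeding a top-level dialog manager that routes to sub-modules.

ASR_FIXES = {"pizza hat": "pizza hut"}  # toy correction table

def correct_asr(utterance: str) -> str:
    """Step 1: patch common ASR mistranscriptions."""
    for wrong, right in ASR_FIXES.items():
        utterance = utterance.replace(wrong, right)
    return utterance

def segment(utterance: str) -> list[str]:
    """Step 2: split run-on ASR output into sentence-like units.
    A real system would use a trained segmenter; a comma split stands in here."""
    return [s.strip() for s in utterance.split(",") if s.strip()]

def classify(seg: str) -> str:
    """Step 3: tag each segment so the dialog manager can route it."""
    if seg.endswith("?") or seg.split()[0] in {"what", "who", "how", "why"}:
        return "question"
    return "statement"

def dialog_manager(utterance: str) -> str:
    """Top level: handles topic switching and question answering
    by routing each turn to the right sub-module."""
    segments = segment(correct_asr(utterance))
    if any(classify(s) == "question" for s in segments):
        return "route to question-answering module"
    return "route to a topic module (e.g. movies, sports)"

print(dialog_manager("i love pizza hat, what do you like"))
```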
Does anyone have a recording of the 9 minute conversation? Would love to hear how this sounded in person. Tried to find it on Youtube and Google, no luck.
So, is 10 minutes enough to pass a Turing test, assuming speech counts equally to typed text? Or is that question like asking if a submarine can dog-paddle?
The Grand Challenge, of maintaining a coherent and engaging conversation for 20 minutes, still remains. As does a $1 million unrestricted gift which will be awarded to the winning team’s university, if their socialbot meets this challenge...
> In the US, ML/AI grad school has significant Chinese and Indian student population. It's not surprising to see this team composition.
This is the biggest wealth and value the US has over the rest of the world. US universities have historically attracted the best kids from around the world, who learned the American way of life, innovated, and built the country. The greatest proof of meritocracy is right here.
> In the US, ML/AI grad school has significant Chinese and Indian student population
It is somewhat surprising to see that, though. I had suspected something like this just from the sample of author names on papers I occasionally skim. I'm curious: what's the story behind this? Why ML/AI in particular, instead of other branches of CS?
Rather, I think you were reading too much into the original statement. He talked about ML/AI because the thread is about ML/AI; he wasn't saying only ML/AI experiences this. His point was about ethnic population proportions and how it isn't strange that we see what we see. I wonder if the two of you actually agree with each other in reality.
Why is this a relevant question? They got their education from the university so the university deserves partial credit no matter the nationality of each student involved.
You could have asked so many good questions that would have made you smarter: What's their technical background? Can you explain the winning proposal in layman's terms? Anything. But you had to go with the "R them really mericans?" question. I think people whose first instinct is to bring up those kinds of questions are (ironically?) what makes America much less than it could be.
You insinuated that the team must be from China because a majority of the team members look Asian. I don't understand why you thought that was important enough to bring to attention.
I do not care how many Chinese members there are; there would be at least 1/5 if everything were equal. But the surprise is: where are the Westerners? (I know this is an American web site, so should I say Americans, or just white people?)
I know Chinese researchers are well represented in the computing field. But this is odd. When I learned AI from afar, most were Westerners.