Hacker News new | past | comments | ask | show | jobs | submit login
Student Team Wins Amazon Alexa Artificial Intelligence Challenge (ucdavis.edu)
103 points by 11thEarlOfMar on Dec 1, 2018 | hide | past | favorite | 40 comments



Just glancing through the teams, it looks like UCD was the only team with a member (Chun-Yen C) who has industry experience. Everyone else pretty much stayed inside academia. I wonder how much of a factor that was in their success.


The teams were selected to compete.

I wonder if a team heavy on industry experience would have been selected to compete.

Also, I wonder what Amazon gains from this? My cynical hat wants to think Amazon would have made the competitors sign over IP rights etc., but I haven't looked into the conditions of the competition, so I'm speculating.

Does anyone know how this is likely to pan out for the students, and not only those who won? Each team was given US$250,000, so it’s sort of paid work?

I don’t particularly have a problem with a company fielding skills in that way, just pondering.

1. https://developer.amazon.com/blogs/alexa/post/9f406f35-c997-...


> Amazon would have made the competitors sign over IP rights etc.

I would like to know this too. I think that since it is an academic competition, by default everything is open source and the method must be fully outlined in the system description paper, so I don't think IP would be an issue.


Just because something is open source doesn't mean the IP is granted to everyone.


Glad to see my alma mater doing well.

> Some of the distinguishing features of the bot included incorporating language disfluencies — pauses such as “hm” or “ah.”

Note that Google recently faced backlash over this very feature when it was demoed in their Assistant. I'm guessing that public sentiment won't last long; disfluencies are a natural part of human language, and speech feels odd when they're missing.


Each confrontation of the uncanny valley is an uncomfortable experience that gives us pause and causes us to consider the dangers of interacting with AI. It's a healthy learning experience that will linger with any part of the population that already distrusts AI. Since you probably want to sell to those people too, why not make allowances?

Disfluencies and breath sounds are peculiar when they're missing from human speech, so not injecting them into AI speech would be an easy design choice to mark a speaker as artificial, perfect for virtual assistants. For theatrical purposes, in video games and movies, it makes sense to add as many human vocal quirks as possible, even the occasional resonance obstruction to simulate a cold, or a whistle for gap teeth.
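For what it's worth, the kind of disfluency injection discussed here can be as simple as a post-processing pass over the bot's reply text before it reaches text-to-speech. A toy sketch, where the filler inventory and injection rate are made-up values, not anything from Gunrock or Google:

```python
import random

# Toy disfluency injector: prepend a filler ("hm", "ah") to some clauses
# before text-to-speech. Fillers and rate are illustrative, not a real
# system's values.
FILLERS = ["hm", "ah", "well"]

def add_disfluencies(reply, rate=0.3, seed=None):
    rng = random.Random(seed)  # seeded for reproducibility in testing
    out = []
    for clause in reply.split(", "):
        if rng.random() < rate:
            out.append(rng.choice(FILLERS) + ", " + clause)
        else:
            out.append(clause)
    return ", ".join(out)

print(add_disfluencies("I saw that movie, it was great", rate=1.0, seed=0))
```

A real pipeline would of course condition on prosody and dialog context rather than sprinkling fillers uniformly, but the point stands that it's a shallow surface feature, which is partly why it triggers the uncanny-valley reaction.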


I think you're projecting and seeing more than what's really there.

For 99.9% of the people who have an issue with such things, it's not about "oh, it's hiding very well as a human and could take over, so I distrust AI more because of it".

It's about the uncanny valley problem: our brain is a pattern matcher, and it has difficulty classifying an AI that uses these new features as either the "human pattern" or the "AI pattern", so we don't like it / feel weirded out by it. Give it a few years and suddenly it has become part of the AI pattern and all is well.

I'm not saying distrust toward AI is growing or not growing, but that it doesn't play into the public reaction to such AI features at all.


I hope I'm not projecting a distrust of simulacrum. I distrust other humans, not our tools. But I'm not oblivious to my bubble and people outside it that have no idea how these tools work or how others might use them to their direct/indirect disadvantage. People create stories to explain their losses and sometimes blame tools.

I understand the uncanny valley as a product of a pattern-matching exercise that humans developed well before we encountered AI. A part of our "friend or foe", "like me or unlike me" mechanism. If a user considers the tool a "potential foe" (for instance, a customer service resolution bot), every shallow attempt by the tool to approximate "friend" could further alienate the user. I don't think the "weird feeling" just goes away at that point.

I'm also pretty sure that we're going to be in the uncanny valley for long enough to make use of that "weird feeling" for effect outside of fiction.


I have a challenge among my friends to order a sub at Subway without saying "um", "uhh", etc. It's fun and it always results in sounding like a robot or Shakespeare ordering a sandwich.


The worry was that people talking to Google's assistant over the phone might think they were talking to a person. Not so much a problem with an Alexa.


Agreed the more we can trick users into thinking they are getting actual help the better


Weird that a university press release wouldn't make it easy to find actual details for the work in question, but a few links deep you can find a paper on the winner:

https://s3.amazonaws.com/dex-microsites-prod/alexaprize/2018...

> Gunrock: Building A Human-Like Social Bot By Leveraging Large Scale Real User Data

Abstract:

> Gunrock is a social bot designed to engage users in open ___domain conversations. We improved our bot iteratively using large scale user interaction data to be more capable and human-like. Our system engaged in over 40,000 conversations during the semi-finals period of the 2018 Alexa Prize. We developed a context-aware hierarchical dialog manager to handle a wide variety of user behaviors, such as topic switching and question answering. In addition, we designed a robust three-step natural language understanding module, which includes techniques such as sentence segmentation and automatic speech recognition (ASR) error correction. Furthermore, we improve the human-likeness of the system by adding prosodic speech synthesis. As a result of our many contributions and large scale user interactions analysis, we achieved an average score of 3.62 on a 1–5 Likert scale on Oct 14th. Additionally, we achieved an average of 22.14 turns and a 5.22 minute conversation duration.
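To make the "three-step natural language understanding module" from the abstract concrete, here's a minimal Python sketch of that shape of pipeline: ASR error correction, then sentence segmentation, then per-segment understanding. Everything here (the correction table, the marker-based segmenter, the dialog-act heuristic) is a hypothetical stand-in, not the team's actual method:

```python
import re

# Hypothetical table of frequent ASR mis-transcriptions (homophone-style
# confusions); a real system would learn these from interaction logs.
ASR_CORRECTIONS = {
    "wreck a nice beach": "recognize speech",
}

def correct_asr(text):
    """Step 1: patch known ASR mis-transcriptions."""
    for wrong, right in ASR_CORRECTIONS.items():
        text = text.replace(wrong, right)
    return text

def segment(text):
    """Step 2: split one unpunctuated ASR utterance into sentence-like
    units. Real systems train a segmenter; this naively splits on a few
    discourse markers."""
    parts = re.split(r"\s+(?:and then|but|so)\s+", text)
    return [p.strip() for p in parts if p.strip()]

def understand(utterance):
    """Step 3: tag each segment with a coarse dialog act."""
    acts = []
    for seg in segment(correct_asr(utterance)):
        first = seg.split()[0]
        act = "question" if first in {"what", "who", "why", "how"} else "statement"
        acts.append({"text": seg, "act": act})
    return acts

print(understand("i like movies but what do you think"))
```

The segmentation step matters because ASR hands the dialog manager one long punctuation-free string; splitting it lets the bot answer the question part separately from the statement part, which is the kind of topic-switch handling the abstract credits to the hierarchical dialog manager.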


Does anyone have a recording of the 9-minute conversation? Would love to hear how this sounded in person. Tried to find it on YouTube and Google, no luck.


Czech Technical University number 2!

The team's site: http://alquistai.com


Prof. Zhou Yu explains her work on Situated Intelligent Interactive Systems here. You might find this useful.

https://www.youtube.com/watch?v=39T9ukI9HnQ


Is the used dataset with conversations available somewhere? What about the model architecture?


You can test a few of these by saying "Alexa, talk to me"


So, is 10 minutes enough to pass a Turing test, assuming speech counts equally to typed text? Or is that question like asking if a submarine can dog-paddle?


The Grand Challenge, of maintaining a coherent and engaging conversation for 20 minutes, still remains. As does a $1 million unrestricted gift which will be awarded to the winning team’s university, if their socialbot meets this challenge...


It would be nice to see more details on the method they used.


Have they made the architecture of this bot public?


[flagged]


Why does this matter in the slightest? The challenge featured teams[0] from Sweden, Scotland, the Czech Republic, and the United States.

[0] https://developer.amazon.com/blogs/alexa/post/9f406f35-c997-...


In the US, ML/AI grad school has significant Chinese and Indian student population. It's not surprising to see this team composition.


> In the US, ML/AI grad school has significant Chinese and Indian student population. It's not surprising to see this team composition.

This is the biggest source of wealth and value the US has over the rest of the world. US universities have historically attracted the best students from around the world, who learned the American way of life, innovated, and built the country. The greatest proof of meritocracy is right here.

I really hope this continues.


> In the US, ML/AI grad school has significant Chinese and Indian student population

It is somewhat surprising to see that, though. I suspected something like this just from the sample of author names on papers I occasionally skim. I'm curious what the story behind this is. Why ML/AI in particular, instead of other branches of CS?


Who said other branches of CS don't experience this?


Or any stem subject.


Or any subject.

Or the whole population.

You see how ridiculous these questions are?


Rather, I think you were reading too much into the original statement. He talked about ML/AI because the thread is about ML/AI. He wasn't saying only ML/AI experiences this. His point was about ethnic population proportions and how it isn't strange that we see what we see. I wonder if the two of you actually agree with each other in reality.


These countries have big populations, and with AI, universal basic income could be made possible back home.

This is why the Chinese are acquiring AI expertise.


Why is this a relevant question? They got their education from the university so the university deserves partial credit no matter the nationality of each student involved.


The challenge only featured teams from universities, so their nationality has nothing to do with anything.


You could have asked so many good questions that would have made you smarter: What's their technical background? Can you explain the winning proposal in layman's terms? Anything. But you had to go with the "R them really mericans?" question. I think people whose first instinct is to bring up those kinds of questions are (ironically?) what makes America much less than it could be.


[flagged]


EDIT: Flagged. Thanks to masonic for teaching me how to use HN.

EDIT: I'm not sure how making judgements based on the ethnicity of the team members is "unintentional". Give me a break.


  It's a shame I can't flag this comment
Why not? You have enough karma. Just click on the time-ago link to drill down to the full set of links.


Thanks for the tip. I've only flagged stories in the past, not comments.


What judgement did I make? I am Asian myself.


You insinuated that the team must be from China because a majority of the team members look Asian. I don't understand why you thought that was important enough to bring to attention.


> You insinuated that the team must be from China because a majority of the team members look Asian.

I didn't say they must be from China. I said I thought it's a Chinese university that won, looking at the thumbnail.

> I don't understand why you thought that was important enough to bring to attention.

Agreed that it wasn't a comment I should've made, nor is it relevant. I was about to delete it but couldn't before you responded.


I do not care how many Chinese are on the team. There should be at least 1/5 if everything must be equal. But the surprise is: where are the Westerners? (I know this is an American website; should I say Americans, or just white people?)

I know Chinese researchers are well represented in the computing field. But this is odd. When I learned about AI from afar, most were Westerners.



