Possibly stupid question, in general: why don't these papers have any dates? how do I know when it was published and whether it is up-to-date / still relevant?
Oh so this is a year old? That makes sense, some of the examples (like the SQL generation from human language) felt a little unexciting by today's standards but would have been more interesting last July.
This felt very shallow, which may be because of the breadth of topics the authors tried to address. Data management has been building for years on the pillars of data quality, with a passion that is at times counterproductive (looking at you, single ideal schema of truth), and I feel this failed to put enough emphasis on the gradual transition to new mechanics of trust as a counterweight to probabilistic answers. We are falling into the trap of imagining robotic brooms instead of vacuum cleaners. I don't see LLMs perfecting approaches with a singular focus on precision, but I do see them introducing new ones with a focus on convenience (is that not what we are witnessing with the evolution of search?)
> trust as a counterweight to probabilistic answers
The basic problem with that is that even with 5 9s of good results, that remainder still gets you very public cases of "put glue on your pizza". And we're not even at 5 9s yet.
People assume this is a completely absurd answer, but there are times when it is correct to put glue on a pizza. The LLM picked this up because advertisers put glue on pizza to get that cheese-pull shot. The next question is whether LLMs will ever develop enough understanding to distinguish the two cases. Personally, I think LLMs will figure this out long before a bunch of ontologists stop bickering about it.
I've been seeing this a lot lately. Everywhere from screen printed signs to news tickers. I'm not sure if this is because it's new or if I'm just now seeing it.
Yeah, call me snobby or OCD but I lost confidence in the authors and reviewers when I saw that slipped through.
Then I started wondering if this is going to become the new anti-AI marker. AI-written papers use "delves", "underscores" and "showcasing" too much. Avoid those words, throw in some errors and readers will think your paper was written by humans.
Their probably just unaware of the affect they're words wood have on us. Its no big deal and we should just except it. I wouldn't altar a single word. They've been served there just deserts, and I would of maid the same mistake. Let sleeping dogs lay. They probably never past English class anyway, and your far two picky about these things.
I work at a company that manages docs, chat, and tasks (among many other things), and we of course use our product internally. AI search (ChatGPT-like: you ask a question, it answers) was added a while ago.
My experience has been that it really is a huge improvement: you don't need to guess which words were used to describe the issue; you just describe it, tell the system what you want, and the results are there. Chat's been busy and you remember an issue was raised in a thread a week ago? With traditional search, good luck finding it. Now I just write "Tom raised an issue this week in this chat about something not working with reminders??" and the results are there.
Someone at the company uses it to manage Dungeons and Dragons play nights: he documents his world, his sessions, and so on. He has written probably a small book's worth of content, and can then ask the AI what happened 4 months ago with a character instead of trying to search.
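The pattern behind this kind of "describe it in your own words" search is usually embedding-based retrieval: every message is mapped to a vector, and the query is matched by similarity rather than exact keywords. Here is a minimal, self-contained sketch of that retrieval loop. The `embed()` function is a stand-in assumption: a real system would call an embedding model (which is what lets "not working" match "firing incorrectly"); a toy bag-of-words vector is used here only so the example runs on its own.

```python
# Sketch of the embedding-retrieval pattern behind natural-language search.
# embed() is a stand-in: real systems call an embedding model so that
# synonyms and paraphrases match; this toy version only counts words.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

messages = [
    "Tom: reminders are not firing on mobile",
    "Ana: lunch order thread",
    "Tom: shipping the new docs page today",
]

def search(query, docs):
    # Rank every stored message against the query vector; best match wins.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

print(search("issue with reminders not working", messages))
```

With a real embedding model swapped in for `embed()`, the same loop (plus a vector index for scale) is essentially what makes "Tom raised an issue about reminders" findable without knowing the exact wording.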
I wish something like that came to Discord. Its search is terrible, and I can never find anything if I don't know the exact words; the servers aren't indexed by external search engines either.