The app this is advertising helps non-native speakers with their accent, I assume to sound more American. This is a great goal, and I'm sure there are a lot of people who would be willing to pay the $200-$300 yearly subscription cost. Apparently the AI part is not even the main function of the app; that's what the extra $100 is paying for[1].
I would be interested in an AI-only product that would help me learn to passably imitate various English accents, like Australian, Irish, and so forth, for fun. I know that ChatGPT Voice can do accents pretty well, and I've been wondering if it would also be able to help me with mine, but I haven't tried it seriously.
I could absolutely see people being willing to pay for this. I'm from the Midwest in the United States, and I happened to be at an airport in a foreign country when someone overheard me talking, came up, and asked where I had learned to speak English because it was so smooth. They were looking for lessons to make their own English better, or at least smoother. I thought their English was fine, and they were a bit disappointed when I mentioned I was from the United States.
It's kind of annoying when services like this offer a free trial that requires a credit card number just to try it, capitalizing on people forgetting to unsubscribe afterward.
Also, I'm very suspicious when a credit card form is on $site.com rather than $financial-institution.com
In your assess_output_quality function, you ask the LLM to give a score first, then an explanation. I haven't been following the latest research on LLMs, but I thought you usually want the explanation first, to get the model to "think out loud" before committing to the final answer. Otherwise, it might commit semi-randomly to some score, then write whatever explanation it can come up with to justify that score.
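To make the suggestion concrete, here's a minimal sketch of an explanation-first judging prompt. The function and field names are illustrative, not from the project being discussed; the point is only the ordering of the two requests:

```python
# Sketch: build an LLM judging prompt that asks for the explanation
# before the score. "build_quality_prompt" and the wording below are
# hypothetical; adapt them to whatever prompt your pipeline uses.

def build_quality_prompt(output_text: str) -> str:
    return (
        "Evaluate the following output for quality.\n\n"
        f"Output:\n{output_text}\n\n"
        # Ask for the reasoning first...
        "First, write a short explanation of the output's strengths "
        "and weaknesses.\n"
        # ...and only then the final score, so the model commits to a
        # number after reasoning rather than before.
        "Then, on the last line, write 'Score: N' where N is 1-10."
    )

prompt = build_quality_prompt("Example output to judge.")
print(prompt)
```

The request for an explanation deliberately precedes the request for a score, so the score token is generated after the reasoning text rather than before it.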
This is kind of like how http sites look totally fine in most browsers, but an https site with a self-signed certificate will cause a "DANGER! ENTER AT YOUR OWN PERIL" screen to be shown.
These were literally the first 3 questions on my review queue:
* https://stackoverflow.com/q/59344615 - A question by an absolute beginner, trying to do something that they have no clue how to start with. Already closed.
* https://stackoverflow.com/q/59341242 - A question about parsing a JSON response with jQuery. Two votes to close. The asker clearly does not know the word "parse".
* https://stackoverflow.com/q/57969318 - Someone trying to figure out an error message they're getting with Kubernetes. This is exactly the kind of thing you get at the top of your Google results when you hit the same error, and with one more vote to close, it will be forever locked with no useful information. Some asshole even downvoted the one answer that is there without adding any comments.
None of these questions are good, but they could be made better, and they all represent people with real problems that deserve help. Getting mad at people for "being lazy" (because if I, the expert, could easily find the answer to this, then why didn't you?!), is not productive.
Here's what I don't understand about all these SO deletionists: how is closing the question helpful in any way? If you don't find the question answerable, then don't answer it! But why block other people from trying to help? It's not like you're somehow "teaching" these people how to ask by blocking them. The user from question 59344615 (which got closed) did not post another question with better details. They just left the site, one more developer who doesn't have anywhere to ask newbie questions. It sucks.
First one was aptly closed IMHO, much too vague. SO is not the right tool for absolute beginners to seek guidance when they have no idea what they're doing: the format asks for a reasonably specific answerable question. Would be nice if upon closing the asker was given pointers to beginner-friendly resources, though.
The other two are better, and (aptly) not closed. Of course you'll get inappropriate votes to close, but I hope they are correctly offset by other votes the (hopefully vast) majority of the time.
Now about this:
> how is closing the question helpful in any way?
I suppose it's to stay focused. When googling I quite often get useful SO results (& upvote those), and I'm happy not having to sift through tons of useless questions.
The last time I evaluated GitLab, you needed the paid version to use features that I considered pretty basic, like merge request approvals and multiple code reviewers[1]. I'm wondering now if the GNOME people consider these unnecessary, or if I misunderstood what was possible with self hosting.
I wonder about the exact same thing. Some time ago I compared the features of GitLab with similar solutions and found that the OSS version is pretty limited. If you want all the nice features for big projects, it is more expensive than the Atlassian stack.
IANAL but I'd guess it depends on the wording. If it's an "AND" (as your question suggests), the most restrictive would apply while if it's an "OR" (as it's common when dual-licensing) the less restrictive would apply.
> If it's an "AND" (as your question suggests), the most restrictive would apply
Please note that this is not possible with some License X and the GNU [LA]GPL, because it would impose an additional restriction, which the terms of the GNU [LA]GPL do not allow.
Quadruple-click smart selection should be doable with urxvt's perl extension feature.
Selecting within a tmux pane: you got me on that one. I'm not aware of a terminal that does that, but again, it should be possible to do with urxvt's perl extension feature.
Regarding the "safer command line paste", Linux can do the same with bracketed paste mode.[1]
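For reference, bracketed paste works by having the application opt in with a DECSET escape, after which the terminal wraps any pasted text in begin/end markers so a pasted newline arrives as data instead of immediately running a command. A minimal sketch (the `extract_paste` helper is illustrative, not from any particular shell):

```python
# Bracketed paste mode, sketched. The escape sequences themselves are
# standard (DECSET 2004); the handling function is just for illustration.

ENABLE  = "\x1b[?2004h"   # app -> terminal: turn bracketed paste on
DISABLE = "\x1b[?2004l"   # app -> terminal: turn it off again
BEGIN   = "\x1b[200~"     # terminal -> app: paste starts here
END     = "\x1b[201~"     # terminal -> app: paste ends here

def extract_paste(data: str) -> str:
    """Return the literal pasted text between the paste markers."""
    start = data.index(BEGIN) + len(BEGIN)
    end = data.index(END)
    return data[start:end]

# What the app would receive when the user pastes "rm -rf /tmp\n":
wire = BEGIN + "rm -rf /tmp\n" + END
print(repr(extract_paste(wire)))  # 'rm -rf /tmp\n'
```

Because the newline sits inside the markers, a shell that supports this can show the pasted command for review rather than executing it on paste.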
For splitting panes, I just use tmux, vim, or emacs.
While people still drive themselves, there's a huge incentive to come to work early or late, to avoid traffic. I don't see that incentive increasing with autonomous cars. If anything, I predict that more people will join rush hour traffic if they don't need to do any actual driving. However, this might be mitigated by the better driving of autonomous cars.
This is similar to the "Contamination" effect, where completely false statements can affect a person's judgement even when they are told the statements are false.
I don't know, the Contamination effect seems more like buffer overflow in our neural network and the Repetition effect more like a poor implementation of caching.
Thanks. I did look at that page recently, and felt NumPy support was still experimental. But I'll give it a fair try.

What would motivate me would be PyPy supporting the packages I need for my day job, including pandas, SciPy, and scikit-learn. Do you know if there are plans to get these working on top of PyPy?
Python 3 has been out for 7 years, and I refuse to use anything that doesn't work in Python 3. It's just ridiculous to keep building stuff for Python 2; it's hindering the language and keeping it stuck in the past.
[1] https://www.boldvoice.com/frequently-asked-questions