itsadok's comments | Hacker News

The app this is advertising helps non-native speakers with their accent, I assume to sound more American. This is a great goal, and I'm sure there are a lot of people who would be willing to pay the $200-$300 yearly subscription cost. Apparently the AI part is not even the main function of the app; that's what the extra $100 is paying for[1].

I would be interested in an AI-only product that would help me learn to passably imitate various English accents, like Australian, Irish and so forth, for fun. I know that ChatGPT Voice can do accents pretty well, and I've been wondering whether it could also help me with mine, but I haven't tried it seriously.

[1] https://www.boldvoice.com/frequently-asked-questions


I could absolutely see people being willing to pay for this. I am from the Midwest in the United States, and I happened to be at an airport in a foreign country. Someone heard me talking, came up, and asked where I had learned to speak English, because it was so smooth. They were looking to get lessons to make their English better, or at least smoother. I thought their English was fine, and they were a bit disappointed when I mentioned I was from the United States.


> The app this is advertising helps non-native speakers with their accent, I assume to sound more American.

Do people want to learn to speak English like a twangy guitar on purpose?


It's kind of annoying when services like this offer a free trial that requires a credit card number just to try it, capitalizing on people forgetting to unsubscribe afterwards.

Also, I'm very suspicious when a credit card form is on $site.com rather than $financial-institution.com


I would pay for an equivalent app that helped my German pronunciation.


I would be interested in an equivalent app for Japanese.


In your assess_output_quality function, you ask the LLM to give a score first, then an explanation. I haven't been following the latest research on LLMs, but I thought you usually want the explanation first, to get the model to "think out loud" before committing to the final answer. Otherwise, it might commit semi-randomly to some score and then write whatever explanation it can come up with to justify that score.
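The reordering could be sketched like this (a hypothetical prompt builder, not the article's actual code; the function name and wording are my own):

```python
# Sketch: ask for the reasoning first and the score last, so the score is
# conditioned on the written explanation rather than the other way around.
def build_assessment_prompt(output_text: str) -> str:
    return (
        "Assess the quality of the following output.\n\n"
        f"Output:\n{output_text}\n\n"
        "First, explain step by step what is good or bad about it.\n"
        "Then, on the last line, write 'Score: N' where N is 1-10."
    )

# The caller would then parse the score off the final line of the reply.
prompt = build_assessment_prompt("example output")
```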


This is kind of like how http sites look totally fine in most browsers, but an https site with a self-signed certificate will cause a "DANGER! ENTER AT YOUR OWN PERIL" screen to be shown.


These were literally the first 3 questions on my review queue:

* https://stackoverflow.com/q/59344615 - A question by an absolute beginner, trying to do something that they have no clue how to start with. Already closed.

* https://stackoverflow.com/q/59341242 - A question about parsing a JSON response with jQuery. Two votes to close. The asker clearly does not know the word "parse".

* https://stackoverflow.com/q/57969318 - Someone trying to figure out an error message they're getting with Kubernetes. This is exactly the kind of thing you get at the top of your Google results when you hit the same error, and with one more vote to close, it will be forever locked with no useful information. Some asshole even downvoted the one answer that is there, without adding any comments.

None of these questions are good, but they could be made better, and they all represent people with real problems who deserve help. Getting mad at people for "being lazy" (because if I, the expert, could easily find the answer to this, then why didn't you?!) is not productive.

Here's what I don't understand about all these SO deletionists: how is closing the question helpful in any way? If you don't find the question answerable, then don't answer it! But why block other people from trying to help? It's not like you're somehow "teaching" these people how to ask by blocking them. The user from question 59344615 (which got closed) did not post another question with better details. They just left the site, one more developer that doesn't have anywhere they can ask newbie questions. It sucks.


First one was aptly closed IMHO, much too vague. SO is not the right tool for absolute beginners to seek guidance when they have no idea what they're doing: the format asks for a reasonably specific answerable question. Would be nice if upon closing the asker was given pointers to beginner-friendly resources, though.

The other two are better, and (aptly) not closed. Of course you'll get inappropriate votes to close, but I hope they are correctly offset by other votes the (hopefully vast) majority of the time.

Now about this:

> how is closing the question helpful in any way?

I suppose it's to stay focused. When googling I quite often get useful SO results (& upvote those), and I'm happy not having to sift through tons of useless questions.


All 3 are closed now.


One of the irritating things about SO is people who don't read the question before answering it[0]. As you apparently haven't read mine.

Are these your questions, as I asked? Of course you can find such problems, that's not what I was asking. But let's go on.

As to your questions, the first has been closed, and quite correctly as it's too vague.

The 2nd is not closed, and has answers. Problem?

The 3rd is not closed, and has an answer (albeit downvoted unhelpfully as you note). But the answer's there.

Not your questions anyway; one closed correctly, and two still open and with answers.

I don't accept this as strong evidence of wrongdoing.

[0] edit: ahem, not that I'm in a position to accuse others of this at times.


The last time I evaluated GitLab, you needed the paid version to use features that I considered pretty basic, like merge request approvals and multiple code reviewers[1]. I'm wondering now if the GNOME people consider these unnecessary, or if I misunderstood what was possible with self hosting.

[1] https://about.gitlab.com/pricing/


I wonder about the exact same thing. Some time ago I compared the features of GitLab with similar solutions and found that the OSS version is pretty limited. If you want all the nice features for big projects, it is more expensive than the Atlassian stack.


Are there any downsides to multiple-licensing? What happens if I say "This software is released under every license listed in https://opensource.org/licenses/alphabetical"?

Just curious.


IANAL but I'd guess it depends on the wording. If it's an "AND" (as your question suggests), the most restrictive would apply while if it's an "OR" (as it's common when dual-licensing) the less restrictive would apply.


> If it's an "AND" (as your question suggests), the most restrictive would apply

Please note that it is not possible to do this with some license X and the GNU [LA]GPL, because that would be an additional restriction, which the terms of the GNU [LA]GPL do not allow.


Not OP, but some cool features I haven't seen in Linux:

- Quadruple-click smart selection

- "Selection respects soft boundaries" for selecting within a tmux pane

- "Run coprocess" on a keyboard shortcut, which allows doing stuff like http://brettterpstra.com/2014/11/14/safer-command-line-paste...

I just checked, and gnome-terminal doesn't even have split-pane. What do you guys use, anyway?


Quadruple-click smart selection should be doable with urxvt's perl extension feature.

Selecting within a tmux pane: you got me on that one. I'm not aware of a terminal that does that, but again, it should be possible to do with urxvt's perl extension feature.

Regarding the "safer command line paste", Linux can do the same with bracketed paste mode.[1]

For splitting panes, I just use tmux, vim, or emacs.

[1] - https://cirw.in/blog/bracketed-paste
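For the curious, bracketed paste boils down to a pair of terminal escape sequences (a rough sketch based on xterm's control sequences; the .inputrc setting applies to readline 7.0+):

```shell
# Once an application enables bracketed paste, the terminal wraps any pasted
# text in marker sequences, so the shell/editor can insert it literally
# instead of executing each embedded newline.

printf '\033[?2004h'   # application asks the terminal to enable bracketed paste
# A paste now arrives from the terminal as:
#   ESC [ 2 0 0 ~  <pasted text>  ESC [ 2 0 1 ~
printf '\033[?2004l'   # and disables it again on exit

# Bash (readline 7.0+) can turn this on via ~/.inputrc:
#   set enable-bracketed-paste on
```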


While people still drive themselves, there's a huge incentive to come to work early or late, to avoid traffic. I don't see that incentive increasing with autonomous cars. If anything, I predict that more people will join rush hour traffic if they don't need to do any actual driving. However, this might be mitigated by the better driving of autonomous cars.


This is similar to the "Contamination" effect, whereby completely false statements can affect a person's judgement even if they are told the statements are false.

http://lesswrong.com/lw/k3/priming_and_contamination/

http://lesswrong.com/lw/k4/do_we_believe_everything_were_tol...


I don't know; the Contamination effect seems more like a buffer overflow in our neural network, and the Repetition effect more like a poor implementation of caching.


I'm pretty sure everything you used is already supported: http://buildbot.pypy.org/numpy-status/latest.html


Thanks. I did look at that page recently, and felt Numpy support was still experimental. But I'll give it a fair try.

What would really motivate me is PyPy support for the packages I need for my day job, including Pandas, SciPy, and scikit-learn. Do you know if there are plans to get these working on top of PyPy?


http://lostinjit.blogspot.com/2015/11/python-c-api-pypy-and-...

tl;dr: yes, there are plans, but funding is needed.


Does PyPy work with Python 3?

Python 3 has been out for 7 years, and I refuse to use anything that doesn't work in Python 3. It's just ridiculous to keep building stuff for Python 2; it's hindering the language and keeping it in the past.



Only Python 3.2 (so no "yield from" or async stuff).
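To illustrate the gap (a hypothetical example, not from the thread): "yield from" arrived in Python 3.3 via PEP 380, so on a 3.2-level PyPy you'd have to write the delegation loop by hand.

```python
# Works on Python 3.2: manual delegation to each sub-iterable.
def chain_manual(*iterables):
    for it in iterables:
        for item in it:
            yield item

# Needs Python 3.3+: "yield from" delegates to the sub-generator directly
# (and, unlike the loop above, also forwards send()/throw()).
def chain_delegating(*iterables):
    for it in iterables:
        yield from it

list(chain_manual([1, 2], [3]))  # → [1, 2, 3]
```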


Note also jitpy for those times when they're not:

http://jitpy.readthedocs.org/en/latest/

