Hacker News | throwawayffffas's comments

Since you are in a mature YC company, I assume you have an immigration lawyer. If not, get one. If it's not a lawyer you are looking for, ask your founders to talk to YC; I am sure the YC people can get time with the administration.

I agree.

@OP: Take a look at https://news.ycombinator.com/submitted?id=proberts and you may want to contact dang at [email protected]. But in this case it may be better to ask [email protected] (or whatever the secret email for founders is).


What about AMD cards?

CUDA.

The website has an ".ai" ___domain. It's aimed at people wanting to run inference, and maybe mine cryptocurrency, and for some reason only NVIDIA cards are used for that.

You can run inference on AMD cards; ROCm[1] is a thing. I am running inference on AMD cards locally. Plus, the highest performing cards for computational workloads are AMD's[2], though of course you can't buy those on Amazon.

1. https://rocm.docs.amd.com/en/latest/index.html
2. https://www.amd.com/en/products/accelerators/instinct/mi300....
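
For anyone who wants to try: a minimal sketch, assuming a ROCm build of PyTorch is installed (ROCm builds expose AMD GPUs through the regular torch.cuda API):

    import torch

    # On ROCm builds, torch.version.hip is a version string and AMD GPUs
    # show up through the usual torch.cuda API.
    print(torch.version.hip)
    print(torch.cuda.is_available())

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # the matrix multiply runs on the AMD GPU when ROCm is available
    print(y.shape, device)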


Some reason: CUDA

The content quality is not human level yet, but it's still a work in progress. What you are seeing is the result of a couple of nights spent playing with LLM prompts.


Sadly I'd say a lot is human level, but not in a good way.


Care to elaborate? A lot? In what way? I'd like to know.


If you leave out every adjective, they read better.

"The walls of Earth’s Spaceport Alpha hummed with a low thrum – the sound of power, of humanity’s ambition made tangible. Rain lashed against the windows, blurring the cityscape into a wash of grey and amber. Inside, the air was thick with the scent of disinfectant."


Hm fair, it might be overdoing it with the adjectives.


You can never have too few!


So we are well on our way to turning Python into PHP.

Edit: Sorry I was snarky, it's late here.

I already didn't like f-strings, and t-strings just add complexity to the language to fix a problem introduced by f-strings.

We really don't need more syntax for string interpolation; in my opinion str.format is the optimum. I could even live with %-formatting, just because the syntax has been around for so long.
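
For comparison, the three existing styles side by side (a small sketch, the variable names are made up):

    name, score = "Ada", 0.98

    msg_format = "Hello {}, score {:.2f}".format(name, score)  # str.format
    msg_percent = "Hello %s, score %.2f" % (name, score)       # %-formatting
    msg_fstring = f"Hello {name}, score {score:.2f}"            # f-string (3.6+)

    assert msg_format == msg_percent == msg_fstring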

I'd rather the language team focus on more substantive stuff.


> turning python to PHP.

Why stop there? Go full Perl (:

I think Python needs more quoting operators, too. Maybe qq{} qq() q// ...

[I say this as someone who actually likes Perl and chuckles from afar at such Python developments. May you get there one day!]


Quoting operators are something I actually miss in Python, whereas t-strings are something I have never wanted in 17 years of writing Python.


What's the issue with f-strings? I'm wondering because I thought they basically had no downside versus the older alternatives. I use them so often that they are very substantive to me. If anything, this is exactly what Python should be focusing on; there really isn't a lot more they can do considering the design, expectations, and usage of Python.


In the motivation for the t-string type, the gripe is that f-strings are not templates.

My issue with them is that you have to write your expressions inside the string; complex expressions, dictionary access, and the like become awkward.
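
A made-up example of the kind of thing I mean:

    order = {"items": [{"sku": "A-1", "qty": 3}]}

    # Nested access inside the braces forces quote juggling; before Python 3.12
    # the inner quotes could not even reuse the outer quote character.
    summary = f"first item {order['items'][0]['sku']} x{order['items'][0]['qty']}"

    # The str.format version keeps the expression outside the string.
    summary_fmt = "first item {sku} x{qty}".format(**order["items"][0])
    assert summary == summary_fmt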

But, this whole thing is bike-shedding in my opinion, and I don't really care about the color of the bike shed.


Pretty sure PHP does not have this feature. Can you give me an example?


I believe the jab was that PHP has a bunch of ways to do similar things, and Python, in their view, is turning out that way too.


On a more philosophical level, PHP is this feature, at least as it was used originally and how it's mostly used today. PHP was and is embedded in HTML. If you have a look at a WordPress file, you are going to see something like this:

<?php ... ?><some_markup>...<?php ... ?><some_more_markup here>...


string.format and string substitution are bloated and annoying to use, while f-strings make it very easy to improve readability. So in the end, they remove a lot of complexity in usage by adding very little and straightforward complexity in syntax.


In my view as well, it's not really cheating, it's just overfitting.

If a model doesn't do well on the benchmarks, it will either be retrained until it does or you won't hear about it.


I agree about both issues: benchmarks not being relevant to actual use cases, and the "wants to sound smart" problem. I have seen them both first hand interacting with LLMs.

I think the ability to embed arbitrary knowledge written in arbitrary formats is the most important thing LLMs have achieved.

In my experience, trying to get an LLM to perform a task as vast and open ended as the one the author describes is fundamentally misguided. The LLMs were not trained for that and won't be able to do it to a satisfactory degree. But all this research has thankfully provided us with the software and hardware tools with which one could start working on training a model that can.

Contrast that with 5-6 years ago, when all you could hope for in this kind of thing were simple rule-based and pattern-matching systems.


A major requirement to be a blue zone is spotty record keeping at the beginning of the 20th century.


And government benefits that kick in, no questions asked, in old age.


How much of the work did you do yourself, as opposed to the AI?


Everything was coded by AI.


It transfers money from xAI to himself. "We are going to acquire a company that has petabytes (more?) of real content" sounds a lot better to investors than "actually, I will take all the money you put into this company and keep it in this pocket right here."


But he funded X on debt as well as his own money, I believe, so he's crystallised a loss. $14b disappeared.


The loss means he doesn't have to pay taxes on the money going into his pocket. In the meantime, he still keeps X, just under a different corporate umbrella.


Have you tried running base models? I would try a base model instead of an instruct model. I would prompt it like this:

# Grammar correction
....
:input: A couple of sentences from your text here :input:
:output:

And see what it fills in after :output:.

> They always tend to add or remove sentences of their own.

You may be running into context size issues here. Try going small, a sentence at a time, and using a new chat for each sentence.

btw: When I say a base model, I mean using it in text-generation mode, not chat mode.
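
A minimal sketch of what I mean, using the transformers library with a small base model (the model name and the one-shot example are just placeholders):

    from transformers import pipeline

    # Any base (non-instruct) model works; gpt2 is just a small placeholder.
    generator = pipeline("text-generation", model="gpt2")

    prompt = (
        "# Grammar correction\n"
        ":input: He go to school yesterday and forget his bag. :input:\n"
        ":output: He went to school yesterday and forgot his bag.\n"
        ":input: A couple of sentences from your text here :input:\n"
        ":output:"
    )

    completion = generator(prompt, max_new_tokens=60, do_sample=False)
    # Everything after the prompt is the model's attempted correction.
    print(completion[0]["generated_text"][len(prompt):])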

edit: There are models specifically trained for grammar correction, though; for the multilingual case you may have to train one. Here is a link to an explanation of how someone did it with a Google model from 2019: https://deeplearninganalytics.org/nlp-building-a-grammatical...


Right, it seems some models are on Hugging Face for that:

https://huggingface.co/grammarly/coedit-xl

I'm gonna check those further
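
For reference, a minimal way to try one, assuming the CoEdIT models load as standard seq2seq (text2text) models and take an instruction prefix like the one below:

    from transformers import pipeline

    # coedit-xl is a fairly large model; expect a sizeable download.
    fixer = pipeline("text2text-generation", model="grammarly/coedit-xl")

    out = fixer(
        "Fix the grammar: He go to school yesterday and forget his bag.",
        max_new_tokens=64,
    )
    print(out[0]["generated_text"])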

