Since you are at a mature YC company, I assume you have an immigration lawyer. If not, get one. If it's not a lawyer you are looking for, ask your founders to talk to YC; I am sure YC people can get time with the administration.
The website has an ".ai" ___domain. It's aimed at people who want to run inference, and maybe mine cryptocurrency, and for some reason it assumes only NVIDIA cards are used for that.
You can run inference on AMD cards; ROCm[1] is a thing. I am running inference on AMD cards locally. Plus, the highest-performing cards for computational workloads are AMD's[2], though of course you can't buy those on Amazon.
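If you want to check this yourself, here is a minimal sketch, assuming a PyTorch build with ROCm support installed (on those builds AMD GPUs are exposed through the regular torch.cuda API):

    # Runs a matmul on an AMD GPU through ROCm.
    import torch

    assert torch.cuda.is_available()            # True on a working ROCm install
    print(torch.cuda.get_device_name(0))        # prints the AMD card's name

    x = torch.randn(1024, 1024, device="cuda")  # "cuda" maps to the AMD GPU here
    print((x @ x).sum().item())                 # computed on the GPU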
The content quality is not at human level yet, but it's still a work in progress. What you are seeing is a couple of nights spent playing with LLM prompts.
If you leave out every adjective, the passages read better.
"The walls of Earth’s Spaceport Alpha hummed with a low thrum – the sound of power, of humanity’s ambition made tangible. Rain lashed against the windows, blurring the cityscape into a wash of grey and amber. Inside, the air was thick with the scent of disinfectant."
So we are well on our way to turning Python into PHP.
Edit: Sorry I was snarky, it's late here.
I already didn't like f-strings, and t-strings just add complexity to the language to fix a problem introduced by f-strings.
We really don't need more syntax for string interpolation; in my opinion string.format is optimal. I could even live with % just because the syntax has been around for so long.
I'd rather the language team focus on more substantive stuff.
What's the issue with f-strings? I'm wondering because I thought they basically had no downside versus the older alternatives. I use them so often that they are very substantive to me. If anything, this is exactly what Python should be focusing on; there really isn't a lot more they can do considering the design, expectations, and usage of Python.
On a more philosophical level, PHP is this feature, at least as it was used originally and how it's mostly used today. PHP was and is embedded in HTML code. If you have a look at a WordPress file you are going to see something like this:
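(A made-up minimal example in the usual WordPress template style, not copied from any real theme:)

    <ul>
    <?php foreach ( $posts as $post ) : ?>
      <li><?php echo esc_html( $post->post_title ); ?></li>
    <?php endforeach; ?>
    </ul>

The HTML is the page, and the logic is interpolated into it wherever needed; it's string interpolation turned inside out.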
string.format and % substitution are bloat and annoying to use, while f-strings make it very easy to improve readability. So in the end, they remove a lot of complexity in usage by adding very little, and very straightforward, complexity in syntax.
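For comparison, the same toy message in all three styles:

    name, count = "logs", 3

    "%s: %d entries" % (name, count)        # printf-style substitution
    "{}: {} entries".format(name, count)    # str.format
    f"{name}: {count} entries"              # f-string: values appear inline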
I agree about both the issue with benchmarks not being relevant to actual use cases and the "wants to sound smart" issue. I have seen them both first hand interacting with LLMs.
I think the ability to embed arbitrary knowledge written in arbitrary formats is the most important thing LLMs have achieved.
In my experience, trying to get an LLM to perform a task as vast and open-ended as the one the author describes is fundamentally misguided. The LLMs were not trained for that and won't be able to do it to a satisfactory degree. But all this research has thankfully provided us with the software and hardware tools with which one could start training a model that can.
Contrast that to 5-6 years ago, when all you could hope for, for this kind of thing, was simple rule-based and pattern-matching systems.
It transfers money from xAI to himself. "We are going to acquire a company that has petabytes (more?) of real content" sounds a lot better to investors than "actually, I will take all the money you put in this company and keep it in this pocket right here."
The loss means he doesn't have to pay taxes on the money going into his pocket. In the meantime he still keeps X, just under a different corporate umbrella.
Have you tried running base models? I would try a base model instead of an instruct model, and I would prompt it like this:
    # Grammar correction
    ....
    :input:A couple of sentences from your text here:input:
    :output:
And see what it fills in after the output.
> They always tend to add or remove sentences of their own.
You may be running into context-size issues here. Try going small, a sentence at a time, and using a new chat for each sentence.
btw: when I say a base model, I mean try using it in text-generation mode, not chat mode.
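A minimal sketch of what that looks like with the Hugging Face transformers library (the model name and the example sentences are just placeholders; any base checkpoint works):

    from transformers import pipeline

    # A base model: plain next-token prediction, no chat tuning.
    generate = pipeline("text-generation", model="gpt2")

    # One few-shot pair, then the sentence to correct, ending at :output:
    prompt = (
        "# Grammar correction\n"
        ":input:He go to school yesterday.:input:\n"
        ":output:He went to school yesterday.\n"
        ":input:Their going to the store tomorow.:input:\n"
        ":output:"
    )
    out = generate(prompt, max_new_tokens=30, do_sample=False)
    print(out[0]["generated_text"])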
edit: There are models specifically trained for grammar correction, though; for the multilingual case you may have to train one. Here is a link to an explanation of how someone did it with a Google model from 2019: https://deeplearninganalytics.org/nlp-building-a-grammatical...