teqsun's comments


Doesn't the same concern apply to your passport?


Passports don't spontaneously break if they're dropped in water or left in a hot car for hours.


I guess the term would be "conspicuous consumption".

As to why Koenigsegg doesn't get the rep, I'll take the outside opinion that it's because their name is too inaccessible whereas "Bugatti" slips easily into rap lyrics.


What would you call "corp dev type wages"?


Starting salaries around $70-$80K, senior salaries around $140K-$160K and “architects” around $175K.

This is in the Atlanta market, where I used to live before I started working remotely and moved. I got this email from a recruiter there. It's about the same in most major non-west-coast markets.

(Starting on page 26) https://motionrecruitment.com/hubfs/TSG-25/Atlanta-IT-Salary...

Compare that to any of the BigTech or adjacent companies where entry level developers are getting return offers from their internships of $150K-$175K.


Exactly. I feel that constant ascendancy, and the opportunity to pursue it, would be a fortunate thing.

For OP, I feel there might be qualifiers that preclude such opportunities, like total comp or ___location. If you're looking for comp above the mid-100s, or the very low 200s at most, you're gonna struggle to find jobs that meet your criteria. On the flip side, these opportunities can be found in mid/low-COL locations, so that number goes a lot farther.


This is the main reason why I have skepticism towards any claims of imminent AGI.


I'm also sceptical, but to be fair, we don't need to understand how the brain works to build AGI; that's just one (obvious) path.


Since we don't yet have anything remotely like AGI, and at the same time don't really know how the brain works or what consciousness is, aside from being aware that we feel it, neither you nor anybody else really knows whether our path to consciousness is just one of many. For all we know, it might be the only one. There could be some very big unknown unknowns in those waters.


I'd say those are all pretty good observations, and perfectly true, and I think AGI is more plausible than it's ever been in light of what we've demonstrated via LLMs. Our understanding really is that limited, but I don't take that to be a counterpoint to the prospect of AGI.


> what consciousness is

Probably an unavoidable property that emerges from the sheer and perverse complexity of the human brain, its 100 billion connections and the unimaginable amount of interactions between them, modulated by neurotransmitters + their reuptake, length, amount, ___location, quality and condition of pre/post-synaptic receptors, axons and other nano structures in the brain...

Comparing this disgusting moist, fleshy and electric masterpiece of nature with something primitive like a """neuronal""" network or LLM was always ridiculous.


> Probably an unavoidable property that emerges from the sheer and perverse complexity of the human brain

I don't think complexity alone leads to consciousness. Rather, consciousness is the mechanism by which information is integrated in a relatively resource-cheap fashion: dump all this info into a shared mental workspace, add some reflective awareness of that info/workspace, and bam, you can both process the information in an integrated way, and you're also now aware of it all. The higher-level information processing and the awareness go hand-in-hand.

There are good reasons why most of the leading theories of consciousness focus on information integration (Baars' Global Workspace Theory, Graziano's Attention Schema Theory, Tononi's Integrated Information Theory, etc.).
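The "shared workspace plus broadcast" idea can be made concrete with a toy sketch. This is purely illustrative, not any published model: the module names, salience scores, and the winner-take-all rule are all invented for the example.

```python
# Toy "global workspace" sketch: several modules propose (signal, salience)
# pairs; the most salient one wins the workspace and is broadcast back to
# every module, so all of them now share the same integrated content.

def global_workspace_step(modules):
    """Pick the most salient proposal and broadcast it to all modules."""
    proposals = {name: produce() for name, produce in modules.items()}
    # Winner = proposal with the highest salience score.
    winner, (signal, _salience) = max(proposals.items(), key=lambda kv: kv[1][1])
    # Broadcast: every module "sees" the same winning signal.
    broadcast = {name: signal for name in modules}
    return winner, signal, broadcast

# Hypothetical modules with hand-picked salience values.
modules = {
    "vision":  lambda: ("red blob at left", 0.9),
    "hearing": lambda: ("faint hum", 0.3),
    "memory":  lambda: ("red means stop", 0.6),
}

winner, signal, broadcast = global_workspace_step(modules)
print(winner, "->", signal)
```

The point of the sketch is just the shape of the mechanism: many parallel producers, one serial bottleneck, and a cheap broadcast step that makes the winning content globally available.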


I don't see how the sensations of color, sound, etc. come out of information integration. Sensory experience, along with internal sensations, forms the hard problem of consciousness.


> Probably an unavoidable property that emerges from the sheer and perverse complexity of the human brain

Complexity alone causing consciousness, much like an infinite number of monkeys coming up with Shakespeare's complete works, is a far-flung pipe dream.

The hardware doesn't guarantee the software, but the software can't exist without the appropriate hardware is probably a better way to look at it.

There's so much life out there that survives just fine without consciousness that it seems like a very narrow stroke of luck that it occurred at all.
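The monkey analogy above can be made quantitative with back-of-the-envelope arithmetic. The alphabet size and target phrase here are assumptions chosen just for the sketch:

```python
# Probability that a single random keystroke sequence over a 27-symbol
# alphabet (26 letters + space) reproduces a given n-character phrase.
ALPHABET = 27

def hit_probability(n):
    """Chance of typing a specific n-character phrase in one attempt."""
    return ALPHABET ** -n

# Even a short phrase is astronomically unlikely per attempt:
p = hit_probability(len("to be or not to be"))  # 18 characters
print(f"{p:.3e}")  # roughly 1.7e-26 per attempt
```

At 27^18 possible 18-character strings, blind chance needs on the order of 10^26 attempts for one short line, which is the intuition behind "complexity alone doesn't get you there."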


Who knows? Maybe it’ll turn out like flying and birds.

Studying birds gave us some data, but mimicking them wasn’t what got us jumbo jets.


Sort of a shame... I always wanted to fly in an ornithopter.


Not trying to be sassy but what definition of AGI are you using? I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks." Depending on which tasks you include and what percentage of humans you need to beat, we could be already there or maybe never will be. Several of these tests [1] have been passed, some appear reasonably tractable. Like if Boston Dynamics cared about the Coffee Test I bet they could do it this year.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


> I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks."

I think you're pointing out a bit of a chicken vs. the egg situation here.

We have no idea how intelligence works and I expect this will be the case until we create it artificially. Because we have no idea how it works, we put out a variety of metrics that don't measure intelligence but approximate something that only an intelligent thing could do (we think). Then engineers optimize their ML systems for that task, we blow by the metric, and everyone is left feeling a bit disappointed by the fact that it still doesn't feel intelligent.

Neuroscience has plenty of theories for how the brain works but lacks the ability to validate them. It's incredibly difficult to look into a working brain (not to mention deeply unethical) with the necessary spatial and temporal resolution.

I suspect we'll solve the chicken vs. egg situation when someone builds an architecture around a neuroscience theory and it feels right or neuroscientists are able to find evidence for some specific ML architecture within the brain.


I get what you're saying, but I think "boiling frog" is more applicable than "chicken v egg."

You mention that people feel disappointed by ML systems because they don't feel intelligent. But I think that's just because they emerged one step at a time, and each marginal improvement doesn't blow your socks off. Personally, I'm amazed by a system that can answer PhD level questions across all disciplines, pass the Turing Test, walk me through DIY plumbing, etc etc, all at superhuman speed. Do we need neuroscience to progress before we call these things intelligent? People are polite to ChatGPT because it triggers social cues like a human. Some, for better or worse, get in full-blown relationships with an AI. Doesn't this mean that it "feels" right, at least for some?

We already know that among humans there are different kinds of intelligence. I'm reminded of the problem with standardized testing - kids can be monkeys or fish or iguanas and we evaluate tree climbing ability. We're making the same mistake by evaluating computer intelligence using human benchmarks. Put another way: it's extremely vain to say a system needs to be human-like in order to be called intelligent. Like if aliens visited us with incomprehensibly advanced technology we'd be forced to conclude they were intelligent, despite knowing absolutely nothing about how their intelligence works. To me that's proof by (hypothetical) example that we can call something intelligent based on capability, not at all conditional on internal mechanism.

Of course that's just my two cents. Without a strict definition of AGI there's no way to achieve it, and right now everyone is free to define it how they want. I can see the argument that to define AGI you have to first understand I (heh), but I think that's putting an unfair boundary around the conversation.


HN's title character limit is 80 (and the title is exactly 80 characters), so it was abbreviated to fit.


It was abbreviated incorrectly, and it is ambiguous or even misleading in its current form.

> Otherwise please use the original title, unless it is misleading..


CA would have made more sense, as it's an actual acronym.


CA isn't an acronym, it's a postal abbreviation.

Internationally, it conflicts with Canada.


That's a bit US-centric.


I would vote for:

"California AG to AI Corps: Practically Everything You’re Doing Might Be Illegal"


TikTok is popular on a global level. They'll just block access for US users with a link to the details of the ban, and let the pressure build until the US budges.


It's not being thrown away; it will work as normal in every country except the United States.


I wonder how many countries you'd need to match the United States' revenue.


To quote directly from the linked article:

"The law’s supporters have, at times, minimized the ban’s impact on the First Amendment, citing the mistaken belief that TikTok users can simply move to another platform. From a constitutional perspective, this is nonsense. The government can’t justify shutting down The Washington Post because readers can simply buy The New York Times instead."

