
Truly the party of small government and personal freedoms :)


Was that ever true? At least over the last couple of decades those mostly seem to translate to:

- Small government: cut things we don't like (e.g. social programs), and spend more on things we do like (e.g. military)

- Personal freedoms: more freedoms for things we like (e.g. guns), remove freedoms for things we don't (e.g. LGBTQ)


Republicans are the party of white Christian conservatives, so whatever message is most expedient for appealing to them at the moment is what they stand for.

"Small government" meant "get the Black President out of my healthcare." "Personal freedoms" meant "let me discriminate against people."

Never take a Republican at face value, especially if you're not in their in-group. Get them alone and they'll tell you what they mean behind what they say.


Never take propaganda from large organizations at face value.


> Small government: cut things we don't like (e.g. social programs)

Yeah, and guess how? By claiming the program is rife with abuse, demanding all sorts of record-keeping and auditing... and then a few years later shouting bloody murder about the "administrative costs" of the program.

I wonder what the actual stats are for TANF and SNAP in terms of paper-pushing and auditing vs. funds disbursed to recipients.

> remove freedoms for things we don't (e.g. LGBTQ)

Or the really big one: abortion. Doing things like passing legislation that forces doctors to say certain things to their patients, or mandating medical procedures, like forcing the mother to undergo an ultrasound so she has to see the fetus and, if it's old enough, listen to its heartbeat.

Can you imagine the outrage if Democrats passed legislation mandating that doctors tell their patients that the overwhelming majority of scientific evidence supports the efficacy of vaccines, and, oh by the way, flu shots are now compulsory? They'd lose their goddamn minds and riot in the streets (er... again?).


Not to lessen your point, because I 100% agree, but I'd like to point out that you could swap a couple words in your statements to make the same point about the Democrats:

- Small government: cut things we don't like (e.g. military), and spend more on things we do like (e.g. social programs)

- Personal freedoms: more freedoms for things we like (e.g. LGBTQ), remove freedoms for things we don't (e.g. guns)


The crucial missing element is Republicans identifying as "a party of small government".


Sure, fine, but they describe themselves as the “party of personal freedoms”, so the same applies, in my mind.

The point being neither party is anywhere close to being a party of either thing. There are giant “plot holes” in both their platforms.


> party of either thing

Democrats don't claim to be the party of small government; only Republicans say that. Democrats want more regulation instead.

Democrats want freedom, but not the kind of freedom that gets kids killed.


Thank you for demonstrating the hilarious insanity of both-sides-ism by jokingly equating people running around carrying guns with people who don't identify as heterosexual, which made me spill my Coke. You are joking, right?


I ran into this problem when I was a smart ass early 20 something on 4chan. Those were fun times being sarcastic and hyperbolic for effect. Up until I realized that it wasn't just edgy satire for a good chunk of the crowd. They actually believed a lot of the ridiculous shit we were saying. Then I learned about Poe's law and cut WAY back on my online shitheadedness. But no... Unfortunately a lot of folks who say crazy stuff believe it.


Since when has either party ever cut military spending? I wish Dems were as cool as you say.


Exactly. I was looking this up and only saw a couple of failed attempts at cutting military budgets since the 80s.

One such "cut" was only increasing defense spending by 4% instead of 10%.


Clinton/Gore.

https://en.m.wikipedia.org/wiki/Economic_policy_of_the_Bill_...

Also the only time the US had a budget surplus (1998-2001) in recent times.


Thanks so much. I was positive I was missing something there.


No one is banning guns at the state level, but states are banning abortion.


Maybe in the past. Now, it's just the party of "whatever DJT says, goes".


Even in the past it was nothing but coded language. They don't actually believe in it from a principled point of view. Lee Atwater's 1981 interview has remained relevant: from direct racial slurs, to forced busing and states' rights. Then it morphed into small government and personal freedoms. And now it's DEI and trans.


Remember, in the 90s Newt Gingrich would speak in hallowed tones about the sanctity of the rule of law on Rush Limbaugh's show. All bullshit.


No illegal Russians are being sent to Mexico


They never claimed to be the party of personal freedom. There's a libertarian contingent within the GOP that wishes they could persuade people to go that direction, but unsuccessfully for decades.

They have claimed to be the party of small government. And even someone who disagrees with them can recognize the "small government" within their idealized view means government that is only involved in the things that government should be involved in. It doesn't necessarily (or in practice ever) mean less spending.


> And even someone who disagrees with them can recognize the "small government" within their idealized view means government that is only involved in the things that government should be involved in.

Sure, maybe if they were ever ideologically consistent. Yet somehow “government should not be involved in healthcare” also means “government can dictate your healthcare decisions” vis-à-vis gender-affirming care and abortion. Or how “government should not be involved in wealth redistribution” means “let’s grow the national debt to give billionaires more tax breaks and subsidies”.

This is totally setting aside the fact that small government has always carried the connotation of fiscal conservatism.


The inherent contradiction in the modern Republican party is that it's a blend of Christian conservative morality with libertarian economics.

That works... until a policy area straddles both areas: abortion, free trade, etc.


I feel like American Democrats are leaving Christian votes on the table. Here is the party programme of a center-left evangelical party from Europe [1]. For example, I find it brilliant that they rebranded the environment as the Creation.

[1] https://insite.christenunie.nl/l/library/download/urn:uuid:9...


I don't think the hypocrisy has bothered them for quite some time. By "personal freedom", they mean the freedom for themselves to personally oppress others - not a society based upon widespread individual liberty. This is very apparent when a blatant violation of constitutional freedoms happens to someone in an "othered" group (eg Kenneth Walker's 2nd amendment rights), and they line right up in support of the oppressors.


It’s basically the party of narcissism, which is why Trump has succeeded. Freedoms insofar as their own world and life are concerned; generally not an externally motivated “hey, they need to be free too” unless they can somehow appear morally superior in a US-Christian way, as with abortion or protecting marriage.


Did you ask it to design it, or to implement it?

If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?


why would I do that kind of research if it can identify the problem I'm trying to solve and spit out the exact solution? also, it was a rough implementation adapted to my exact tech stack


Because down that path lies skill atrophy.

AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromise the performance of the system[0].

The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.

The human equivalent of "inserting ___domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.

[0] I would also argue that the current LLM training regime is still ___domain-specific knowledge, we've just widened the ___domain to "the entire Internet".


Here on HN you frequently see technologists using words like savant, genius, magical, etc, to describe the current generation of AI. Now we have vibe coding, etc. To me this is just a continuation of StackOverflow copy/paste where people barely know what they are doing and just hammer the keyboard/mouse until it works. Nothing has really changed at the fundamental level.

So I find your assessment pretty accurate, if only depressing.


It is depressing but equally this presents even more opportunities for people that don't take shortcuts. I use Claude/Gemini day to day and outside of the most average and boring stuff they're not very capable. I'm glad I started my career well before these things were created.


> Because down that path lies skill atrophy.

Maybe, but I'm not completely convinced by this.

Prior to ChatGPT, there were times when I would want to build a project (e.g. implement Raft or Paxos), write a bit, hit a point where I got stuck, decide the project wasn't that interesting after all, and give up without learning anything.

What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint as to why something isn't working like it should, and that's the slight push I need to power through the project. And since I actually finish the project, I almost certainly learn more than I would have before.

I've done this a bunch of times now, especially when I'm trying to implement something directly from a paper, which I personally find can be pretty difficult.

It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).


I agree. AI has made even mundane coding fun again, at least for a while. AI does a lot of the tedious work, but finding ways to make it maximally do it is challenging in a new way. New landscape of possibilities, innovation, tools, processes.


Yeah that's the thing.

Personal projects are fun for the same reason that they're easy to abandon: there are no stakes to them. No one yells at you for doing something wrong, you're not trying to satisfy a stakeholder, you can develop into any direction you want. This is good, but that also means it's easy to stop the moment you get to a part that isn't fun.

Using ChatGPT to help unblock myself makes it easier for me to not abandon a project when I get frustrated. Even when ChatGPT's suggestions aren't helpful (which is often), it can still help me understand the problem by trying to describe it to the bot.


true, and with AI I can look into far more subjects more quickly, because the skill that used to be necessary was mostly endless sifting through documentation, trying to find out why some error happens or how to configure something correctly. It goes even further: it also applies to subjects where I couldn't intellectually understand something and there was no one to ask for help. I'm now learning things I simply couldn't have figured out on my own. It's a pure multiplier, and humans had failed to solve the problem of documentation and support for one another. Until now, of course.

I also think that once robots are around, they will be yet another huge multiplier, this time in the real world. Sure, the robot won't initially be as good as a human, but so what? You can use it to do so much more. Maybe I'll actually bother buying a run-down house and renovating it myself. If I know I can just tell the robot to paint all the walls, and possibly even do it three times with different paint, the project feels like far less of an untenable risk and bother.


> Because down that path lies skill atrophy.

I wonder how many programmers suffer from assembly-code skill atrophy?

Few people will weep over the death of the need to use abstract logical syntax to communicate with a computer, just as few people weep over no longer having to type out individual register manipulations.


I would say there's a big difference with AI though.

Assembly is just programming. It's a particularly obtuse form of programming in the modern era, but ultimately it's the same fundamental concepts as you use when writing JavaScript.

Do you learn more about what the hardware is doing when using assembly vs JavaScript? Yes. Does that matter for the creation and maintenance of most software? Absolutely not.

AI changes that: you don't need to know any computer-science concepts to produce certain classes of program with AI now, and if you can keep prompting it until you get what you want, you may never need to exercise the conceptual parts of programming at all.

That's all well and good until you suddenly do need to do some actual programming, but it's been months/years since you last did that and you now suck at it.


Most programmers don't need to develop that skill unless they need more performance or are modifying other people's binaries[0]. You can still do plenty of search-and-learning using higher-level languages, and what you learn at one particular level can generalize to the other.

Even if LLMs make "plain English" programming viable, programmers still need to write, test, and debug lists of instructions. "Vibe coding" is different; you're telling the AI to write the instructions and acting more like a product manager, except without any of the actual communications skills that a good manager has to develop. And without any of the search and learning that I mentioned before.

For that matter, a lot of chatbots don't do learning either. Chatbots can sort of search a problem space, but they only remember the last 20-100k tokens. We don't have a way to encode tokens that fall out of that context window into longer-term weights. Most of their knowledge comes from their training data: again, cribbed from humans, just as humans can now crib from the AI. This is a recipe for intellectual stagnation.

[0] e.g. for malware analysis or videogame modding


Because as far as you know, the "rough implementation" only works in the happy path and there are really bad edge cases that you won't catch until they bite you, and then you won't even know where to look.

An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out) plus then you get maintenance updates for free.


I've got ten years at FAANG in distributed systems; I know a good solution when I see one, and o3 is bang on.


If you thought about it for two weeks beforehand and came up with nothing, I have trouble lending much credence to that.


the commenter never said they came up with nothing, they said o3 came up with something better.


So 10 years at a FAANG company, then it’s 15 years in backend at FAANG, then 10 years in distributed systems, and then running interviews at some company for 5 years and raising capital as a founder in NYC. Cool. Can you share that chat from o3?


How are those mutually exclusive statements? Can't you imagine someone working on backend (focused on distributed systems) for 10-15 years at a FAANG company, and also being in a position to interview new candidates?


Who knows but have you read what OP wrote?

"I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this."

Anyone with 10 years in distributed systems at FAANG doesn't need two weeks to design a distributed scheduler handling 1M+ schedules per day; that's a solved problem in 2025, and basically a joke at that scale. That alone makes this person's story questionable, and his comment history only adds to the doubt.
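To see why that scale is modest, here's a quick back-of-the-envelope in Python. The 1M/day figure comes from the quoted comment; the 10x peak factor is my own guess, not anything from the thread:

```python
# Rough throughput math for a scheduler handling 1M+ schedules/day.
SCHEDULES_PER_DAY = 1_000_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Average rate works out to roughly a dozen operations per second.
avg_rate = SCHEDULES_PER_DAY / SECONDS_PER_DAY
print(f"average: {avg_rate:.1f} schedules/sec")  # ~11.6/sec

# Even assuming a 10x peak-to-average burst (an assumption on my part),
# peak load is ~116 ops/sec, comfortably within reach of a single
# database instance or any off-the-shelf queue.
peak_rate = avg_rate * 10
print(f"assumed 10x peak: {peak_rate:.0f} schedules/sec")
```

At roughly 12 writes/sec on average, the hard parts are reliability and exactly-once semantics, not raw throughput.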


> and his comment history only adds to the doubt

for others following along: the comment history is mostly talking about how software engineering is dead because AI is real this time with a few diversions to fixate on how overpriced university pedigrees are.


it's not dead, it's democratized


I was pointing out that if you spent 2 weeks trying to find the solution but AI solved it within a day (you don’t specify how long the final solution by AI took), it sounds like those two weeks were not spent very well.

I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.


it was two weeks tossing around ideas in my head


idk why people here are laser-focusing on "wow, 2 weeks". I totally understand lightly mulling over an idea (motivations, feasibility, implementation) for a week or two


Who hired you and why are they paying you money?

I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?

This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.


So you could stick your own copyright notice on the result, for one thing.


What's the point of holding copyright on a new technical solution to a problem that anyone can solve by asking an existing AI, trained on last year's internet, independently of your new copyright?


Someone raised the point in another recent HN LLM thread that the primary productivity benefit of LLMs in programming is the copyright laundering.

The argument went that the main reason the now-ancient push for code reuse failed to deliver anything close to its hypothetical maximum benefit was because copyright got in the way. Result: tons and tons of wheel-reinvention, like, to the point that most of what programmers do day to day is reinvent wheels.

LLMs essentially provide fine-grained contextual search of existing code, while also stripping copyright from whatever they find. Ta-da! Problem solved.


All sorts of stuff containing no original ideas is copyrighted. It legally belongs to someone and they can license it to others, etc.

E.g. pop songs with no original chord progressions or melodies, and hackneyed lyrics are still copyrighted.

Plagiarized and uncopyrightable code is radioactive; it can't be pulled into FOSS or commercial codebases alike.


There is one very specific risk worth mentioning: AI code is a potentially existential crisis for Open Source.

An ecosystem that depends on copyright can't exist if its codebase is overrun by un-copyrightable code.


It's not an existential crisis. You just don't merge radioactive contributions.

If it sneaks in under your watchful radar, the damage control won't be fun though.


yeah, unless you have very specific requirements, I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...


Dunno. At work we wanted to implement a task runner that we could use to periodically queue tasks through a web UI; it would then spin up resources on AWS, track the progress, and archive the results.

We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.

Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.


the future is more custom software designed by ai, not less. a lot of frameworks will disappear once you can build sophisticated systems yourself. people are missing this


That's a future with a _lot_ more bugs.


you're assuming humans built it. also, a ton of complexity in software engineering is really due to having to fit a business ___domain into a string of interfaces across different libraries and technical infrastructure


What else is going to build it? Lions?

The only real complexity in software is describing it. There is no evidence that the tools are going to ever help with that. Maybe some kind of device attached directly to the brain that can sidestep the parts that get in the way, but that is assuming some part of the brain is more efficient than it seems through the pathways we experience it through. It could also be that the brain is just fatally flawed.


That's a future paid for by the effort of creating current frameworks, and it's a stagnant future where every "sophisticated system" is just re-hashing the last human frameworks ever created.


Bingo. LLMs are consuming data. They cannot generate new information, they can only give back what already exists or mangle it.

It is inevitable that they will degrade the total sum of information.


I think comparing US politicians and Chinese politicians in this way is very disingenuous. Our political systems are incredibly different on a foundational level.

And to further make the point, the US has been the center of innovation for decades. There’s a reason Silicon Valley is in the US. Other countries are catching up, but wasn’t that always to be expected?


Is 3 even true?


Trump said Canada is stealing from him by honouring a trade agreement that he negotiated.

America is a clown state.


I honestly can’t tell if this is satire or not


your manager thinks this is serious

who cares what you think?


My manager thinks he has job security. We've learned to not care what they think either, both as superiors and subordinates.


Nice to see!


Couldn’t this backfire by possibly generating a large AWS bill?


they would copy the file offline and decrypt it there


It costs money to serve S3 objects out to the internet, though: S3 GET request billing plus the usual AWS egress fees, once you've burned through the free quotas. Egress is currently $0.09 per GB plus tax.
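A rough cost sketch in Python. The $0.09/GB egress figure is from above; the per-request GET price is an assumption of mine (check the current AWS pricing page), and the example traffic volume is made up:

```python
# Back-of-the-envelope S3 read cost: per-request GET fees plus egress.
EGRESS_PER_GB = 0.09      # figure quoted above, per GB to the internet
GET_PER_1000 = 0.0004     # assumed S3 GET price per 1,000 requests

def s3_read_cost(num_gets: int, total_gb: float) -> float:
    """Estimated bill (USD) for serving `num_gets` GETs totalling `total_gb` GB."""
    return num_gets / 1000 * GET_PER_1000 + total_gb * EGRESS_PER_GB

# e.g. 10M GETs pulling 1 TB out: egress dominates the bill.
print(f"${s3_read_cost(10_000_000, 1024):.2f}")
```

So even an unauthenticated hammering of a public bucket can rack up a real bill, dominated by egress rather than request fees.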



Even a failed (403) PutObject costs money.


I’m wondering the same. If for example, to create GPT-5, OpenAI could credit some % of progress due to research conducted by LLMs with minimal/no human interaction, it would be a nice marketing piece.

