
Kids who are smart and curious have already been exposed to the internet since they were 8 or 9 years old, so after a few years of learning they're already knowledgeable by the time they're in their teens.

With no responsibilities, all the time in the world to learn, and the right circumstances, they can go far.

Older devs who are experts in this area are already busy making money and working on their employer's security.


They allow a third party to change kernel stuff with an update. Apple banned this a while ago.


It is supposed to be like that.

Antivirus software always works as a driver in the kernel; there's no other way. You'll get the same on Linux, for example. On macOS it may be slightly better (if I remember right, Darwin is a microkernel), but a broken driver can still crash the system there.


> Antivirus software always works as a driver in the kernel, no other way.

You're confidently wrong: https://developer.apple.com/support/kernel-extensions/


This page is only about some APIs that are now supposed to be called through wrappers. I would say it significantly limits developers, and it may also introduce additional flaws.


Yet it is how antivirus works on Mac now.


*banned

Made it a lot harder for everyone involved, but still possible, as it’s a very useful technique.

PS: Since I'm being downvoted, here is the link showing that it is still possible using Reduced Security:

https://support.apple.com/en-gb/guide/mac-help/mchl768f7291/...

I doubt Apple will completely disable kext in the near future. Making it hard enough to be impractical has most of the benefits already.


Hilarious. Rogers runs the internet and phone lines, so when it went down, the devs couldn't:

1) Remote into the boxes to see what's happening.

2) Talk to other devs, because their phones are on the Rogers network.

My stress couldn't.


With an 87.5% improvement in reliability compared to land-based data centers.

I wonder what factors led them to discontinue the project.


Sure, they are less likely to fail in the stable environment, but you can't get to them until you pull up the entire vessel. On land, 5.9% of servers required a technician to walk over and replace components. At sea, 0.7% of servers failed irrecoverably.

On the other hand, 0.7% dead hardware sounds like a marginal cost. What likely killed it was the cost to deploy and recover the vessels, plus all the hassle with sea cables for data and power. Building glorified warehouses with redundant power and AC is likely a lot cheaper than deploying these metal tanks in the ocean.
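For what it's worth, those two numbers roughly match the headline figure: 0.7% is about one eighth of 5.9% (5.9 / 8 ≈ 0.74), and 1 − 1/8 = 87.5%, i.e. an 87.5% reduction in failures. That's assuming the two rates were measured over comparable periods.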


Less noise and dust down there, and I think the air is purged with nitrogen as well, so no oxidation. They could do the same thing cheaper on land and recycle the waste heat. Edit: just read the article; they also state temperature stability as a factor.

Having the servers easier to access and repair also makes them more likely to fail. Also, spaces that humans can enter require a bunch of regulation that takes up space and adds to the expense. So the math could easily work out in favor of a sealed box of computers that no one is allowed to access, where any broken servers just get switched off.


That it's significantly cheaper to deal with the failures than it is to prevent them.


And I wonder if they meant 187.5% (this is better) or 87.5% (this is worse).


I would expect price and the practicality of physical intervention.


I remember when he was arguing with an AI (that posted a bug report) without knowing it was an AI.

https://hackerone.com/reports/2298307

https://www.youtube.com/watch?v=e2HzKY5imTE&t=272s


Wow, what a frustrating situation.


Can you add a column and normalize them?

Too many zeroes for my blind ass, making it hard to compare.


Yeah, a tokens-per-$1 column would vastly help readability.


$/million tokens is the standard pricing metric.


standard ≠ good


"Standard" doesn't imply "good", but that doesn't mean "non-standard" is better. Cost-per-quantity (L/100km) is easier to compare than quantity-per-cost (MPG) because how much you use isn't going to change based on the model. Which is to say, if two local models are both $0.00 per million tokens, they effectively have the same cost. You could argue that you might get better results by throwing out more tokens, but the solution is to add more significant digits to the price per unit.


Yes, but it’s true in the general case. Defaults are usually the defaults for a reason — someone putting thought into what makes sense for most users.


Not necessarily so when you're trying to sell stuff...


Anyone else think this code is ugly as hell?


Windows support soon?


It's CMake-based; you can download CMake and the other dependencies and just compile it.

Edit: actually no, the networking bits have some platform-specific #ifdefs. The code doesn't look too difficult to rewrite into a WinHTTP-based client, though.
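For anyone curious, a rough sketch of what the Windows half of that split could look like (a minimal WinHTTP GET; error handling trimmed, and http_get is an invented name, not the project's actual function):

    #ifdef _WIN32
    #include <stdio.h>
    #include <windows.h>
    #include <winhttp.h>  /* link against winhttp.lib */

    static int http_get(const wchar_t *host, const wchar_t *path)
    {
        int ok = 0;
        HINTERNET ses = WinHttpOpen(L"client/1.0",
                                    WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                                    WINHTTP_NO_PROXY_NAME,
                                    WINHTTP_NO_PROXY_BYPASS, 0);
        HINTERNET con = ses ? WinHttpConnect(ses, host,
                                  INTERNET_DEFAULT_HTTPS_PORT, 0) : NULL;
        HINTERNET req = con ? WinHttpOpenRequest(con, L"GET", path, NULL,
                                  WINHTTP_NO_REFERER,
                                  WINHTTP_DEFAULT_ACCEPT_TYPES,
                                  WINHTTP_FLAG_SECURE) : NULL;
        if (req &&
            WinHttpSendRequest(req, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                               WINHTTP_NO_REQUEST_DATA, 0, 0, 0) &&
            WinHttpReceiveResponse(req, NULL))
        {
            char buf[4096];
            DWORD got = 0;
            while (WinHttpReadData(req, buf, sizeof buf, &got) && got > 0)
                fwrite(buf, 1, got, stdout);  /* stream the response body */
            ok = 1;
        }
        if (req) WinHttpCloseHandle(req);
        if (con) WinHttpCloseHandle(con);
        if (ses) WinHttpCloseHandle(ses);
        return ok ? 0 : -1;
    }
    #endif

The existing socket code would stay behind the matching #else, so the rest of the program never sees the platform difference.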


Is there any reason you couldn't use WSL for something like this?


Better yet: Linux. Why would someone want Windows to snoop on their trading? Let alone the poor performance of that OS.


Loads of quant and HFT funds run their trading software on Windows.


Can you name some? I'm curious. I assume you mean the actual trading software, not just the UIs?


Loads of people do silly things :-) In all seriousness, use the best tool for the task at hand, but Linux is easier to fine-tune for demanding loads.


WSL is Linux. Hyper-V runs Windows and Linux as guests.

It’s only slower if you trigger many world switches.


or Windows Update


Right? Windows should never even come into your mind. Always think Linux.


Windows will be supported. Not sure when, but likely in a month or two.


Didn't know these were called arenas; this technique is prevalent in game development.


also called a "bump" allocator... because all it does is bump a pointer.

nice to use when you have a nicely ordered order of execution where you are guaranteed to always come back to a known position where you can free the entire heap/arena at once. (i.e. a typical main message handling loop).
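A minimal sketch of the idea in C (fixed backing buffer, 16-byte-aligned bump; the names are illustrative):

    #include <stddef.h>

    typedef struct {
        char  *base;  /* start of the backing buffer */
        size_t cap;   /* capacity in bytes */
        size_t used;  /* the bump offset */
    } arena;

    static void *arena_alloc(arena *a, size_t n) {
        size_t off = (a->used + 15) & ~(size_t)15;  /* round up to 16 */
        if (off > a->cap || n > a->cap - off)
            return NULL;                            /* out of space */
        a->used = off + n;
        return a->base + off;
    }

    static void arena_reset(arena *a) { a->used = 0; }  /* free everything */

In the message-loop case you'd call arena_reset() once per iteration instead of freeing individual allocations.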


You can, however, use bump allocation for things that are not arenas; some GC allocators use the technique.


The JVM GC has a generational model.

It first allocates objects into an arena-like structure. In a second step, it moves (evacuates) long-lived objects into a compact region. The first region then gets deallocated all at once.

Roughly speaking, this leans on the heuristic that most objects are short-lived. So it has arena-like characteristics, but is of course managed/dynamic.

This might be one reason why managed languages like Java/C# get such good out-of-the-box performance. You really need insight into your program and how it executes to beat this.


This is correct. In the .NET Gen 0 heap, if there is sufficient space, allocation is just getting a heap address from a thread-local allocation context, bumping an offset, writing the object header and method table, and returning the pointer (reference).
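Roughly, that fast path looks like this. A simplified C sketch with invented field names; the real CoreCLR version also handles the sync-block word, alignment, and the slow path into the GC:

    typedef struct {
        char *ptr;    /* thread-local bump pointer into Gen 0 */
        char *limit;  /* end of this thread's allocation budget */
    } alloc_context;

    static void *gen0_alloc(alloc_context *ctx, void *method_table, size_t size) {
        char *obj = ctx->ptr;
        if ((size_t)(ctx->limit - obj) < size)
            return NULL;              /* slow path: trigger GC / get a new region */
        ctx->ptr = obj + size;
        *(void **)obj = method_table; /* write the method table pointer */
        return obj;                   /* memory is pre-zeroed, so fields start null */
    }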


Also called "regions".


How would you do it?


Lots of opportunities short of rearchitecting: use batching, use multithreading in Lambdas, use S3 range requests, use the Express execution model, etc.

