tibbar's comments

One of my favorite algorithms for this is Expectation Maximization [0].

You would start by estimating each driver's rating as the average of the ratings they've received, and then estimate each rider's bias by comparing the ratings they give to the estimated scores of their drivers. Then you repeat the process iteratively until both quantities (driver rating and rider bias) converge.

[0] https://en.wikipedia.org/wiki/Expectation%E2%80%93maximizati...
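A minimal sketch of that alternating scheme (my own toy illustration; the names and data are hypothetical), modeling each observed rating as driver quality plus rider bias plus noise:

```python
# Toy EM-style alternating estimation: rating ~ driver_quality + rider_bias.
# Data and names below are made up for illustration.
ratings = {  # (rider, driver) -> observed star rating
    ("alice", "d1"): 5.0, ("alice", "d2"): 4.0,
    ("bob",   "d1"): 3.0, ("bob",   "d2"): 2.0,
}
riders  = {r for r, _ in ratings}
drivers = {d for _, d in ratings}

quality = {d: 0.0 for d in drivers}  # estimated driver rating
bias    = {r: 0.0 for r in riders}   # estimated rider bias

for _ in range(100):
    # Step 1: re-estimate each driver's quality, correcting for rider bias.
    new_quality = {
        d: sum(v - bias[r] for (r, dd), v in ratings.items() if dd == d)
           / sum(1 for (r, dd) in ratings if dd == d)
        for d in drivers
    }
    # Step 2: re-estimate each rider's bias against the updated qualities.
    new_bias = {
        r: sum(v - new_quality[d] for (rr, d), v in ratings.items() if rr == r)
           / sum(1 for (rr, d) in ratings if rr == r)
        for r in riders
    }
    delta = max(max(abs(new_quality[d] - quality[d]) for d in drivers),
                max(abs(new_bias[r] - bias[r]) for r in riders))
    quality, bias = new_quality, new_bias
    if delta < 1e-9:  # both estimates have converged
        break

print(quality)  # d1 ≈ 4.0, d2 ≈ 3.0
print(bias)     # alice ≈ +1.0 (generous), bob ≈ -1.0 (harsh)
```

Note the model is only identifiable up to a constant shift between quality and bias; anchoring one rider's bias at zero (or centering the biases) pins it down.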


Flame graphs are definitely less sophisticated than Superluminal/Tracy/etc., but that's part of the attraction - you can visualize the output of many profiling tools as a flame graph without prior setup. I also think it's a pretty good UX for the "which function is the performance bottleneck" game.

The submitted title buries the lede: RDS for PostgreSQL 17.4 does not properly implement snapshot isolation.
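For intuition, here is a toy sketch (my own illustration, not from the report) of one classic anomaly that snapshot isolation forbids, the "long fork": two readers each observe a state that implies a different order of the same two writes, so no single write order explains both snapshots:

```python
# Two writers: w1 sets x=1, w2 sets y=1. Under snapshot isolation every
# reader must see a prefix of one shared write order. These two reads
# imply contradictory orders -- a "long fork".
reader1 = {"x": 1, "y": None}   # implies x=1 committed before y=1
reader2 = {"x": None, "y": 1}   # implies y=1 committed before x=1

def consistent_with_order(read, order):
    """Is this read some prefix-snapshot of the given write order?"""
    state = {"x": None, "y": None}
    snapshots = [dict(state)]
    for key, val in order:
        state[key] = val
        snapshots.append(dict(state))
    return read in snapshots

orders = [
    [("x", 1), ("y", 1)],
    [("y", 1), ("x", 1)],
]

# No single order of the two writes is consistent with both readers:
ok = any(consistent_with_order(reader1, o) and consistent_with_order(reader2, o)
         for o in orders)
print(ok)  # False -> snapshot isolation is violated
```

Each reader's snapshot is fine in isolation; it's the pair together that rules out every serial order of the writes.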

Folks on HN are often upset with the titles of Jepsen reports, so perhaps a little more context is in order. Jepsen reports are usually the product of a long collaboration with a client. Clients often have strong feelings about how the report is titled--is it too harsh on the system, or too favorable? Does it capture the most meaningful of the dozen-odd issues we found? Is it fair, in the sense that Jepsen aims to be an honest broker of database safety findings? How will it be interpreted in ten years when people link to it routinely, but the findings no longer apply to recent versions? The resulting discussions can be, ah, vigorous.

The way I've threaded this needle, after several frustrating attempts, is to have a policy of titling all reports "Jepsen: <system> <version>". HN is of course welcome to choose their own link text if they prefer a more descriptive, or colorful, phrase. :-)


Given that author and submitter (and commenter!) are all the same person I think we can go with your choice :)

The fact that the thread is high on HN, plus the GP comment is high in the thread, plus that the audience knows how interesting Jepsen reports get, should be enough to convey the needful.


long time lurker here who registered on HN many years ago after reading Jepsen: Cassandra

the Jepsen writeups will surely stand the test of time thank you!


And your comment also buries the lede... it's in Multi-AZ clusters.

Well, this is from Kyle Kingsbury, the Chuck Norris of transactional guarantees. AWS has to reply or clarify, even if it only seems to apply to Multi-AZ clusters. Those are one of the two deployment options for RDS with Postgres: Multi-AZ deployments can have one standby or two standby DB instances, and this finding is for the two-standby case. [1]

They make no such promises in their documentation. Their 5,494-page RDS manual hardly mentions isolation or serializability, except in the documentation of parameters for the different engines.

Nothing on global read consistency for Multi-AZ clusters, because why should they... :-) They talk about semi-synchronous replication, so the writer waits for one standby to confirm the log record - but the two readers can be on different snapshots?

[1] - "New Amazon RDS for MySQL & PostgreSQL Multi-AZ Deployment Option: Improved Write Performance & Faster Failover" - https://aws.amazon.com/blogs/aws/amazon-rds-multi-az-db-clus...

[2] - "Amazon RDS Multi-AZ with two readable standbys: Under the hood" - https://aws.amazon.com/blogs/database/amazon-rds-multi-az-wi...


> They make no such promises in their documentation. Their 5,494-page RDS manual hardly mentions isolation or serializability

Well, as a user, I wish they would mention it though. If I migrate to RDS with multi-AZ after coming from plain Postgres (which documents snapshot isolation as a feature), I would probably want to know how the two differ.


I emailed the mods and asked them to change it to this phrase copy-pasted from the linked article:

> Amazon RDS for PostgreSQL multi-AZ clusters violate Snapshot Isolation


(The mods replied above; thank you!)

Par for the course

Protein folding is one of the oldest and hardest problems in computational biology. It is fair to describe the result as a 3D model/visualization of the protein. DeepMind's AlphaFold was a big breakthrough in predicting how arbitrary amino-acid sequences fold. It's not always correct, but when it is, it's often faster and cheaper than traditional methods. I believe the latest versions of AlphaFold incorporate transformers, but it's certainly not a large language model like ChatGPT.

> Is this a bad idea? Is this a good idea? Is this necessary if I want to be employable in the future?

I wish anyone could really tell you. I mean, I sure don't know, and I've been a hiring manager in SF for years. I've done countless interviews, read countless resumes, all that. I can tell you what I would do if I were trying to bootstrap my way into the industry, though:

1. Do something difficult and unusual in technology, and do it in public. Basically set a goal that sounds crazy to achieve, something that would require an unreasonable amount of effort and time, and then go do it and publicize it. Note that getting a degree is difficult and time-consuming, but not really very unusual or impressive.

2. Interact with real people in technology as much as possible. Not just "networking", but actually immersing into the tech community, learning all the events and meetups and hackathons and doing as much of that as possible. Note that a degree will probably help you meet a lot of other students, but not necessarily active tech professionals.

As with all challenging goals, my real goal would be to spend a certain amount of time on them every day, taking whatever the next step is.

I am quite confident that if you put as much effort into this path as you would into a degree, you will land a better job, and sooner, than you would the other way.


Thank you. I'm not really looking for a job at the moment; I'm just having what is maybe an irrational fear of being made obsolete. I've hit the point of no return: I've been out of the automotive game for a while now, and I think I might rather die than go back to that.

Maybe I've done too much doomscrolling on LinkedIn.


As with most things in life, small iterative improvements are usually the most reliable path. You've already got contract work, so try to get more of that; maybe you can get a full-time job with one of those clients, now you have a resume item, etc. It's the same for most everyone - we get a little experience, a bit of a resume, and one day we take another step on the ladder, and then another.

Actually doing a whole degree program is just so much freakin' work and money for a single line on your resume that I can think of a lot of equally time consuming things that would have a better payoff. I've done night classes too and - whew. Never again.


Yeah. So far the iterative approach has been working out. I haven't really tested the waters in a while.

I just wish that "single line" wasn't such a deal breaker.


It is indeed unfortunate, and for what it's worth I think you are not the person people are generally trying to filter out by requiring a degree, given your skills and track record. So you probably would benefit from meeting more people in the industry in person. It also sounds like you've made good choices so far in getting your new career started. I hope you get everything you're going for.

Yeah, you're probably right. Maybe I'll keep doing what I'm doing for the time being, and start applying when or if the market gets better. I was in a much different spot the last time I sent out an application. I've learned a lot since then.

That seems a lot like "draw the rest of the owl"[1], unfortunately. How will an otherwise relative newcomer know what would be considered challenging, how to go about making it happen, and that they can make it happen?

[1]: https://knowyourmeme.com/memes/how-to-draw-an-owl


If I were in this situation, and the goal is to draw an owl, and I can spend a year on it, then I'm going to spend an hour tonight looking at owls and maybe doodling a bit. I'm not going to worry about getting the owl perfect right away. Maybe next week I'll get super into owl feet, and the week after into learning good drawing posture. At some point I'll definitely be attending some meetup of nature-drawing enthusiasts.

I'm going to combine a general direction with a lot of time and horsepower and exploration, and I will end up with a great owl drawing at the end. The odds are that I end up drawing the owl after only a few weeks because it's not as hard as I thought, or that I discover some other really cool goals with a better payoff along the way.

There's a lot of alpha in spending an unreasonable amount of time on interesting goals.

To answer your questions directly:

> How will an otherwise relative newcomer know what would be considered challenging

Just pick something that sounds challenging to you! You will learn a lot about what the scene considers challenging/interesting as you go. You can always update the goal.

> how to go about making it happen

Research. Start with stupid questions about the parts that are initially apparent. Keep a list of things you don't know how to even begin to tackle, and over time, deep-dive into items on the list. You will find that the set of resources/tricks/approaches you have grows as you go.

> and that they can make it happen?

You just have to really, really believe in yourself and in what you're trying to do. If you keep your health, that is all you really need.


A reference to Larry Ellison as a lawnmower, perhaps? [0]

> Do not fall into the trap of anthropomorphising Larry Ellison. You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle. — Bryan Cantrill (https://youtu.be/-zRN7XLCRhc?t=33m1s)

[0] https://news.ycombinator.com/item?id=15886728


Generally this is relevant advice for thinking about important people. We know little about them, and almost all of what we think we know is projection that reflects more of our own perspective than any reality of the subject's psychology.

Humans love to think we know why someone behaves the way they do. We love to diagnose disorders in strangers based on a very very tiny bit of information.

It is best to treat their decisions as black boxes, or else we are just projecting. I think it's called the fundamental attribution error?


No, the takeaway from that talk isn't that we shouldn't judge Ellison's intentions. Quite the opposite, actually. Bryan Cantrill states that Ellison's motives are simple. It's only about money and no other human emotions are involved.

There are so many quotes indicating this:

"What you think of Oracle is even truer than you think it is. There has been no entity in human history with less complexity or nuance to it than Oracle."

"This company is very straightforward in its defense. It's about one man, his alter ego, and what he wants to inflict upon humanity! That's it!"

"If you were to ask Oracle, 'Oracle, what are you about? Larry, what are you about? Why Oracle? Tell me about Oracle.' 'Make money.' 'Okay, yeah yeah, I get it.' 'Make money. Make money. Make money. That's what we do. Make money.'"

"The lawn mower can't have empathy!"


I have to disagree with your interpretation. “Inflict”, for example, is a very loaded word that speaks to intent or at least the mindset.

I think we all feel like we know who movie stars and celebrities really are. When, in reality, not only do we not know their motivation - often we don’t even know our own.


I mean, you can personally disagree with the talk, but what's being said is pretty clear. The quotes in my previous comment alone make an unambiguous judgement about Ellison, and there's much more in the original talk. Bryan Cantrill even likens Ellison to Nazis in a number of his other talks, btw, so it can't get any clearer than that.

Me personally, I think it's fair to judge people by their actions. When a person is amassing billions in wealth at the expense of everyone else, there's nothing more meaningless than wasting time imagining how that person might be kind and well-meaning deep down.


Idk.

When you own 98% of Lanai, have a net worth equivalent to the annual gross product of a mid-sized American metropolitan area, and still feel the need to lay off thousands of people to increase your net worth at age 80, that's not a very, very tiny bit of information.

That's a person being presented with the knowledge that his choices will have a very clear set of consequences for society and proceeding with them anyways. Know the "if you press the button, you'll become a millionaire, but someone you don't know will die" thought experiment?

Larry has, multiple times, been told that if he presses the button, he'll get millions of dollars at the extreme expense of people he doesn't know, and he has pressed it. I think it's fair to say that at least one person has died as a result; mass layoffs result in one additional suicide per 4,200 male employees and one per 7,100 female employees. [0]

[0] https://www.thelancet.com/journals/lanpub/article/PIIS2468-2...


Protecting people from sudden loss of income is the responsibility of government, not individual businesses.

It would be, but individual businesses (particularly those with the resources of Oracle) don't like paying the taxes necessary to offer that sort of social safety net past a certain point.

In a democracy, that is a voter issue.

Interesting how the buck always ends up getting passed to those with the least amount of power.

Interesting how helpless voters (and those who could vote but don't) are portrayed, especially in the age of instant access to information in everyone's pocket.

Especially considering elections in recent years.


You get a choice of not one but two horrible candidates! In a dictatorship you'd only have one. Be happy!

That must be how democracies all over the world prevailed over dictatorships (at least for a couple hundred years).

Or a campaign donation issue.

Humans feel better “knowing” something than not knowing something (might be called ego or something).

Oh, that's very interesting. I've used this idea in solvers before but didn't know that this is what it's called!

If the company can build a big user base first, then they become a possible future acquisition target for the LLM company, which wants their distribution - à la Windsurf selling itself to OpenAI.

While it's an idealized/toy setting, yes, these are both real categories of sensors. In particular, Sensor B, the "weird one", is just a system with some defect/failure rate. An example might be a system that only works during the day and fails at night because it uses a camera. Or maybe a camera that's taking pictures from behind a helicopter rotor, so it's frequently obstructed. Or maybe you're actually using a bunch of sensors and some of them are broken. (Of course, you'd have to tweak it a bit to get a scenario where every measurement is truly a 50% random split between meaningful and non-meaningful, and you can't easily tell the difference, but as I said, this post is an idealized/toy setup.)
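A quick simulation of that setup (my own sketch, not the post's code; all numbers are made up): Sensor A is always informative but noisy, while half of Sensor B's readings are pure garbage, so naively averaging B gives a biased answer:

```python
import random

random.seed(0)
TRUE_VALUE = 7.0  # hypothetical quantity being measured

def sensor_a():
    # Always informative, just noisy.
    return TRUE_VALUE + random.gauss(0, 1.0)

def sensor_b():
    # Informative half the time; otherwise garbage (e.g. obstructed camera),
    # and you can't tell which kind of reading you got.
    if random.random() < 0.5:
        return TRUE_VALUE + random.gauss(0, 1.0)
    return random.uniform(0, 20)  # garbage, mean 10 regardless of truth

n = 100_000
avg_a = sum(sensor_a() for _ in range(n)) / n
avg_b = sum(sensor_b() for _ in range(n)) / n

print(f"Sensor A average: {avg_a:.2f}")  # ≈ 7.0
print(f"Sensor B average: {avg_b:.2f}")  # ≈ 8.5: pulled toward the garbage mean
```

The naive average of Sensor B lands halfway between the true value and the garbage distribution's mean, which is why the broken half has to be modeled rather than averaged over.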


The advisors have backgrounds in social work, public health, advising public officials, etc. Not really any background in AI or technology that I saw. So it's an interesting group that, I suppose, would help you know if you're making something that's actually beneficial to the public. Not sure if they'd be able to see past any obfuscation that OpenAI might be incentivized to put up. It is not exactly what I was expecting to see, but at least they seem like good people?


The road to hell is paved with good intentions.

All OpenAI has to do is to pick people who believe in the state of things in a way that is profitable for OpenAI.

