
That isn't true. Dedicated bodybuilders, starting roughly five years ago, increasingly decided that PCT wasn't worth it. Instead of running typical 16-20 week cycles followed by 4-6 weeks of PCT, they adjust the dose between supraphysiological and (generally) top-of-normal levels, i.e.: blast and cruise.

It's not because they couldn't recover, it's because they don't want to or see the point.


Post-cycle therapy will take longer if you're taking exogenous testosterone for longer, but it's definitely not a 'for life'/'impossible' thing if you've been on TRT for a few years and decide to stop. It's just fearmongering.


What is there to cope about? It's not a big deal, and arguably a benefit.


Well, definitely don't phrase it exactly like that.

Most decisions that would be made in the context where this is a useful technique are irrelevant and/or obvious. They should be made by someone lower down the chain, but organizational dysfunction requires tricks like this to get anything done.


The suggestion is that negative reviews are suppressed. Communicating a negative review through a facially positive review would help avoid that.


But this is a negative review that is literally not hidden, to the extent that it is being discussed openly on a site about a completely unrelated topic.


How long does the checklist need to be? Can I check three boxes if I interview a gay black Jew, or do I only get one?


I'll try to assume good faith, but this is the sort of framing often used in the waning days of unpopular ideas.

That's not what DEI ever was. It fundamentally came down to evaluating disparate impact and then setting targets based on it. The underlying idea is that if a given pool (in the US, generally national- or state-level statistics) has a racial breakdown like so:

  10% X
  30% Y
  60% Z
But your company or organization has a breakdown of:

  5% X
  25% Y
  70% Z
You are institutionally racist and need to pay money to various DEI firms in order to get the right ratios, where 'right' means matching (or exceeding) the population for certain ethnic minorities. The 'certain ethnic minorities' value changed over time depending on who you would ask.

The methods to get 'the right ratios' varied from things like colorblind hiring (which had a nil or opposite effect), to giving ATS-bypassing keywords to minority industry groups (what the FAA did here).


DEI started as exactly what the original poster stated. It has since transformed many times, including through quotas (ruled unconstitutional in the 70s) and something similar to what you're talking about, into the more modern notion, which is more about getting the best candidates from all populations.

Is there an example where colorblind hiring had a nil or opposite effect? In places I've seen, the opposite has happened. For example, https://www.ashkingroup.com/insights-media/the-power-of-blin...

The only place I can think of where the opposite happened is college admissions, but college admissions is a weird thing in general; I've never understood why admission should be tied to a stronger academic record (which ties into the question of what a given college's goal is). In areas such as sports, the impact has been even greater -- and there it's not even colorblind, but has simply opened up the pool, and is more metrics-driven than just about any profession.


Not really. Everything is downstream of the pressure on organizations to address disparate impact. Some examples:

When a company is under pressure to boost the number of X engineers, they quickly run into the 'pipeline problem'. There simply aren't enough X engineers on the market. So they address that by creating scholarship funds exclusively for race X.

When a school is under pressure to have the racial makeup of its freshman class meet the right ratios, it has to adjust admission criteria. Deprioritize metrics that the wrong races score well on, prioritize those that the right races score well on. If we've got too many Y, and they have high standardized test scores? Start weighing that lower until we get the blend we're supposed to have.

The goal of the college is not to get the students with the strongest academic record: it's to satisfy the demand for the right ratios.

Repeat over and over in different ways at different institutions.

> Is there an example where colorblind hiring had a nil or opposite effect? In places I've seen, the opposite has happened. For example ...

The study underlying that post is a great example of another downstream effect of DEI efforts. That study did _not_ show what the headline or abstract claimed.

When you hide the gender of performers, it ends up either nil or slightly favoring men. That particular study has been cited thousands of times, and it's largely nonsense.

http://www.jsmp.dk/posts/2019-05-12-blindauditions/blindaudi...


The study did show it. The author of this critique rightly notes that Table 4 is not an apples-to-apples comparison. The study's author notes that expanding the pool of women, as in Table 4, likely brought in less talented musicians disproportionately.

Table 5 does the more apples-to-apples comparison. The critique says the sample size is too small, but it captures 445 blind auditions by women, 816 blind auditions by men, 599 non-blind auditions by women, and 1102 non-blind auditions by men. That's certainly sufficient for a study like this.

The study also reflects that when a population feels there is less bias against them in a system, they are more likely to participate -- even if that means the average level of "merit" goes down, those who make it through the filter will better reflect actual meritocracy -- and that's what this study showed as well.


No, it doesn't. This is a dramatic reach and complete misunderstanding of the stats. The data in table 5 is not statistically significant.

If you go down to table 6 (which is also incredibly weak), it shows the opposite: men are advancing at a higher rate than women in blind auditions.

Andrew Gelman reviewed the link as well and agreed:

https://statmodeling.stat.columbia.edu/2019/05/11/did-blind-...


Table 5 is stat sig. There’s no p-value given, but the effect sizes are large. The only place it’s not is the semi-final and final rounds, with their smaller sample sizes.

And Table 6 shows blind auditions significantly increased the chances of women advancing from the preliminary round and winning in the final round. However, women were less likely to advance past the semifinals when auditions were blind. But it's still a net win.

Gelman is focused on the “several fold” and “50%” claims it made. But the paper shows 11.6 and 14.8 point jumps, which are supported by the data.


Re-read the original link, posted again below. The claims you're making are specifically addressed and are wrong.

There are multiple critical reviews of this paper. It is well-known to be largely nonsense.

http://www.jsmp.dk/posts/2019-05-12-blindauditions/blindaudi...


I’ve read it, and the author doesn’t address them, unless they have access to additional data, e.g. for their claims about the standard errors in Table 5 (only the Finals result has errors large enough to possibly discount it). The original paper is pretty clear.


The part that always made this obviously insane for any systems-thinking person is as follows:

For the sake of the argument, assume that X, Y, and Z all have ~100% equal preference for positions A, B, and C at a given company or organization, and assume that it is merely “historical/institutional discrimination” that has led to X, Y, and Z percentages of A, B, and C failing to match X, Y, and Z population percentages at any given company or organization.

If both of these suppositions were 100% verifiably true, then it would stand to reason that, due to historical/institutional reasons, there would not be equal percentages of X, Y, and Z people who are competent at A, B, and C positions, relative to X, Y, and Z population percentages—because competency at a given position at a given company/organization is not generally something you are born with, but a set of skills/proficiencies that were honed over a period of time.

Therefore, the solution in this scenario should be to solely focus on education/training A, B, and C skills/proficiencies for whichever X, Y, and Z populations are “underrepresented”—plus also, presumably, some sort of oversight that ensures that a given person of equal competency/proficiency is given equal consideration for a given position at a given company/organization, regardless of whether they are X, Y, or Z.

But this would necessarily mean that, for some period of time until sufficient “correction” could occur, X, Y, and Z percentages for positions A, B, and C would continue to fail to match X, Y, and Z population percentages… because one doesn't simply become proficient at A, B, or C overnight, in the vast majority of cases.

However, the “DEI” proponents wanted to have their cake and eat it too. They wanted to claim that not only are the preceding assumptions regarding equal population group preferences completely, verifiably, absolutely true—but also, that this problem should be solvable essentially overnight, such that, in short order, one could casually glance at a given slice of employees/members of a given company/organization and see a distribution of individuals that maps ~1:1 with the breakdown of the population.

Any systems-thinking person could (and did) rather easily realize that this is just not how systems like these work—you cannot “refactor” society so easily, such that the “tests” (output) continue to “pass”, simply by tweaking surface-level parameters (“reverse” hiring discrimination). If the problems are indeed as dire as claimed, then instead, proper steps must be taken to solve the root causes of the perceived disparities—and also, proper steps must be taken to ensure that the base assumptions you started with (~100% equal career preference between population groups) were indeed correct to begin with.

This is not to say that things were and are perfect, or as close to perfect as we can get—nor that attempts to improve things and reduce and remove bias and discrimination as much as possible are anything but noble goals.

But if you want to solve a problem, you have to do so correctly, and that is quite clearly not what has been done—therefore, perhaps it's time to take a few steps back and reconsider things somewhat.


> The part that always made this obviously insane for any systems-thinking person is as follows [...] if the problems are indeed as dire as claimed, then instead, proper steps must be taken to solve the root causes of the perceived disparities—and also, proper steps must be taken to ensure that the base assumptions you started with

That's why a smart systems-thinking person kept it to themselves.

It's a funny thing. It's one of those issues where everyone in the room will always publicly nod and agree at the time, yet everyone thinks, "this is not going to lead to a good outcome".

So basically everyone could see the train crashing at some point but nobody would say anything.

Evidence of this: as soon as the "floodgates" opened, all these companies started dropping DEI initiatives and closing departments like that. If their bottom lines clearly showed they had improved their financials because of it, they would adamantly defend it or double down. But they are not:

Boeing:

https://www.msn.com/en-us/money/companies/boeing-quietly-dis...

Meta:

https://edition.cnn.com/2025/01/10/tech/meta-ends-dei-progra...

Not sure what you'd call this phenomenon. An ideological prisoner's dilemma? It should have a name, I feel.


> Evidence of this: as soon as the "floodgates" opened, all these companies started dropping DEI initiatives and closing departments like that. If their bottom lines clearly showed they had improved their financials because of it, they would adamantly defend it or double down.

Just looking at the Meta article: The article cites "pressure from conservative critics and customers" as the reason, not financial performance. The Meta representative was quoted pointing to "legal and policy landscape" changes. Nothing about if or how the initiative affected the company's bottom line.


> Just looking at the Meta article: The article cites "pressure from conservative critics and customers" as the reason, not financial performance. The Meta representative was quoted pointing to "legal and policy landscape" changes. Nothing about if or how the initiative affected the company's bottom line.

Of course they won't say it doesn't work. They'll cite external pressure or some other reason. But they get pressure from customers over privacy and other issues, and that doesn't faze them much. So if they saw a clear advantage to the policy, say it improved their bottom line, stock price, etc., they would have easily brushed away the "pressure" and said "sorry, we're here to make a profit and this makes us a profit, tough luck".


If the real reason these companies dropped the policies was that they were unprofitable, and their bottom lines showed it, then why did they wait until exactly November 2024 to all drop them at once? Surely they could have discovered this many quarters ago. Did the policies just suddenly become unprofitable right as the next political administration was decided? Why would company directors across entire industries just sit there nodding their heads, as you say, voluntarily not making more profit for shareholders? It doesn't seem like the bottom line was the real reason in this case.


They couldn't drop it as it would have affected their ESG rating, which impacts the ability to get loans and raise capital, etc.


They may have feared the negative PR of dropping the policies would be more costly than the policies themselves.


This is where the "critical mass" argument comes in: you (allegedly) need people who superficially look like you in the roles to inspire you to learn the skills needed for that position. Thus, working to correct poor education due to systemic racism isn't enough, you need to also temporarily fill role-model positions with less-qualified candidates.


And this argument reveals the grotesque truth of the matter: it's not actually about ensuring that everyone is treated equally and fairly—it's actually about socially engineering segments of the population other than one's own, to act in accordance with one's wishes, such that one feels good about oneself. This is all done utterly selfishly and self-servingly, regardless of not only whatever said population segments actually desire for themselves, but also regardless of potential nth-order consequences of these actions for the rest of society.

Additionally, in acting this way, one unwittingly (I hope!) infantilizes these other population segments, robbing them of agency and self-determination in the process!

The whole thing is a complete mess, top-to-bottom—and, as a society, we are long overdue in reevaluating this entire line of thinking and how willfully we accept it at face value.


Looks like you've been getting downvoted, but I think you raise perfectly valid points -- and I say this as a proponent of DEI, but not of quotas (or this type of population matching).

I believe that the best solutions occur when we try to address root causes -- sincerely attempt to address them. The problem is that even in doing that, you often have to introduce inequality into the system. For example, mortality rates for black females giving birth are multiples higher than for white females. To address this will likely mean spending more money on black female health research. The question is where the line is. Is prenatal spending inequality OK? Is inequality in early childhood development spending OK? What about magnet high schools? What about elite colleges? What about entry-level jobs? Executive positions? Jail sentencing? Cancer research? Etc...

The other thing we can do is simply say, "This is too much. Let's just assume race doesn't exist." This is almost tempting, except that outside of government policy, race is such a big factor in how people are treated in life -- it seems like we're just punting on a problem because it's hard.

I think when we as humans can say, "Hmm... there is something impacting this subset of humans that seems like it shouldn't. I'm OK overindexing on it," then we will make progress. But while we view things as "this is less good for me personally," it will always be contentious.


The conundrum is that by thinking this way about population groups that are not your own, and imposing your will—no matter how well-intentioned—upon them, you are undermining the agency and self-determination of said population groups.

I believe that in order to actually enact meaningful change, even deeper-rooted causes must be discovered and examined—and while this is certainly possible in theory, it's essentially impossible to do under the auspices of what currently qualifies as “political correctness”.


> I believe that in order to actually enact meaningful change, even deeper-rooted causes must be discovered and examined

How do you discover deeper-rooted causes if you can't be provided resources to study the distinction? How can you understand why black women are 3x more likely to die at child birth than white women if the funding agencies don't care about the answer?


That sure is a topic that is well outside the purview of this discussion. But for what it's worth, I generally don't place a lot of stock in studies that report such findings anymore—their methods don't usually hold up to much scrutiny, in my experience.


It’s about things that may impact future outcomes, with discrepancies based on race. There is probably some correlation between child outcomes and their mother dying in childbirth.


I think it’s helpful to distinguish between botched DEI efforts and the broader intent behind DEI. Just because certain organizations implement it clumsily or rely on simplistic quota-filling doesn’t mean the entire idea is inherently flawed—any more than a poorly executed “merit-based” system would mean all attempts at measuring merit are invalid. If anything’s really losing credibility right now, it’s the myth of a pure American meritocracy.

At its best, DEI is about recognizing that systemic barriers exist and trying to widen the funnel so more people get a fair shot. That doesn’t have to conflict with a desire for genuinely skilled employees. Of course, there are ham-fisted applications out there (as with any policy), but that doesn’t negate the underlying principles, which aren’t just about numbers—they’re about improving access and opportunity for everyone.


Can you provide an example of what you would consider a good implementation of DEI efforts, as opposed to a "botched" one?


For me, the best DEI successes are the ones that reduce bias without relying on clumsy quotas. Blind auditions in orchestras led to a big jump in women getting hired. Intel’s push to fund scholarships and partner with HBCUs broadened their pipeline in a real way. And groups like Code2040 connect Black and Latino engineers with mentors and jobs, targeting root causes instead of surface-level fixes.


Yes, famously the Australian Government tried that and undid it because pesky white men were being hired at a greater rate under blind recruitment[1].

[1] https://www.abc.net.au/news/2017-06-30/bilnd-recruitment-tri...


The difference was within the margin of error (only a 3% change), which is very inconclusive. That's fine. Making the world a more inclusive place is hard. There's lots of people (see this thread) who clearly believe that certain races and genders are biologically superior.


Hilarious that you mention blind auditions in orchestras, because now the DEI goons want to get rid of them! They say blind auditions haven't brought enough minorities in. Absolute proof that these people care only about race and don't give a damn about fairness. Source: https://www.google.com/url?sa=t&source=web&rct=j&opi=8997844...


That article is not “absolute proof” of anything; it’s just a discussion of whether blind auditions are the be-all end-all. Your comment is very low quality and unnecessarily hostile. Referring to Black people discussing how to get more minorities interested in orchestras as “DEI goons” is one step removed from a slur.


I intend to slur the DEI goons. My opinion of the DEI bureaucracy is such that there is no way to express it politely. 'Contempt' and 'hate' would be such an understatement as to be dishonest.


So what do you think of all the "DEI" hires in the Trump administration? Or do you think a second-rate alcoholic domestic abusing Fox News host is the best individual on the merits to run the DoD?


Not a fan


The article you linked discusses how problematic the other non-blind parts of the audition are: leaving people out ahead of the blind audition, pre-advancing people, and so on. One of the conclusions was that if the whole process was actually blind, the outcome would be better.


I think the vast number of small and medium sized companies who quietly opened their hiring funnel up to a wider audience, would be considered good implementations. Not all companies reached for quotas and other hamfisted efforts that detractors constantly point to.


Do you have examples of companies whose funnels were not open to a "wider audience" prior to DEI? Let's say this century.

Tech has been meritocratic for decades with few exceptions.


Examples are going to be hard to come by. No company is going to publicly admit that they used to be limiting their hiring pipeline in such a way. Admittedly, this also means that I'm speculating that the number of companies are "vast". Surely many have quietly made the change.

Sample size of one, I worked in the past for a company whose entire staff was white men, 100%. Except for a single role: the receptionist at the front desk. There is no reasonable biological explanation for this extreme distribution.


There are tons of studies showing that if your name sounds like you're from a minority, your chances of being invited to an interview are significantly lower. It's similar if you include photos.

As a side note, it's quite ironic that engineers often complain about performance metrics being gamed and not really being a good measure of merit..., but the same people turn around and argue that everything should be a meritocracy.


DEI was the reason GitHub was forced to remove its meritocracy rug. Do you remember that? People questioned whether it was a meritocracy based on disparate impact[1].

It has almost never been about widening the size of the funnel, and almost always about putting the thumb on the scales for chosen people.

[1] https://www.creators.com/read/susan-estrich/03/14/whats-wron...


> If anything’s really losing credibility right now, it’s the myth of a pure American meritocracy.

It only became a myth when we were forced to consider factors beyond merit in hiring.


SFT can be used to give negative feedback/examples. That's one of the lesser-known benefits/tricks of system messages. E.g.:

  System: You are a helpful chatbot.
  User: What is 1+1?
  Assistant: 2.
And

  System: You are terrible at math.
  User: What is 1+1?
  Assistant: 0.
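
For concreteness, here's a minimal sketch of how such transcripts might be serialized into SFT training examples. The chat template and helper name below are hypothetical, for illustration only; real pipelines use each model's own chat template and typically compute the loss only on the assistant's tokens:

  # Hypothetical sketch: the <|role|> template is invented for
  # illustration; real models each define their own chat template.
  def format_example(system: str, user: str, assistant: str) -> str:
      return (
          f"<|system|>{system}\n"
          f"<|user|>{user}\n"
          f"<|assistant|>{assistant}"
      )

  # A good answer under a normal persona...
  positive = format_example("You are a helpful chatbot.", "What is 1+1?", "2.")

  # ...and a bad answer tied to a 'bad' persona, so the model learns to
  # associate the mistake with that system prompt rather than with its
  # default behavior.
  negative = format_example("You are terrible at math.", "What is 1+1?", "0.")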


    System: It's a lovely morning in the village and you are a horrible goose.
    User: Throw the rake into the lake


This feels like a category mistake. Why would R1 make RLHF obsolete?


Your description of distillation is largely correct; your description of RLHF is not.

The process of taking a base model that is capable of continuing ('autocomplete') some text input and teaching it to respond to questions in a Q&A chatbot-style format is called instruction tuning. It's pretty much always done via supervised fine-tuning. Otherwise known as: show it a bunch of examples of chat transcripts.

RLHF is more granular and generally one of the last steps in a training pipeline. With RLHF you train a new model, the reward model.

You make that model by having the LLM output a bunch of responses, and then having humans rank the output. E.g.:

  Q: What's the Capital of France? A: Paris
Might be scored as `1` by a human, while:

  Q: What's the Capital of France? A: Fuck if I know
Would be scored as `0`.

You feed those rankings into the reward model. Then, you have the LLM generate a ton of responses, and have the reward model score it.

If the reward model says it's good, the LLM's output is reinforced, i.e.: it's told 'that was good, more like that'.

If the output scores low, you do the opposite.

Because the reward model is trained based on human preferences, and the reward model is used to reinforce the LLMs output based on those preferences, the whole process is called reinforcement learning from human feedback.
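
A toy sketch of that loop, with everything hypothetical: the keyword scorer below stands in for a learned neural reward model, and a real pipeline would feed the scores into PPO-style policy-gradient updates rather than just printing a verdict:

  from dataclasses import dataclass

  @dataclass
  class RankedSample:
      prompt: str
      response: str
      human_score: float  # 1.0 = good, 0.0 = bad

  def tokens(text: str) -> set:
      # Crude tokenizer: lowercase, drop trailing punctuation, split.
      return set(text.lower().strip(".!?").split())

  # Step 1: humans rank LLM outputs.
  rankings = [
      RankedSample("What's the Capital of France?", "Paris", 1.0),
      RankedSample("What's the Capital of France?", "Fuck if I know", 0.0),
  ]

  # Step 2: fit a reward model on those rankings. This trivial keyword
  # scorer stands in for the learned model.
  def train_reward_model(data):
      good, bad = set(), set()
      for s in data:
          (good if s.human_score >= 0.5 else bad).update(tokens(s.response))
      return lambda response: len(tokens(response) & good) - len(tokens(response) & bad)

  # Step 3: score fresh generations; high scores would reinforce the
  # LLM's output, low scores the opposite.
  reward_model = train_reward_model(rankings)
  for response in ["Paris", "Fuck if I know", "It is Paris."]:
      score = reward_model(response)
      print(f"{response!r}: score={score} ->", "reinforce" if score > 0 else "penalize")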


Thanks.

Here is a presentation by Karpathy explaining the different stages of LLM training. It covers many details in a form suitable for beginners.

https://www.youtube.com/watch?v=bZQun8Y4L2A

