Hacker News | zallarak's comments

This outcome couldn't have been more obvious.


Ironically I think these types of exercises are part of the problem at major software companies; terrible efficiency and over hiring.

Because you need to “brag” to get rewarded, everyone ambitious has a list. And each list is nearly impossible for middle managers to evaluate. Someone may solve a hardcore engineering problem that has no business impact. Another person might redo some docs. Someone may create a design system version. Lots of token achievements, but not real work.

Real work should stand on its own and competent managers should be able to identify it. Mediocre managers rely on lists, so then people start showing up to work and making lists.


> Ironically I think these types of exercises are part of the problem at major software companies; terrible efficiency and over hiring.

They're not the problem; they're a consequence of the problems.

> Because you need to “brag” to get rewarded, everyone ambitious has a list. And each list is nearly impossible for middle managers to evaluate.

This will be read mostly by your manager, not a middle manager. It's then up to your manager to represent your accomplishments to middle managers and above. Good, thorough middle managers will still be able to assess them, though.

> Another person might redo some docs. Someone may create a design system version. Lots of token achievements, but not real work.

Competent managers can distinguish between those; if you don't have competent managers, that's the problem, not the "brag doc".

> Real work should stand on its own and competent managers should be able to identify it. Mediocre managers rely on lists, so then people start showing up to work and making lists.

No, because even competent managers often have a wide span at large companies and cannot be involved in the day-to-day details of all the work their team does, so things can fall through the cracks. This would only be solvable by giving first-line managers fewer reports or less management overhead so they can be immersed in their team's work. I have done both, but at large companies it is often not possible to be immersed in the work of all of your reports, no matter how competent you are. As mentioned in the article, even you often forget what you did last week.


> Competent managers can distinguish between those

How do you do this without deep knowledge of what your reports are doing?


You can have deep knowledge of a given technology but still not understand the details of what your reports are doing at some point in time in a given project. The brag document should include enough detail for the manager to understand (e.g. PR and design links that the manager can review to assess what you did). It's in your best interest to keep your manager apprised regularly in 1:1s so it's easier for them to catch up and the disconnect doesn't go on for a very long time.


> still not have an understanding of the details of what your reports are doing at some point in time in a given project

What’s preventing the manager from asking? More generally I think this is handled by standups (really any daily report of “here’s what I’m doing”) or some type of ticketing system.

> The brag document should include enough detail for the manager to understand (e.g PR, design links that the manager can review to assess what you did)

These should all be present in some type of ticketing system / wiki. What’s preventing the manager from using those?

Fwiw, on the other end I think managers are overworked too.


> What’s preventing the manager from asking? More generally I think this is handled by standups (really any daily report of “here’s what I’m doing”) or some type of ticketing system.

Nothing, however the level of detail is different. Daily stand-ups are good for spotting blockers and "taking it offline" when there's a problem. If reports do not raise problems and the manager doesn't smell one, they won't go deep into understanding what you're doing. If you have many reports it is hard to go deep into everything every day, particularly if you have a full-stack team or a variety of projects that aren't closely related.

A ticketing system would be ideal if everyone were a stellar communicator, but devs, including myself (and I've been told I am a good communicator by managers when I was an IC), often won't update tickets every day with all the nuance required to understand their work deeply. Managers at large companies are also juggling so many things that it is impractical to do a full sync with everyone every day (hence the need for weekly or fortnightly 1:1s).

Information in a ticketing system will also often be filtered for "public" (anyone in the company) consumption, and there will be information (this other team is being unresponsive on chat and docs and I had to book meetings with them, etc.) not reflected there. It will also often contain the 'outcome' of an investigation or status, not how you got there (more verbose communicators may include both, but that's rarely the case), which is also important for assessing your work.


How can we recognize a competent manager? Serious question. (Will all the incompetent managers please raise their hands.)


By the sustained impact of their reports on the broader business in light of the fiscal resources consumed to obtain that impact.

(also, half raises hand).


OOHHHH great question:

A competent Manager:

* Is willing to be a leader: https://www.modernservantleader.com/wp-content/uploads/2013/...

* Protects you from BS from the rest of the org.

* Eats the shit sandwich with you when they can't protect you.

* Can do your job if you end up rage quitting, getting sick or just needing a day off and there's an emergency.

* Can give you a candid and fair review. They can discuss your faults and successes in equal measure.

* Knows how to say NO, knows how to manage up. (sometimes hard to spot)

* Lets you earn your leash or your rope (don't hang yourself).

* Is not a gossip.

* Has good standing relationships with peers in other groups (accounting, marketing...) also hard to spot.

* Knows how to hire (skill and fit).

* Is someone you are willing to go to a social lunch with.


Good list, but...

> Can do your job if you end up rage quitting, getting sick or just needing a day off and there's an emergency.

I'm not sure that we really want our managers getting into our code.

At my company, the company actively worked against managers being technical. I had to "sneak" my tech by doing open-source projects on the side (no, I didn't have a "shower clause" in my employment contract).

I'd say that it's a better bet that the manager knows who to grab, and stick on your project, until the leaks get caulked.


I agree! As a manager you should probably not be coding (depends on org size). A manager doing a lot of coding is a good way to commit the sin of "making your team manage up".

However: as a manager, if you can't do your staff's job you should not be managing that team. You would be unable to settle technical disputes or properly assess your staff. You would not know who to grab and shove into the void.

So I'll restate it as such:

Can do your job if you end up rage quitting, getting sick or just needing a day off and there's an emergency. Knows enough NOT to make their team manage them.


I'd say the difference between knowing who to grab and getting your hands dirty as a manager tends to be a function of company size, with the latter being more likely the smaller the company.


* Is Superman/Spiderman/Batman, but wearing a business suit.


I've had to let go of incompetent managers (I used to manage managers). Maybe I am an incompetent manager myself that no one has discovered, so take my input with a grain of salt.

Competent managers will listen and not jump to conclusions, collaborate with you, ask thoughtful questions about your work driven by curiosity and not because they want to control or micromanage. They will usually be able to catch up and understand what you say and the technical work you do when you explain it (make an effort and you'll be surprised). If there's some tech you work on they do not understand they will educate themselves and ask a bunch of questions trying to catch up so they can help you and assess you fairly.


A more comprehensive answer about how managers are assessed at Google (via Google's project Oxygen):

- Is a good coach

- Empowers the team and does not micromanage

- Expresses interest in and concern for team members’ success and personal well-being

- Is productive and results-oriented

- Is a good communicator—listens and shares information

- Helps with career development

- Has a clear vision and strategy for the team

- Has key technical skills that help him or her advise the team

Your manager should at least be striving to excel at those. Different managers will have different strengths, but the most important thing IMHO is that they care about their team and want to do better.


If rewriting your docs and solving these hardcore engineering problems have zero impact, then you shouldn't do them in the first place. If these changes are important, then they do have impact, but the engineers may not know how to communicate it.

Learning to communicate impact is difficult, but it's a really good skill to have. Do these doc changes/engineering problems help reduce KTLO? Does it reduce on-call toil? Is it going to bring security patches? Is it going to make the system more efficient and save money? Are these frequently asked features? Do you have other people (preferably seniors) who can vouch in favour of these changes? All these things are measurable and can be communicated as impact.

There are instances where a change has 0 impact and it's still nice to have, for example, fixing a typo in the internal docs. But these changes are usually very easy to do (take less than 5 minutes), and it won't affect your other tasks. On the other hand, spending several days fixing typos everywhere may seem like a great idea, but if nobody cares about them and it does not move any needles, then you are just wasting the company's time and money. The effort you put in these no-impact changes should be a defining factor for prioritization.


> everyone ambitious has a list. And each list is nearly impossible for middle managers to evaluate.

have you worked at one of the megacorps you're talking about?

everyone has a list because their manager gets them to write one, and it's very possible for managers to evaluate them because that is their job; they are largely reviewing their direct reports while getting bollocked by their peer managers.


Tech firms and their systems are enormously complex and interrelated. Most business impact doesn’t accrue neatly to one person. To the extent that it does, you have just bizarrely chosen to aggregate a large number of independent startups under one roof vs. build an actual organization with specialization or economies of scale. At best equal to the sum of its parts, when it should be greater than.


>Mediocre managers rely on lists

They’re just a data structure… and a useful one at that… competent managers work in diverse ways. Schedules are made up of lists of information, do 10x managers not use schedules?


This doesn't make sense.

1. The top 3% of seed-stage investors are killing it.

2. This seems to be engineered for the bottom 90% of investors, to squeeze out some yield. But venture capital is ruled by the power law.

3. Forcing seed-stage tech founders to think about this complexity makes no sense.


I can’t speak to the rest of the things some of you are talking about, but I can speak to this.

It really sucks realizing you’re at a 90% company instead of a 10% company. Statistically speaking you’ve not only worked for one of those, but you’ve worked for two or more in a row. Pain is information and sooner or later everyone tries to act on it. There may be nothing that can be done. Or maybe there’s another undiscovered strategy out there to make a YC for the 90%. Or even just the 25% would be huge news. People are going to search high and low for a way out. And some will continue even if someone proves mathematically that it’s impossible.

(You haven’t lived if someone hasn’t asked you to solve an uncomputable problem, or offered to let you solve one NP-complete problem in lieu of an easier NP-complete problem.)


What is there to fix? Gamblers gonna gamble, humans gonna human, markets measure sentiment. Both sides of the market [founders and investors] violate the 7 deadly sins all day long – greed, pride, lust, envy, sloth, wrath, gluttony – all of it.


If I knew that, I’d be too busy to converse on hacker news.

I think the world would be better off if we had an 80/20 rule like the rest of the world seems to. But slow companies also deserve a bit more respect for finding the 20% in the 80%. I think that’s half undiscovered territory and half a shift in perception.


Slow companies can find that 20%, it's just that they can't raise $2m at a $12m valuation from seed-stage venture capital. The risk/return doesn't make sense.


I really appreciate that this was released to work cross-platform.


Thanks Zain :) I tried hard to make it easy to install via homebrew, nix, go, or just downloading an executable. If anyone reading this runs into a problem getting it set up, please let me know here or by filing a Github issue and I'll take a look.


I've worked at 3 startups, founded 2, and worked at 1 bigco.

2/3 startups had excellent returns in the form of equity. Life changing over time. Startups: 66% hit rate.

Bigco was mentally draining and sucked. Bigco: 0% hit rate.

Starting the first company was a failure, but the second one seems to be doing pretty well. 50% hit rate.


Love the age + free filters. First I'd heard of Swift Playgrounds! I wonder how they decided which ones to include. Also, first time I've seen learning resources broken down by apps, games, videos, etc. To be honest, it makes a lot of sense.


As PG said, take on as much technical debt as possible: it’s literally leverage.

Now sometimes “clean” code is the worst of both worlds. It wastes time and isn’t really clean.


Clarifications (this comment perpetuates some common misconceptions):

- A single bitcoin "transaction" can actually have thousands of inputs and thousands of outputs. So energy "per transaction" or "transactions per second" is not analogous to a typical monetary transaction.

- Bitcoin does not compete with literal credit card transactions (although some use it like that today). I'd compare Bitcoin on-chain transactions with how nation-states settle their central-bank ledgers with gold. Gold is the best comparison to Bitcoin because trading in hard gold is "final". Credit card transactions happen on a higher level in the financial stack. As does cash. As do bank transfers. All of these bubble down into interbank transfers that eventually settle on the base layer of central banks. So compared to shipping and securing gold, Bitcoin is quite cheap!

* Pasted and modified from an earlier comment I made on HN.
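To make the batching point above concrete, here's a toy calculation. The per-transaction energy figure and the batch size are made-up illustrative numbers, not measurements:

```python
# Illustrative only: the energy figure and batch size are assumptions,
# not measured values.
ENERGY_PER_TX_KWH = 700.0  # hypothetical network energy per on-chain tx

def energy_per_payment(outputs: int) -> float:
    """One on-chain transaction can pay thousands of recipients
    (outputs), so energy 'per payment' shrinks with batching."""
    return ENERGY_PER_TX_KWH / outputs

single = energy_per_payment(1)      # one payment per transaction
batched = energy_per_payment(1000)  # e.g. an exchange batching payouts
assert batched == single / 1000
```

This is why dividing total network energy by on-chain transaction count overstates the cost of an individual payment whenever payments are batched.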


> Bitcoin does not compete with literal credit card transactions

That's, like, just your opinion. For a lot of people it competes just fine.

> Credit card transactions happen on a higher level in the financial stack. As does cash

How so? As far as settlement is concerned, a cash transaction is pretty much exactly like a bitcoin transaction (and quite unlike a credit card transaction).


>That's, like, just your opinion. For a lot of people it competes just fine.

Until it doesn't. A payment network is graded on how it handles disputes, not regular transactions. Bitcoin can't do refunds or chargebacks, making it ripe for fraud.

Sure, you can implement escrow, but then it's no longer competitive like GP said.


Still limited to 1,000,000 bytes per 10 minutes, plus some extra for segwit, which was an unnecessary hack job that actually makes blocks bigger without much added throughput.


That isn't true.

In Bitcoin the block size limit was eliminated and replaced with a block weight limit, which better reflects the long-term operating costs for a node. The raw 'size' of transactions is inherently becoming less meaningful in the long term with things like transaction compression and compact encodings.

The weight limit doesn't map perfectly to any size limit because its limiting different things, this evening most blocks have been about 1.3 MB.


Probably technically true, but it is just part of the dishonest language shell game to fool people into thinking BTC can really scale to become real peer-to-peer electronic cash for the world's people. 1.3 MB is still tiny! Pretending a small difference matters is disingenuous.


Blocksize is a rate.

If you were traveling at 120 MPH and then accelerated to 156 MPH you would not say that this was a small difference without consequences.

It mattered significantly, in several respects. E.g. https://bitinfocharts.com/comparison/bitcoin-transactionfees...


Blocksize is clearly not literally a rate; that's a ridiculous statement. When you artificially cap it, like putting a limiter on your car in your analogy, it can be rate-limiting, i.e. limiting the transaction rate - an actual rate. That chart you posted is meaningless in this context, but clearly just greg being greg, trying to manipulate; are you seriously trying to suggest that the tiny increase from segwit shenanigans is responsible for that (cropped) decrease in tx fees? That's demonic.


It is literally a rate. It is the rate of bytes added per block (which by the system's design is once per ~10 minutes).

Increasing supply above demand radically drops fee rates. That is the logical, predicted, and observed behavior-- both in Bitcoin and in other similar systems.
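The "blocksize is a rate" framing can be made concrete: the cap divided by the ~10 minute block interval gives a data rate, and dividing by an average transaction size gives a throughput ceiling. A rough sketch with assumed numbers (the 250-byte average tx size is illustrative):

```python
def bytes_per_second(block_bytes: int, block_interval_s: int = 600) -> float:
    """A block size cap divided by the ~10 minute target interval
    is effectively a cap on data rate."""
    return block_bytes / block_interval_s

def tx_per_second(block_bytes: int, avg_tx_bytes: int = 250) -> float:
    """Divide the data rate by an average transaction size to get
    a rough throughput ceiling."""
    return bytes_per_second(block_bytes) / avg_tx_bytes

legacy = tx_per_second(1_000_000)   # ~6.7 tx/s at a 1 MB cap
segwit = tx_per_second(1_300_000)   # ~8.7 tx/s at ~1.3 MB typical blocks
assert abs(segwit / legacy - 1.3) < 1e-9
```

Whether a ~30% bump in that ceiling counts as "small" is exactly what the two sides here disagree about.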


Look, I'm not one to harp on semantics, but this is bullshit; you can't just make something a rate by using the word 'per'. If I run a doughnut shop and I sell 12-packs of doughnuts, my doughnut-box-size is not a rate; the number of doughnuts I sell in an hour, or a day, etc., is a rate. If my store has a policy of only fulfilling orders once every 10 minutes on average, and only up to one box, or 12 doughnuts per 10 minutes, the doughnut-box-size is still not a fucking rate, even though the shop could say they only sell 12 doughnuts per 10 minutes.

Edit: Let me rephrase this somewhat. It's clear greg is trying to use semantic chicanery and multiple definitions and senses of the word 'rate' to obfuscate any actual points. Rate is typically used and understood (in STEM, anyhow) to be a measure of 'flow'. Thus his speed example, distance per time, is a rate, as is tx per second. It doesn't have to use time as the measure, but what greg is trying to do here is say that, using the most generic definition of rate, you can compare doughnuts and boxes and say that doughnuts-per-box (blocksize) is a rate, even though the example he's really using is "(doughnuts-per-box) per (10-minute time interval)", which is a rate, and pretend they're the same thing. Of course if you increase the size of the container, the flow, or actual rate of tx per time interval, will increase; but saying that the size of the container itself is a rate is contextually insane.


Of course the fees would drop after raising the blocksize.

The current fees are well above the marginal transaction costs of processing and storing those transactions. (I estimated it was 3 cents/kB, assuming GB-scale blocks on ~1000 4U (36-bay) servers with 10 Gbps networking, distributed world-wide.) Other analyses I have seen erroneously assume the POW is a marginal cost, which is only true with a tiny, limited block size.

During the September 1, 2018 "stress test" on the BCH network, the average transaction cost actually went down. All while the network processed 2 million transactions in a day.


Pretty much this. I don't think greg has read any economic theory, let alone introductory microeconomics, where he would have seen the idea of 'marginal analysis'. I mean, his reply suggests that demand is entirely static, or inelastic, which would be interesting to study but certainly shouldn't be assumed, and is likely completely false.


> Bitcoin does not compete with literal credit card transactions

Why not?


From a slightly more nerdy perspective:

Because these credit card companies have thousands of _their_ machines, in _their_ locations, running _their_ software, to meet _their_ standards.

Meanwhile, Bitcoin is run god knows where, for god knows who (as rightfully intended of course), on god knows what software.

Sadly speed is just naturally part of the tradeoff in this scenario.


Yes, this is true, because Bitcoin Core moves at the speed of the weakest link.

It's not true for Bitcoin Cash, which plays by "if your node is not making you money, why are you running it?"

We upgrade every 6 months, and useless nodes just get stuck on the old chain forever, and that's it.

Last upgrade there was one miner who upgraded one block too late and lost about 1000 USD. That miner will be the first to run the new software in May.

We had a hacker who got a smart card to make Bitcoin Cash work like a credit card without needing to be online.

Bitcoin Cash txs are near-instant and take on average about 2 seconds to spread to about 1999 out of 2000 mempools.

They settle on-chain in about 10 minutes on average.

Credit card txs also take a couple of seconds, but much longer to settle.

Right now BCH cannot yet scale like Visa, but we already have the capacity to compete with PayPal.

Satoshi's design works at scale, but only if you don't delete section 8 from the whitepaper, which is "Simplified Payment Verification".

Core tries to make you believe the whitepaper was written without sections 7 and 8.

Section 7 is how to make the blockchain smaller by pruning it using merkle trees.

Nobody does that yet because storing 200 GB for 10 years is super cheap.

But 7 and 8 are ESSENTIAL to the design's ability to scale.

Core completely ignores them, or says: well, SPV is not 100% secure, therefore it's insecure and should not be used.

Gmax does this with everything; he flips it to extremes.

Meanwhile, right now on LN there are a couple of hundred thousand dollars that are very easy to steal from non-technical people.

1) You find people who want to open a channel with you. These people go online to post their IP addresses and open ports on /r/bitcoin. These posts are encouraged on /r/bitcoin.

2) You open a channel with them for like 100 USD in BTC.

3) You push the balance to their side of the channel by using a swap site that accepts both LN and other coins.

4) You sell this 100 USD for another coin.

5) You publish an old state.

6) You do this to nodes you monitor using nmap to see if they go offline on a regular basis for longer than nlocktime.

7) You can't lose money on this; you can only win against people who should not be running LN but are.

There is like 6 million USD locked up in LN and about 10% is up for grabs.

8) The victims have nowhere to go, because if they post about it on their channels they get called stupid and banned, and their posts deleted.

9) People are already doing this but the victims are still not speaking out. They just believe it was their own fault and move on. Meanwhile, the watchtower software is not there yet, and if a node does not go offline for nlocktime, you can easily DDoS that node for 144 blocks and you've doubled your money.
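The core condition in the attack described above is just a race: a published old state goes unpunished only if the victim (and any watchtower acting for them) stays offline past the penalty window. A toy model, using "penalty window" loosely for the nlocktime / to_self_delay value the comment refers to:

```python
def stale_state_confirms(victim_offline_blocks: int,
                         penalty_window_blocks: int) -> bool:
    """A cheater's published old channel state survives only if the
    victim cannot broadcast a penalty transaction before the window
    (measured in blocks) elapses."""
    return victim_offline_blocks > penalty_window_blocks

# A node that is regularly offline for ~200 blocks, against a 144-block window:
assert stale_state_confirms(200, 144)
# A mostly-online node is safe: the old state gets penalized in time.
assert not stale_state_confirms(10, 144)
```

This is also why the comment singles out nodes with predictable downtime: the attacker only needs the victim's offline stretches to exceed the window.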


Compared to credit cards:

- Bitcoin doesn't have chargebacks

- Bitcoin's base protocol transaction throughput is low

- There is a fixed cost per transaction (credit cards have low marginal costs for the credit card processor, and variable, percentage-based fees)
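The fixed-vs-percentage fee difference in the list above implies a crossover amount, beyond which the flat fee is cheaper. A quick sketch; the $2 on-chain fee and the 2.9% + $0.30 card fee are assumed for illustration, not real fee schedules:

```python
def crossover_amount(onchain_fee: float, card_pct: float,
                     card_fixed: float = 0.0) -> float:
    """Payment size above which a flat per-transaction fee beats a
    percentage-based card fee: solve onchain_fee = card_pct*x + card_fixed."""
    return (onchain_fee - card_fixed) / card_pct

# Assumed numbers, for illustration only:
x = crossover_amount(onchain_fee=2.00, card_pct=0.029, card_fixed=0.30)
assert 58 < x < 59  # roughly $58.6: above this, the flat fee is cheaper
```

Below the crossover the flat fee dominates the payment, which is one reason fixed-fee rails suit large settlements better than small retail purchases.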


There is an uncorrelated cost for every transaction; it's not exactly fixed.


Credit card transactions are not immutable. This is a feature

You could build that feature on top of bitcoin, but it isn't a feature that should be built into bitcoin transactions. See the lightning network


So these cookies may die, but it's a perpetual arms race. Browser fingerprinting will replace (or has replaced) cookies for tracking purposes.


Browser fingerprinting is a hack, and exploits clear loopholes in browser privacy models.

I wouldn't rely on it because it's committing to an ongoing arms race against the browsers. One that I expect them to win.


> I wouldn't rely on it because it's committing to an ongoing arms race against the browsers. One that I expect them to win.

Don't be so sure about this. The world's most popular browser is developed by the world's largest advertising company. I'm not saying Google is intentionally sabotaging Chrome, but I doubt they're putting significant resources into anti-ad technologies.


Well, in the end it's their competitors that are hurt most when they close loopholes without warning. All Chrome needs to do is hamstring ad blockers (which they just did) and add a fingerprint that only Google can use (like tying your Google account to the browser for no reason...).


> Browser fingerprinting is a hack, and exploits clear loopholes in browser privacy models.

> I wouldn't rely on it because it's committing to an ongoing arms race against the browsers.

It doesn't seem to me that browsers are trying to win at all. For example, one of the greatest discriminators - font list - has been known about since people were talking about browser fingerprinting.

The fix would be pretty easy too: in incognito mode (or when toggled by the user), only support 2 fonts: 1 serif and 1 sans-serif that ship with the browser on all platforms.

I don't think any of the browsers want to do that.

There are a number of other longstanding fingerprinting issues that are similarly easy to fix.
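The value of fixing a discriminator like the font list can be quantified: an attribute shared by a fraction p of users contributes -log2(p) bits of identifying information. A small sketch (the 1-in-10,000 prevalence is an assumed example):

```python
import math

def surprisal_bits(prevalence: float) -> float:
    """Identifying information, in bits, carried by an attribute shared
    by a fraction `prevalence` of users: -log2(p)."""
    return -math.log2(prevalence)

# Illustrative: a font list shared by 1 in 10,000 visitors.
assert 13 < surprisal_bits(1 / 10_000) < 14  # ~13.3 bits

# If every browser exposed the same 2 bundled fonts, p = 1 and the
# attribute carries no identifying information at all:
assert surprisal_bits(1.0) == 0.0
```

A full fingerprint sums bits across attributes, which is why even a few leaky discriminators are enough to single out most users.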


Last I checked, Safari in fact restricts the fonts web pages can see/use to the ones that ship by default with macOS. So you can't fingerprint a Safari user via fonts any further than "Safari user".

So yes, browsers, at least some of them, are in fact trying to win here.


You'd need a standardized font rendering engine to defeat fingerprinting via canvas.


"Same canvas image looks the same on every browser" seems like a desirable state of affairs to me?


I think the problem is that canvas can be GPU-accelerated, and GPUs don't have an exact standard for how each pixel will look.


> You'd need a standardized font rendering engine to defeat fingerprinting via canvas.

That's fair.

But that really only gives the attacker the OS (and perhaps the GPU vendor?). Not ideal for sure, but not that many bits of info, especially if you are in the majority (windows / intel)


> One that I expect them to win.

Sure, the basic things like "which fonts do you have installed" are easy to make consistent, but there are thousands of other ways to fingerprint a browser, many of which would have serious performance impacts if fixed. For example, MacBook Airs can only run at full CPU speed for about a second before slowing down. Just run a 2-second JavaScript busy loop and watch for the slowdown. Are you going to slow all users down all the time just so these MacBook users can't be identified?


Doesn't Google have to leave a Chrome loophole for themselves that is less conspicuous than a specific exception for DoubleClick/Google?


The article (and Google Chrome's change) is mostly about auth cookies, not tracking ones.


What is the technical difference between an “auth” cookie and a “tracking” cookie?


None; it's a use-case difference, not a technical one. But SameSite is designed to tackle CSRF (a problem with using cookies for auth). It won't prevent user tracking.
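For reference, SameSite is just an attribute on the Set-Cookie header. A minimal sketch of what it looks like on the wire, using Python's stdlib (the cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"
c["session"]["samesite"] = "Lax"   # withheld on cross-site requests
c["session"]["httponly"] = True    # invisible to page JavaScript
c["session"]["secure"] = True      # sent over HTTPS only

header = c.output(header="Set-Cookie:")
assert "SameSite=Lax" in header
```

With `SameSite=Lax`, the browser withholds the cookie on cross-site subresource requests and POSTs, which is what blunts CSRF; it does nothing about a tracker's own first-party cookies.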


Sure it can. Samesite cookies will prevent e.g. Google Analytics from identifying me between domains, since any samesite cookies they set for the domain from which they're serving their script/pixel won't be sent. (Presumably tracking prevention will eventually start to block cookies with samesite disabled.)


Browsers have offered a "block third party cookies" setting for decades.

I'm honestly surprised none of the major browsers block third-party cookies by default; it's much simpler XSRF protection than this, as it doesn't rely on site developers updating and setting the new flag correctly.

Of course, two sites that seriously want to collaborate on user tracking (or login) can always forward the user’s whole browser window there and back, with URL parameters to synchronise first party cookies.


> as it doesn't rely on site developers updating and setting the new flag right.

Chrome is enabling this flag by default. Websites can opt out, but if they do nothing they are opted in.

Blocking third-party cookies doesn't really stop CSRF attacks. At most it makes the attack a bit more noticeable, as it prevents some of the quieter methods of pulling off the attack, since, as far as I understand, if you submit a cross-domain POST form, the cookie that's sent is still a first-party cookie.


> (Presumably tracking prevention will eventually start to block cookies with samesite disabled).

This means the privacy advantage doesn't really come now, it instead comes at some hypothetical point in the future.


It makes it harder though.


Does it?

Websites can just opt out if they don't like SameSite cookies. Even if they couldn't, it's trivial for the website operator to work around if they want (and website operators are almost always in on user tracking).


For authentication, there are also HTTP basic and digest authentication. However, I do not know of any web browser that provides functions for the user to manage this authentication. (It would also make it easier for the user to configure cross-site authentication.)


Not sure how this is relevant, but IE had document.execCommand('ClearAuthenticationCache'); for HTTP or TLS auth. Don't think other browsers have anything.


Yes, I have heard of that, although that is done by the document script; what I was asking for is a command for the user to enter instead.


HTTP basic/digest auth doesn't last beyond the session, so just close the browser window?

Probably putting a different username in the URL would work too (in non-IE browsers).


Interesting because to me, “Grey Thinking” is the norm in governments. That’s how they get away with being so wasteful.


So wasteful compared to what?

