
The last project I was working on at a major telco before jumping ship was a migration off of VMware onto KVM systems managed by CloudStack; they already have a major OpenStack environment, but that's only for SDN stuff now.

We're talking over 10,000 ESXi hosts in the current footprint and at least 50% of them will be off VMware by the end of 2026.

If they were smart, the people they would be focusing on squeezing are the mid-sized shops that don't have the in-house skills or top-level cost focus to force a migration. The small shops can't afford to stay on VMware, and the very large shops will have the in-house skills or contract the right people to get moved; it's the mid-sized shops that are most stuck.


This was not what I expected to read, and therefore much sadder to me. I had expected to see an article about the emotional strain of not knowing if you're in the control arm or the treatment arm for a late-stage trial. You know that you're in an experiment, and you might get a new drug -- or existing standard of care -- depending on a roll of the dice.

This was very different. The author is describing a mad scramble to get enrolled in a phase 1 clinical trial, and discussing the scramble as a life-or-death critical choice. Even ignoring the question of whether or not you're in the control arm, these kinds of trials have no guarantee of much of anything. It's even possible the drug itself might hurt you, because that's exactly what they're looking for in phase 1. Per the clinical trial registration for PDL1V:

> Part C will use the dose found in Parts A and B to find out how safe SGN-PDL1V is and if it works to treat solid tumor cancers.

(emphasis mine)

I don't know if the author knows this, but I can't imagine the stress of not being well informed about the purpose of an experiment like this. And it's so maddening that doctors are telling him things like "SGNTV had successfully shrunk a lot of tumors" if they didn't follow it with "...but we don't know what it will do in humans, because it's a phase 1 trial". It's not at all clear that they did, because of things like this:

> If I was given SGNTV, I’d have three systemic lines of therapy and be ineligible for PDL1V. And the same in reverse. Sophie’s Choice!

and later:

> A downside of all these phase 1b studies is that there’s little published data, which makes comparisons difficult. Oncologists almost never give numbers: “We dosed 10 patients, and the disease control rate in phase 1a was 50%.” Instead, they’ll be vague, but they’ll also indicate why they think their top trials are their top trials.

If true, this is so, so, so irresponsible. Tell people that you don't know the answer! The control could easily be better than the drug! Don't give them false hope!

So agonizing.


Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine-learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left complexity exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that has been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it or not.

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch go check it out.


Life and its rewards aren't perfect. Work with honest, intelligent people; genuinely do your best; your days will be much better and the odds will be with you.

Check out The Science of Programming Matrix Computations [0] by Robert A. van de Geijn and Enrique S. Quintana-Ortí. Chapter 5 walks through how to write an optimized GEMM. It involves clever use of block multiplication, choosing block sizes for optimal cache behavior on specific chips; modern compilers still aren't able to do such things. I've spent a little time debugging things in scipy.linalg by swapping out OpenBLAS for reference BLAS, and the slowdown from using reference BLAS is typically at least an order of magnitude.

[0] https://www.cs.utexas.edu/users/rvdg/tmp/TSoPMC.pdf
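
To make the blocking idea concrete, here's a minimal sketch of the loop-tiling trick in Go (an illustration, not the book's code; the book works in C and adds register blocking, packing and vectorization on top). The block size of 64 is an arbitrary placeholder; a tuned GEMM picks it per chip from cache and register sizes, which is exactly the kind of decision a compiler won't make for the naive triple loop:

  package main

  import "fmt"

  // BlockedMatMul computes C += A*B for n x n row-major matrices,
  // tiling all three loops so each block of A, B and C stays cache-resident.
  func BlockedMatMul(c, a, b []float64, n int) {
      const block = 64 // placeholder; real BLAS kernels tune this per CPU
      for ii := 0; ii < n; ii += block {
          for kk := 0; kk < n; kk += block {
              for jj := 0; jj < n; jj += block {
                  // multiply one block of A by one block of B,
                  // accumulating into the matching block of C
                  for i := ii; i < minInt(ii+block, n); i++ {
                      for k := kk; k < minInt(kk+block, n); k++ {
                          aik := a[i*n+k]
                          for j := jj; j < minInt(jj+block, n); j++ {
                              c[i*n+j] += aik * b[k*n+j]
                          }
                      }
                  }
              }
          }
      }
  }

  func minInt(x, y int) int {
      if x < y {
          return x
      }
      return y
  }

  func main() {
      n := 3
      a := []float64{1, 2, 3, 4, 5, 6, 7, 8, 9}
      b := []float64{9, 8, 7, 6, 5, 4, 3, 2, 1}
      c := make([]float64, n*n)
      BlockedMatMul(c, a, b, n)
      fmt.Println(c) // [30 24 18 84 69 54 138 114 90]
  }

The point isn't that this sketch is fast; it's that block size, loop order and packing are per-microarchitecture choices, which is why hand-tuned kernels like OpenBLAS beat both reference BLAS and compiler auto-vectorization by such wide margins.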


Let me see if I'm the first one to link to that classic story in the same series, "I cannot send email further than 500 miles"

http://www.ibiblio.org/harris/500milemail.html

Or the Magic/More Magic switch

http://www.catb.org/jargon/html/magic-story.html

It's fun when physical reality meets the abstract models that we have built in our heads of these machines.


Worth pointing out that all third party deps at Google have to be vendored. By policy. And generally only one version of said dep is permitted for the entire massive monorepo. (This may have changed since I was there.) And each third_party dep has a set of designated responsible individuals / OWNERS.

This has the effect of keeping things relatively sane.

But this enforced discipline can work well at an org like Google that has plenty of time and $$ and isn't dedicated to a "move fast and break things" philosophy. Not sure how well it would work for a startup.


>I was once in a meeting, with four people, plus the CTO. The CTO forbade us to take notes: it seems the fad of the week was that note taking is what makes meetings a waste of time. The meeting took two hours. Afterwards, the four of us had about eight different opinions of what had been decided. No follow-up actions were ever taken.

This is still the case in many companies I meet with (I'm the founder of a small troubleshooting company; we seek out problematic large, cash-rich companies like these; they all have failing processes and IT all over the place); sometimes note taking is frowned upon, but often simply no one takes notes. Which shows these meetings (almost all meetings I go into) are a complete charade of managers wanting to show they have 'something important to do, really!' (somehow there are often 10+ people there). I am in meetings (including Zoom etc. and IRL) where I'm sure no one really heard or understood what anyone else said (different accents of English from different countries and backgrounds, and of course no one can say anything because we all have to respect people etc.); no notes, no recordings (and of course the captions of the video chat or my phone didn't understand anything that was said either), while someone was explaining quite difficult, in-depth stuff for 2-3 hours. Afterwards, the stuff is rehashed in a 15-minute text chat with the person who explained the difficult stuff; why didn't they write it down in the first place and forgo the meeting? Because half (that's generous, it's more like 80%) of the room should have no (high-paying) job; they are just there for being there.

Ah, the enterprise world, such joys; I enjoy it as I treat it like a cosmic joke; it's a comedy show, not unlike The Office, including the main characters' high pay.


> I haven't heard of people having problems [with S3's Durability] but equally: I've never seen these claims tested. I am at least a bit curious about these claims.

Believe the hype. S3's durability is industry leading and traditional file systems don't compare. It's not just the software - it's the physical infrastructure and safety culture.

AWS' availability zone isolation is better than the other cloud providers. When I worked at S3, customers would beat us up over pricing compared to GCP blob storage, but the comparison was unfair because Google would store your data in the same building (or maybe different rooms of the same building) - not with the separation AWS did.

The entire organization was unbelievably paranoid about data integrity (checksum all the things) and bigger events like natural disasters. S3 even operates at a scale where we could detect "bitrot" - random bit flips caused by gamma rays hitting a hard drive platter (roughly one per second across trillions of objects iirc). We even measured failure rates by hard drive vendor/vintage to minimize the chance of data loss if a batch of disks went bad.

I wouldn't store critical data anywhere else.

Source: I wrote the S3 placement system.


When I was in Search 15 or so years ago, there was actually a very direct cost: revenue.

The AdMixer was an "optional" response for the search page. If the ads didn't return before the search results did, the search would just not show ads, and Google wouldn't get any revenue for it. Showed the premium that Google of the day put on latency and user experience. I think we lost a few million per year to timeouts, but it was worth it for generating user loyalty, and it put a very big incentive on the ads team to keep the serving stack fast.
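
The shape of that pattern is easy to sketch in Go (hypothetical names and a made-up 30 ms grace period, purely to illustrate the incentive, not how the AdMixer actually worked): fire the ad fetch concurrently, and if it hasn't answered shortly after the organic results are ready, ship the page without ads - and without the revenue.

  package main

  import (
      "context"
      "fmt"
      "time"
  )

  // All names below are hypothetical stand-ins, not Google internals.
  type Ad struct{ Text string }

  func fetchAds(ctx context.Context, q string) []Ad {
      time.Sleep(80 * time.Millisecond) // pretend the ad mixer is slow today
      return []Ad{{Text: "sponsored: " + q}}
  }

  func runSearch(ctx context.Context, q string) []string {
      time.Sleep(20 * time.Millisecond)
      return []string{"result for " + q}
  }

  // servePage never blocks the organic results on ads: the ad fetch gets a
  // short grace period, and if it misses the deadline the page ships with
  // no ads (and no revenue).
  func servePage(ctx context.Context, q string) ([]string, []Ad) {
      adCh := make(chan []Ad, 1)
      go func() { adCh <- fetchAds(ctx, q) }()

      results := runSearch(ctx, q)

      select {
      case ads := <-adCh:
          return results, ads
      case <-time.After(30 * time.Millisecond): // illustrative deadline
          return results, nil
      }
  }

  func main() {
      results, ads := servePage(context.Background(), "flights")
      fmt.Println(results, ads) // ads is empty when the fetch times out
  }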

No idea if it's still architected like that, I kinda doubt it given recent search experiences, but I thought it was brilliant just for the sake of aligning incentives between different parts of the organization.


It's not "an accident", and Go didn't "somehow" fail to replace C++ in its systems programming ___domain. The reason why Go failed to replace C and C++ is not a mystery to anyone: mandatory GC and a rather heavyweight runtime.

Where the performance overhead of having a GC is less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go was quite successful: command-line tools and network programs.

Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean in the way most of us interpreted them.

1. The Go Team members were using "systems programming language" in a very wide sense that includes everything that is not scripting or the web. I hate this definition with a passion, since it relies on nothing but pure elitism ("systems languages are languages that REAL programmers use, unlike those scripting languages"). Ironically, this usage seems to originate with John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).

Ousterhout's definition of a "system programming language" is: designed to write applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, and often providing higher-level facilities such as objects and threads.

Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing is optional there, but so it is in Java (Object) and C (void*, casting). When we talk about the archetypical "strongly typed" language today, we refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used "for writing applications from scratch" back in 2009, and far from being ill-adapted for writing data structures and algorithms from scratch, Python became the most common language that universities use for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and 2 out of 3 (Python and Ruby) have threads (although the GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.

The widely accepted definition of a "systems language" today is "a language that can be used to write systems software". Systems software is either an operating system or OS-adjacent software such as device drivers, debuggers, hypervisors, or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features, which are implemented in C.

During the first years of Go, the Go language team was confronted on golang-nuts by people who wanted to use Go for writing systems software, and they usually evaded directly answering these questions. When pressed, they would admit that Go was not ready for writing OS kernels, at least not at the time[4][5][6], but that the GC could be disabled if you wanted to[7] (of course, there isn't any way to free memory then, so it's kinda moot). Eventually, the team came to the conclusion that disabling the GC is not meant for production use[8][9], but that was not apparent in the early days.

Eventually the references to "systems language" disappeared from Go's official homepage, and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].

In hindsight, I think the main "systems programming task" that Rob Pike and other members at the Go team envisioned was the main task that Google needed: writing highly concurrent server code.

2. The Go Team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management and different programmers using different subsets. I couldn't find any claim from the Go team that Go was meant as a general replacement for C and C++, but the media and the wider programming community generally took Go as a replacement language for C and C++.

When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.

Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (if you put aside Lua, which never got popular for server programming for other reasons). Some of these languages had threads, but they were unworkable due to the GIL. The closest thing was frameworks like Twisted, but they were fully asynchronous and quite hard to use.

Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment were also an afterthought. Java had Maven and Ivy and .NET had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with classpath, WAR files, EAR files) and making sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.

The mass migration of dynamic language programmers to Go was surprising for the Go team, but in hindsight it's pretty obvious. They were concerned about performance, but didn't feel like they had a choice: Java was just too complex and enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), no need for fine tuning, and it had a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation); other important tools ("govet", "goprof" and "goinstall", which was later broken into "go get" and "go install") were added within one year of its initial release.

The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.

The other "surprising use" of Go was for writing command-line utilities. I'm not sure if the original Go team were thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies and making sure the runtime and dependencies match what you needed was practically impossible without essentially packaging your app for every single OS and distro out there or relying on the user to be a to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VM, so they were horrible candidates even if you'd solve the dependency issues. So the best solution was usually C or C++ with either the "./configure && ./make install" pattern or making a static binary - both solutions were quite horrible. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, so all you add to do is package a static binary.

[1]: https://opensource.googleblog.com/2009/11/hey-ho-lets-go.htm...

[2]: https://web.archive.org/web/20091114043422/http://www.golang...

[3]: https://users.ece.utexas.edu/~adnan/top/ousterhout-scripting...

[4]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...

[5]: https://groups.google.com/g/golang-nuts/c/BO1vBge4L-o/m/lU1_...

[6]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...

[7]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/M9r1...

[8]: https://groups.google.com/g/golang-nuts/c/qKB9h_pS1p8/m/1NlO...

[9]: https://github.com/golang/go/issues/13761#issuecomment-16772...

[10]: https://go.dev/talks/2011/Real_World_Go.pdf (Slide #25)


I worked on Fuchsia for many years, and maintained the Go fork for a good while. Fuchsia shipped the gVisor-based (Go) netstack to Google Home devices.

The Go fork was a pain for a number of reasons, some of them historical, but more deeply the plan for fixing that was complicated by the runtime making fairly core architectural assumptions that the world has fds and epoll-like behavior. Those constraints cause challenges even for current systems, and even for Linux, where you may not want to be constrained by that anymore. Eventually Fuchsia abandoned Go for new software because the folks hired to rewrite the integration ran out of motivation to do so, and the properties of the runtime as written presented atrocious performance on a power/performance curve - not suitable for battery-powered devices. Binary sizes also made integration into storage-constrained systems more painful, and without a large number of components written in the language to bundle together, the build size is too large. Rust and C++ also often produce large binaries, but those can be substantially mitigated with dynamic linking, provided you have a strong package system that avoids the ABI problem, as Fuchsia does.

The cost of crossing the cgo/syscall boundary remains high, and got higher over the time that Fuchsia was in major development due to the increased cost of Spectre and Meltdown mitigations.

The cgo/syscall boundary cost shows up in my current job a lot too, where we do things like talk to SQLite constantly for small objects or shuffle small packets at or below common MTU sizes. Go is slow at these things in the same way that other managed runtimes are - for the same reasons. It's hard to integrate foreign APIs unless the standard library already integrated them into the core APIs - something the team will only do for common use cases (reasonably so, but annoying when you're stuck fighting it constantly). There are quite a few measures like this where Go has a high cost of implementation for lower-level problems - problems that involve high-frequency integration with surrounding systems. Go has a lower cost of ownership when you can pass very large buffers in or out of the program and do lots of work on them, and when your concurrency model fits the channel/goroutine model OK. If you have a problem that involves higher-frequency operations, or more interesting targets, you'll find the lack of broader atomics and the inability to cheaply or precisely schedule work problematic.
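
A toy way to see that boundary cost for yourself (a sketch, not code from any of the systems above; it needs cgo enabled and a C compiler, and the exact numbers swing a lot with CPU and mitigation settings): time a trivial addition made through cgo against the same thing as a plain Go call. The per-call gap is typically an order of magnitude or more, which is exactly what bites when the calls are tiny and frequent, like per-row SQLite calls or per-packet work.

  package main

  /*
  static int c_add(int a, int b) { return a + b; }
  */
  import "C"

  import (
      "fmt"
      "time"
  )

  //go:noinline
  func goAdd(a, b int) int { return a + b }

  func main() {
      const n = 1_000_000
      var sink int

      start := time.Now()
      for i := 0; i < n; i++ {
          sink += int(C.c_add(C.int(i), 1)) // crosses the cgo boundary every call
      }
      perCgo := time.Since(start) / n

      start = time.Now()
      for i := 0; i < n; i++ {
          sink += goAdd(i, 1) // ordinary Go call for comparison
      }
      perGo := time.Since(start) / n

      fmt.Println(sink, "per cgo call:", perCgo, "per Go call:", perGo)
  }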


In the mid-90s I worked for a small ISV that wrote a GIS that ran on Windows 3.1 (a 32-bit operating system that ran 16-bit processes (that could have 32-bit code segments)).

One of our customers required perfect uptime, which was a problem given the GetTickCount wraparound described here.

Our solution was to run under NuMega SoftICE [1] and, once a month or so, dispatch an engineer to the customer to simultaneously patch the kernel value for the tick count and the expected value in our software (and also clean up various handle-based cruft).

This worked for years. Unfortunately, the engineer in question was also an alcoholic, so a particularly spectacular bender spoiled an approximately 3-year uptime.

[1] https://en.wikipedia.org/wiki/SoftICE
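
For anyone who hasn't hit it: GetTickCount reports milliseconds since boot as a 32-bit value, so it wraps every 2^32 ms, roughly every 49.7 days. Elapsed-time math done as unsigned 32-bit subtraction survives a single wraparound; code that compares raw tick values or widens them first does not, which is the class of bug being patched around above. A tiny illustration (a sketch in Go, obviously not the original Windows 3.1 code):

  package main

  import "fmt"

  // elapsedMs gives the correct interval across a single wraparound,
  // because unsigned subtraction is modulo 2^32.
  func elapsedMs(start, now uint32) uint32 {
      return now - start
  }

  func main() {
      const wrap = 1 << 32 // GetTickCount wraps every 2^32 ms
      fmt.Println(float64(wrap) / 1000 / 86400) // ~49.71 days per wrap

      start := uint32(wrap - 1000) // sampled 1s before the counter wraps
      now := uint32(500)           // sampled 0.5s after it wrapped
      fmt.Println(elapsedMs(start, now)) // 1500 ms, as expected
      fmt.Println(now > start)           // false: naive comparisons break here
  }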


Don't do it right, double down on the flaws. You had a 3k SLOC single function (all in main) C program to do something that could be expressed cleanly and clearly in 200 SLOC. Some specific sequence of inputs leads to an error. Instead of tidying it up, removing the repetition that led to the mistake, you copy/paste everything again and add another 100 cases to your various switch/case statements (actually you use if/else because switch/case might make things clearer). The specific problem is solved, but in a year another buggy code path will be discovered and you'll have another chance to play hero. In 5 years it'll be 50k SLOC of C all in main, that could have been under 1k SLOC (still all in main). No one else will be able to fix it but you!

There's a famous greentext on 4chan where a user tells how he got a sysadmin job that was so boring he started crashing stuff left and right, blocking an entire office, sometimes for entire days.

By the end of the day he would plug something back in, come out of the "server room" saying he'd fixed it, and get everybody's praise. It even got him two raises in the span of 18 months.

That's how crazy it is.

I know a variation of this sort of story, where a good sysadmin/DevOps team was halved and then the problems started. The company hadn't had those issues precisely because it had a good surplus of eyes to handle everything.

They only realized the mistake later.


Easy - half a$$ your code and when it breaks - swoop in, "fix things" (actually do it right) and play the role of hero! (I've seen so-called "Rock Stars" at places I worked do this over and over)

At a large insurance company I worked for, management set a target of 80% test coverage across our entire codebase. So people started writing stupid unit tests for getters and setters in Java DTOs to reach the goal. Of course devs also weren't allowed to change the coverage measuring rules in Sonar.

As a young dev, it taught me that focusing only on KPIs can sometimes drive behaviors that don't align with the intended goals. A few well-thought out E2E test scenarios would probably have had a better impact on the software quality.


My favorite anecdote on the topic involved a codebase like this, handled by inexperienced programmers. I came into the team, and realized that a whole lot of careless logic could be massively simplified, so I sent a PR cutting the codebase by 20%, and still passing all the tests and meeting user requirements.

The problem is, the ugly, careless code was extremely well tested: 95% code coverage. My replacement had 100% code coverage... but by being far shorter, my PR couldn't pass the coverage check, as total coverage went down, not up. The remaining code in the repo? A bunch of Swing UI code, the kind that is hard to test, and where the tests don't mean anything. So facing the prospect of spending a week or two writing Swing tests, the dev lead decided that it was best to keep the old code somewhere in the repo, with tests pointing at it, just never called in production.

Thousands of lines of completely dead, but very well covered code were kept in the repo to keep Sonar happy.


This is like a "water is wet" post for me. Maybe it is because I have been doing this so long... maybe for another reason. But people need to hear it.

The important part is buried at the bottom, and I'll re-summarize it in fewer words:

Start with high code quality standards [with proficient, experienced devs] delivering the minimal amount of functionality to ship.

Then rinse and repeat.

With experience comes a developed taste for the general work of software development and a better discernment for where to draw the lines. How to decouple and how to recognize when the decoupling is wrong and refactor. And how to refactor appropriately without making up fear based excuses.

If you are interested in more, check out "A Philosophy of Software Design" by Ousterhout, "Righting Software" by Löwy and "The Mythical Man-Month" by Brooks.


As a YC alum who worked with Garry Tan directly, this is… odd.

Anyone who knows Garry knows he’s a (relatively) gentle human being. I can’t imagine him hurting a fly.

His tweets seem totally out of character compared to the Garry Tan I personally knew. Maybe he has changed?

I’m inclined to give him the benefit of the doubt.


A maybe-relevant blog post is https://devblogs.microsoft.com/python/python-311-faster-cpyt... dated Oct 2022. The team behind this and some other recent improvements to Python is at Microsoft.

I used Go for many years. My issue is that it's _almost_ a great language, but in its current version it's just a collection of footguns that make it difficult to get shit done.

Go doesn't have some of the most basic library functions, so large codebases shared between teams end up with a dozen different implementations of functions like "minimum" or "filter". Good luck debugging a bug in one of the implementations.

The exception-less error handling would be great if they used sum types instead of (val, error) tuples. Return types are required to have a "default" value if you want to return an error, and good luck finding bugs where you use that value and forget the "if err != nil".

Worst of all, they removed most "fun" C things about pointers (like subtracting pointers in the same array) but kept the null pointers themselves. There's no way to ask for a not-null pointer at the type level, so you have to check for nullity everywhere and good luck debugging those runtime panics.
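
A small sketch of both footguns together (hypothetical names, nothing from a real codebase): the zero-value-plus-error convention compiles happily even when the caller runs on with the zero value, and nothing in the type system lets you demand a non-nil pointer, so the nil check falls on every caller forever.

  package main

  import (
      "errors"
      "fmt"
      "strconv"
  )

  type Config struct{ Port int }

  // loadConfig must return *some* Config even on failure; the compiler
  // is satisfied with the zero value plus an error.
  func loadConfig(s string) (Config, error) {
      port, err := strconv.Atoi(s)
      if err != nil {
          return Config{}, errors.New("bad port: " + s)
      }
      return Config{Port: port}, nil
  }

  func main() {
      // Footgun 1: err is technically "used" (printed), so nothing forces
      // the `if err != nil` guard, and the zero-value Port flows onward.
      cfg, err := loadConfig("not-a-number")
      fmt.Println("load error:", err)
      fmt.Println("listening on port", cfg.Port) // prints 0, no compile-time complaint

      // Footgun 2: the type can't say "never nil", so every caller must
      // remember this check or eat a runtime panic on c.Port.
      var c *Config
      if c != nil {
          fmt.Println(c.Port)
      }
  }

With a sum type, the config value simply wouldn't exist on the error path, and a non-nullable pointer type would turn both of these runtime surprises into compile errors.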


Anything about Git reminds me of this:

https://youtu.be/EReooAZoMO0?si=sHqcYsf8v6LyWLAx


Let's face it: IT is the bureaucracy department of modern times, which can keep itself 95% busy with self-inflicted problems and has 5% service orientation. Processes are opaque to outsiders and typically not helpful.

I really had to laugh when I read the following description of IBM BPM, but it sums up a good part of the issue:

  "...while IBM BPM does come with a REST API, this REST API is borderline useless to Technology teams and SMEs

  Some REST calls use javascript encoded as strings Others require html embedded in json embedded in xml
  
  Database tables aren’t queried by name but by GUID.

  There’s no documentation of which GUID relates to which table/process.*"

Quite a lot of things became so outrageously complex that no one outside of IT bothers to handle them, and sometimes not even inside IT. It started with AJAX, where suddenly half of the development effort went into designing frontend code and backend services, which honestly does not even touch the end user's automation problem. And it went further downhill afterwards. UIs nowadays look modern but are generally as user-hostile as the technology stack used to produce them.

In Excel my UI is just "there", I have a nice code generator (a.k.a. the macro recorder), and no IT department questioning my authorization to do something or telling me it doesn't have the time or budget to help me with my business problem.

So VBA is the workaround that lets users route around the IT department. Not perfect, but better than what you would get otherwise.


Because the terminology is tied into the code in ways that would cause incompatibility with past code if changed. For example, Emacs uses "frame" to mean what we now call a "window" and vice versa, and all the functions for interacting with windows and frames reflect this usage. If one just did a blanket search-and-replace swap of window and frame throughout the entire Emacs codebase and documentation, you would probably have a working Emacs, but all external packages and customization code that interact with windows or frames in any way would now be broken. You can't just update the documentation but leave the code as is, because then you have to start telling people ridiculous things like "switch windows with 'other-frame' and switch frames with 'other-window'", which isn't going to be any less confusing than what you started with. Elisp is a dynamic language, so you can't set up some sort of name-translation layer and use it for backward-compatibility with old code (that code could have a reference to an old name inside a quoted form, or even a string).

And the window/frame term swap is only one of many such examples. So basically, you could make your own Emacs that uses modern terminology and is completely incompatible with all existing 3rd-party elisp code, and then you could try to convince people to switch to it. But would having two equivalent but fundamentally incompatible Emacs/elisp variants be beneficial to Emacs as a whole? I doubt it.


It looks like you think writing your program in the language used to express types will magically remove "runtime bugs".

If the language is powerful enough to be able to write ordinary programs, it is powerful enough to produce bugs.

Static typing can be effective at catching certain types of bug, not all. It can improve readability sometimes (as a DSL for static unit tests/executable docs).

In general, dynamic languages are more agile and you can write more tests more easily. Some of those tests you wouldn't need to write in a statically typed language, so typing is still useful, though not as universally effective as one might believe.


The article has some helpful points. But as a programmer-SaaS-founder-who-took-over-ads-operations, I have some tips based on insights we gleaned doing paid ads (and getting them to be profitable for us):

1. Most important tip: is your product ready for ads?

  - Do not do paid ads too early.

  - Do it once you know that your product is compelling to your target audience.

  - Ads are likely an expensive way of putting your product in front of an audience.

    - No matter how good the ad operation, unless your product can convince a user to stay and explore it further, you've just gifted money to Google/X/Meta whoever.

  - If you haven't already, sometimes when you think you want ads, what you more likely and more urgently need is better SEO
2. The quality of your ad is important, but your on-boarding flows are way more important still.

  - Most of the time, when we debugged why an ad wasn't showing conversions, rather than anything inherent to the ad, we found that it was the flows the user encountered _AFTER_ landing on the platform that made the performance suffer.

  - In some cases, it's quite trivial: eg. one of our ads were performing poorly because the conversion criterion was a user login. And the login button ended up _slightly_ below the first 'fold' or view that a user saw. That tiny scroll we took for granted killed performance.
3. As a founder, learn the basics

  - This is not rocket science, no matter how complex an agency/ad expert may make it look.

  - There is some basic jargon that will be thrown around ('Target CPA', 'CPC', 'CTR', 'Impression share'); don't be intimidated

   - Take the time to dig into the details

   - They are not complicated and are worth your time especially as an early stage startup

  - Don't assume that your 'Ad expert' or 'Ad agency' has 'got this'.

    - At least early on, monitor the vital stats closely in weekly reviews

  - Ad agencies especially struggle with understanding the nuances of your business, so make sure to help them in the early days.
4. Targeting Awareness/Consideration/Conversion

  - Here I have to politely disagree with the article

  - Focus on conversion keywords exclusively to begin with!

  - These will give you low volume traffic, but the quality will likely be much higher

  - Conversion keywords are also a great way to lock down the basics of your ad operation before blowing money on broad match 'awareness' keywords

  - Most importantly, unless your competition is playing dirty and advertising on your branded keywords, don't do it.

    - Do NOT advertise on your own branded keywords, at least to begin with.

    - Most of the audience that used your brand keywords to get to your site are essentially just repeat users using your ad as the quickest navigation link. Yikes!
5. Plug the leaks, set tight spend limits

  - You'll find that while you're running ads, you are in a somewhat adversarial dance with the ads platform

  - Some caveats (also mentioned in the article)

    - Ad reps (mostly) give poor advice, sometimes in borderline bad faith. We quickly learnt to disregard most of what they say. (But be polite, they're trying to make a living and they don't work for you.)

    - (Also mentioned in the article) Do not accept any 'auto optimization' options from the ads platform. They mostly don't work.

  - Set tight limits on spends for EVERYTHING in the beginning. I cannot emphasize this enough. Start small and slowly and incrementally crank up numbers, whether it be spend limits per ad group, target CPA values, CPC values - whatever. Patience is a big virtue here

    - If you're running display ads, there are many more leaks to be plugged: disallow apps if you can (article mentions why), and disallow scammy sites that place ads strategically to get stray clicks.

    - For display ads, controlling 'placement' also helps a lot
6. Read up `r/PPC` on Reddit

  - Especially the old, well rated posts here. 

  - They're a gold mine of war stories from other people who got burnt doing PPC, whose mistakes you can avoid.

Why is sukilot's comment grayed out? It looks like an honest attempt to summarize the math paper.

The other interesting thing about the true command is how much more complicated it got than it needed to be.

First, an exercise:

  touch mytrue
  chmod u+x mytrue
  ./mytrue
  echo "error code for mytrue is $?"
  
This is literally how true started life. Yes, it is very zen.

The first offense was legal. All code had to have a copyright disclaimer. Even an empty file? Yes. So now it was a file with a copyright disclaimer and nothing else. And the koan-like question that comes to mind is "Can you copyright nothing?" Well, AT&T sure tried.

Then somebody said our programs should be well defined and not depend on a fluke of Unix, which at this point was probably a good idea. So true finally had code: it was "exit 0".

Then somebody said we should write our system utilities in C instead of shell so they run faster. OpenBSD still has a good example of how this would look:

http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/usr....

At some point GNU bureaucracy got involved and said all programs must support the '--help' flag, so that got added; then they said all programs must support locales, so that got added. Nowadays GNU true is an astonishing 80 lines long.

https://github.com/coreutils/coreutils/blob/master/src/true....

Which is fine, I guess, but that is a lot of code for a program that by definition "does nothing, successfully".

http://trillian.mit.edu/~jc/humor/ATT_Copyright_true.html


Comments like this always remind me of Tom Toro’s cartoon:

> Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

https://www.newyorker.com/cartoon/a16995

Technology and innovation should serve the greater good. Unbridled growth for a few companies at the expense of the people is not a positive goal.

