
> So if you as an app developer include such a 3rd party SDK in your app to make some money — you are part of the problem and I think you should be held responsible for delivering malware to your users, making them botnet members.

I suspect that this goes for many different SDKs. Personally, I am really, really sick of hearing "That's a solved problem!", whenever I mention that I tend to "roll my own," as opposed to including some dependency, recommended by some jargon-addled dependency addict.

Bad actors love the dependency addiction of modern developers, and have learned to set some pretty clever traps.




I’m constantly amazed at how careless developers are with pulling 3rd party libraries into their code. Have you audited this code? Do you know everything it does? Do you know what security vulnerabilities exist in it? On what basis do you trust it to do what it says it is doing and nothing else?

But nobody seems to do this diligence. It’s just “we are in a rush. we need X. dependency does X. let’s use X.” and that’s it!


> Have you audited this code?

Wrong question. “Are you paid to audit this code?” And “if you fail to audit this code, who’se problem is it?”


I think developers are paid to competently deliver software to their employer, and part of that competence is properly vetting the code you are delivering. If I wrote code that ended up having serious bugs like crashing, I’d expect to have at least a minimum consequence, like root causing it and/or writing a postmortem to help avoid it in the future. Same as I’d expect if I pulled in a bad dependency.


Your expectations do not match the employment market as I have ever experienced it.

Have you ever worked anywhere that said "go ahead and slow down on delivering product features that drive business value so you can audit the code of your dependencies, that's fine, we'll wait"?

I haven't.


Yea, and that’s the problem. If such absolute rock bottom minimal expectations (know what the code does) are seen as too slow and onerous, the industry is cooked!


Yeah, about that, businesses are pushing and introducing code written by AI/LLM now, so now you won't even know what your own code does.


Due diligence is a sliding scale. Work at a webdev agency is "get it done as fast as possible for this MVP we need". Work at NASA or a biomedical device company? Every line of code is triple-checked. It's entirely dependent on the cost/benefit analysis.


"who'se" is wild.


If a car manufacturer sources a part from a third party, and that part has a serious safety problem, who will the customer blame? And who will be responsible for the recall and the repairs?


But we aren’t in the car business, we are in the joker business.

When was the last time the producer of an app was held legally accountable for negligence, or had to pay compensation and damages?


Malware, botnets… it is very similar. And people, developers included, are in 80 per cent of cases eager to make money, because… Is greed good? No, it isn’t. It is a plague.


You're a developer who devoted time to develop a piece of software. You discover that you are not generating any income from it: few people can even find it in the sea of similar apps, few of those are willing to pay for it, and those who are willing to pay for it are not willing to pay much. To make matters worse, you're going to lose a cut of what is paid to the middlemen who facilitate the transaction.

Is that greed?

I can find many reasons to be critical of that developer, things like creating a product for a market segment that is saturated, and likely doing so because it is low-hanging fruit (both conceptually and in terms of complexity). I can be critical of their moral judgement in how they decided to generate income from their poor business decision. But I don't think it's right to automatically label them as greedy. They may be greedy, but they may also just be trying to generate income from their work.


> Is that greed?

Umm, yes? You are not owed anything in this life, certainly not income for your choice to spend your time on building a software product no one asked for. Not making money on it is a perfectly fine outcome. If you desperately need guaranteed money, don't build an app expecting it to sell; get a job.


> If you desperately need guaranteed money, don't build an app expecting it to sell; get a job.

Technically true, but a bit of perspective might help. The consumer market is distorted by free (as in beer) apps that do a bunch of shitty things that should in many cases be illegal, or require much more informed consent than they get today, like tracking everything they can. Then you have VC-funded "free" as well, where the end game is to raise prices slowly to boil the frog. Then you have loss leaders from megacorps, and a general anti-competitive business culture.

Plus, this is not just in the Wild West shady places, like the old piratebay ads. The top result for "timer" on the App Store (for me) is indeed a timer app, but with an IAP of $800/y subscription… facilitated by Apple Inc, who gets 15-30% of the bounty.

Look, the point is it’s almost impossible to break into consumer markets because everyone else is a predator. It’s a race to the bottom, ripping off clueless customers. Everyone would benefit from a fairer market. Especially honest developers.


>$800/year IAP

That’s got to be money laundering or something else illicit? No one is actually paying that for a timer app?


No I think it’s designed to catch misclicks and children operating the phone and such, sold as $17/week possibly masquerading as one-time payment. They pay for App Store ads for it too.


I prefer to focus on the technical shortcomings.

We could have people ask for software in a more convenient way.

Not making money could be an indication the software isn't useful, but what if it is? What can the collective do in that zone?

I imagine one could ask and pay for unwritten software then get a refund if it doesn't materialize before your deadline.

Why is discovery (of any creation) willingly handed over to a handful of megacorps? They seem to think I want to watch and read about Trump and Elon every day.

Promoting something because it is good is a great example of a good thing that shouldn't pay.


There was an earlier discussion on HN about whether advertising should be more heavily regulated (or even banned outright). I'm starting to wonder whether most of the problems on the Web are negative side effects of the incentives created by ads (including all botnets, except those that enable ransomware and espionage). Even the current worldwide dopamine addiction is driven by apps and content created for engagement, whose entire purpose is ad revenue.


This is especially true for script kiddies, which is why I am so thankful for https://e18e.dev/

AI is making this worse than ever though, I am constantly having to tell devs that their work is failing to meet requirements, because AI is just as bad as a junior dev when it comes to reaching for a dependency. It’s like we need training wheels for the prompts juniors are allowed to write.


These are kind of separate issues. Apps using Infatica know that they're selling access to their users' bandwidth. It's intentional.


That may be true but I think you're missing the point here.

The "network sharing" behavior in these SDKs is the sole purpose of the SDK. It isn't being included as a surprise along with some other desirable behavior. What needs to stop is developers including these SDKs as a secondary revenue source in free or ad-supported apps.


> I think you're missing the point here

Doubt it. This is just one -of many- carrots that are used to entice developers to include dodgy software into their apps.

The problem is a lot bigger than these libraries. It's an endemic cultural issue. Much more difficult to quantify or fix.


I agree that there are things with too many dependencies and I try to avoid that. I think it is a good idea to minimize how many dependencies are needed (even indirect dependencies; however, in some cases a dependency is not a specific implementation, and in that case indirect dependencies are less of a problem, although having a good implementation with fewer indirect dependencies is still beneficial). I may write my own, in many cases. However, another reason for writing my own is other kinds of problems in the existing programs. Not all problems are malicious; many are just that they do not do what I need, or do much more than what I need, or both. (However, most of my stuff is C rather than JavaScript; the problem seems to be more severe with JavaScript, but I do not use that much.)


"Bad actors love the dependency addiction of modern developers"

Brings a new meaning to dependency injection.


I mean, as far as patterns go, dependency injection is also quite bad.


I have found that the dependency injection pattern makes it far easier to write clean tests for my code.


Elaborate on this please. It seems a great boon in having pushed the OO world towards more functional principles, but I'm willing to hear dissent.


How is dependency injection more functional?

My personal beef is that most of the time it acts like hidden global dependencies, and the configuration of those dependencies, along with their lifetimes, becomes harder to understand by not being traceable in the source code.


Dependency injection is just passing your dependencies in as constructor arguments rather than as hidden dependencies that the class itself creates and manages.

It's equivalent to partial application.

An uninstantiated class that follows the dependency injection pattern is equivalent to a family of functions with N+Mk arguments, where N is the number of constructor parameters and Mk is the number of parameters in method k.

Upon instantiation by passing constructor arguments, you've created a family of functions, each with a distinct set of Mk parameters and N arguments in common.
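
A minimal Java sketch of that equivalence, with made-up names purely for illustration:

    import java.util.function.DoubleUnaryOperator;

    interface TaxPolicy { double on(double amount); }

    // N = 1 constructor argument (tax), Mk = 1 method argument (amount).
    class PriceCalculator {
        private final TaxPolicy tax;
        PriceCalculator(TaxPolicy tax) { this.tax = tax; }
        double total(double amount) { return amount + tax.on(amount); }
    }

    class Demo {
        // The partial-application view: fixing the "constructor" argument
        // yields a function of only the remaining Mk arguments.
        static DoubleUnaryOperator totalWith(TaxPolicy tax) {
            return amount -> amount + tax.on(amount);
        }

        public static void main(String[] args) {
            TaxPolicy vat = amount -> amount * 0.20;
            System.out.println(new PriceCalculator(vat).total(100)); // 120.0
            System.out.println(totalWith(vat).applyAsDouble(100));   // 120.0
        }
    }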


> Dependency injection is just passing your dependencies in as constructor arguments rather than as hidden dependencies that the class itself creates and manages.

That's the best way to think of it fundamentally. But the main implication is that at some point something has to know how to resolve those dependencies - i.e. they can't just be constructed and then injected from magic land. So global cradles/resolvers/containers/injectors/providers (depending on your language and framework) are also typically part and parcel of DI, and that can have some big implications for the structure of your code that some people don't like. Also you can inject functions and methods, not just constructors.


That's because those containers are convenient to use. If you don't like using them, you can configure the entire application statically from your program's entry point if you prefer.
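
For example, wiring everything by hand at the entry point can look roughly like this (the classes here are illustrative stand-ins, not any particular framework):

    // A hand-wired "composition root": every dependency is resolved once, at
    // the entry point, with no container involved.
    interface Mailer { void send(String to, String body); }

    class SmtpMailer implements Mailer {
        public void send(String to, String body) { System.out.println("mail to " + to); }
    }

    class SignupService {
        private final Mailer mailer;                  // injected, not constructed here
        SignupService(Mailer mailer) { this.mailer = mailer; }
        void signUp(String email) { mailer.send(email, "welcome"); }
    }

    class Main {
        public static void main(String[] args) {
            Mailer mailer = new SmtpMailer();         // the only place that picks an impl
            new SignupService(mailer).signUp("user@example.com");
        }
    }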


I don't understand how what you're describing relates to dependency injection. See https://news.ycombinator.com/item?id=43740196.


> Dependency injection is just passing your dependencies in as constructor arguments rather than as hidden dependencies that the class itself creates and manages.

This is all well and good, but you also need a bunch of code that handles resolving those dependencies, which oftentimes ends up being complex and hard to debug and will also cause runtime errors instead of compile time errors, which I find to be more or less unacceptable.

Edit: to elaborate on this, I’ve seen DI frameworks not be used in “enterprise” projects a grand total of zero times. I’ve done DI directly in personal projects and it was fine, but in most cases you don’t get to make that choice.

Just last week, when working on a Java project that’s been around for a decade or so, there were issues after migrating it from Spring to Spring Boot - when compiled through the IDE and with the configuration to allow lazy dependency resolution it would work (too many circular dependencies to change the code instead), but when built within a container by Maven that same exact code and configuration would no longer work and injection would fail.

I’m hoping it’s not one of those weird JDK platform bugs but rather an issue with how the codebase is compiled during the container image build, but the issue is mind boggling. More fun, if you take the .jar that’s built in the IDE and put it in the container, then everything works, otherwise it doesn’t. No compilation warnings, most of the startup is fine, but if you build it in the container, you get a DI runtime error about no lazy resolution being enabled even if you hardcode the setting to be on in Java code: https://docs.spring.io/spring-boot/api/kotlin/spring-boot-pr...

I’ve also seen similar issues before containers, where locally it would run on Jetty and use Tomcat on server environments, leading to everything compiling and working locally but throwing injection errors on the server.

What’s more, it’s not like you can (easily) put a breakpoint on whatever is trying to inject the dependencies - after years of Java and Spring I grow more and more convinced that anything that doesn’t generate code that you can inspect directly (e.g. how you can look at a generated MapStruct mapper implementation) is somewhat user hostile and will complicate things. At least modern Spring Boot is good in that more of the configuration is just code, because otherwise good luck debugging why some XML configuration is acting weird.

In other words, DI can make things more messy due to a bunch of technical factors around how it’s implemented (also good luck reading those stack traces), albeit even in the case of Java something like Dagger feels more sane https://dagger.dev/ despite never really catching on.

Of course, one could say that circular dependencies or configuration issues are project specific, but given enough time and projects you will almost inevitably get those sorts of headaches. So while the theory of DI is nice, you can’t just have the theory without practice.


Inclined to agree. Consider that a singleton dependency is essentially a global, and differs from a traditional global only in that the reference is kept in a container and supplied magically via a constructor variable. Also consider that constructor calls are now outside the application-layer frames of the callstack, in case you want to trace execution.


Dependency injection is not hidden. It's quite the opposite: dependency injection lists explicitly all the dependencies in a well defined place.

Hidden dependencies are: untyped context variable; global "service registry", etc. Those are hidden, the only way to find out which dependencies given module has is to carefully read its code and code of all called functions.
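
A toy sketch of the contrast (the registry and all names here are hypothetical):

    // Hidden dependency: only discoverable by reading the method body.
    class ReportJob {
        void run() { ServiceRegistry.get(Logger.class).info("running"); }
    }

    // Explicit dependency: listed right in the constructor signature.
    class ReportJobInjected {
        private final Logger log;
        ReportJobInjected(Logger log) { this.log = log; }
        void run() { log.info("running"); }
    }

    interface Logger { void info(String msg); }

    // The global "service registry" style that the injected version avoids.
    class ServiceRegistry {
        private static final java.util.Map<Class<?>, Object> services = new java.util.HashMap<>();
        static <T> void put(Class<T> type, T impl) { services.put(type, impl); }
        @SuppressWarnings("unchecked")
        static <T> T get(Class<T> type) { return (T) services.get(type); }
    }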


Because you’re passing functions to call.


??? What functions?

To me it's rather anti-functional. Normally, when you instantiate a class, the resulting object's behavior only depends on the constructor arguments you pass it (= the behavior is purely a function of the arguments). With dependency injection, the object's behavior may depend on some hidden configuration, and not even inspecting the class' source code will be able to tell you the source of that behavior, because there's only an @Inject annotation without any further information.

Conversely, when you modify the configuration of which implementation gets injected for which interface type, you potentially modify the behavior of many places in the code (including, potentially, the behavior of dependencies your project may have), without having passed that code any arguments to that effect. A function executing that code suddenly behaves differently, without any indication of that difference at the call site, or traceable from the call site. That’s the opposite of the functional paradigm.
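
A Guice-flavored sketch of that concern (the names are invented; the point is only that the call site never says which implementation it gets):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;

    interface PaymentGateway { String charge(int cents); }

    class FakeGateway implements PaymentGateway {
        public String charge(int cents) { return "fake:" + cents; }
    }

    class Checkout {
        private final PaymentGateway gateway;
        @Inject Checkout(PaymentGateway gateway) { this.gateway = gateway; }
        String pay() { return gateway.charge(500); }   // behavior depends on the binding
    }

    class AppModule extends AbstractModule {
        @Override protected void configure() {
            // Swap this one line and Checkout (and every other injection point)
            // behaves differently, with no change visible at any call site.
            bind(PaymentGateway.class).to(FakeGateway.class);
        }
    }

    class Demo {
        public static void main(String[] args) {
            System.out.println(
                Guice.createInjector(new AppModule()).getInstance(Checkout.class).pay());
        }
    }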


> because there’s only an @Inject annotation without any further information

It sounds like you have a gripe with a particular DI framework and not the idea of Dependency Injection. Because

> Normally, when you instantiate a class, the resulting object’s behavior only depends on the constructor arguments you pass it (= the behavior is purely a function of the arguments)

With Dependency Injection this is generally still true, even more so than normal, because you're making the constructor's dependencies explicit in the arguments. If you have a class CriticalErrorLogger(), you can't directly tell where it logs to: is it using a flat file, stdout, or a network logger? If you instead have a class CriticalErrorLogger(logger *io.writer), then when you create it you know exactly what it's using to log, because you had to instantiate it and pass it in.

Or like Kortilla said, instead of passing in a class or struct you can pass in a function, so using the same example, something like CriticalErrorLogger(fn write)
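
Roughly, in Java terms (just a sketch of the same idea; the writer-based constructor is one way to do it):

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Hidden dependency: you can't tell from the outside where this logs.
    class HiddenCriticalErrorLogger {
        private final PrintWriter out;
        HiddenCriticalErrorLogger() throws IOException {
            this.out = new PrintWriter(new FileWriter("errors.log"), true); // baked in
        }
        void log(String msg) { out.println("CRITICAL: " + msg); }
    }

    // Injected dependency: the caller decides, and can see, where it logs.
    class CriticalErrorLogger {
        private final PrintWriter out;
        CriticalErrorLogger(PrintWriter out) { this.out = out; }
        void log(String msg) { out.println("CRITICAL: " + msg); }
    }

    // Usage: obvious at the call site that this one writes to stdout.
    // new CriticalErrorLogger(new PrintWriter(System.out, true)).log("disk full");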


I don't quite understand your example, but I don't think the particulars make much of a difference. We can go with the most general description: With dependency injection, you define points in your code where dependencies are injected. The injection point is usually a variable (this includes the case of constructor parameters), whose value (the dependency) will be set by the dependency injection framework. The behavior of the code that reads the variable and hence the injected value will then depend on the specific value that was injected.

My issue with that is this: From the point of view of the code accessing the injected value (and from the point of view of that code's callers), the value appears like out of thin air. There is no way to trace back from that code where the value came from. Similarly, when defining which value will be injected, it can be difficult to trace all the places where it will be injected.

In addition, there are often lifetime issues involved, when the injected value is itself a stateful object, or may indirectly depend on mutable, cached, or lazy-initialized, possibly external state. The time when the value's internal state is initialized or modified, or whether or not it is shared between separate injection points, is something that can't be deduced from the source code containing the injection points, but is often relevant for behavior, error handling, and general reasoning about the code.

All of this makes it more difficult to reason about the injected values, and about the code whose behavior will depend on those values, from looking at the source code.


> whose value (the dependency) will be set by the dependency injection framework

I agree with your definition except for this part, you don't need any framework to do dependency injection. It's simply the idea that instead of having an abstract base class CriticalErrorLogger, with the concrete implementations of StdOutCriticalErrorLogger, FileCriticalErrorLogger, AwsCloudwatchCriticalErrorLogger which bake their dependency into the class design; you instead have a concrete class CriticalErrorLogger(dep *dependency) and create dependency objects externally that implement identical interfaces in different ways. You do text formatting, generating a traceback, etc, and then call dep.write(myFormattedLogString), and the dependency handles whatever that means.

I agree with you that most DI frameworks are too clever and hide too much, and some forms of DI like setter injection and reflection based injection are instant spaghetti code generators. But things like Constructor Injection or Method Injection are so simple they often feel obvious and not like Dependency Injection even though they are. I love DI, but I hate DI frameworks; I've never seen a benefit except for retrofitting legacy code with DI.

And yeah it does add the issue or lifetime management. That's an easy place to F things up in your code using DI and requires careful thought in some circumstances. I can't argue against that.

But DI doesn't need frameworks or magic methods or attributes to work. And there's a lot of situations where DI reduces code duplication, makes refactoring and testing easier, and actually makes code feel less magical than using internal dependencies.

The basic principle is much simpler than most DI frameworks make it seem. Instead of initializing a dependency internally, receive the dependency in some way. It can be through overly abstracted layers or magic methods, but it can also be as simple as adding an argument to the constructor or a given method that takes a reference to the dependency and uses that.
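
For example, method injection can be this small (hypothetical names):

    interface Clock { long now(); }

    class ReceiptPrinter {
        // No stored dependency at all; the caller supplies it per call.
        String print(String item, Clock clock) { return item + " @ " + clock.now(); }
    }

    class Example {
        public static void main(String[] args) {
            ReceiptPrinter printer = new ReceiptPrinter();
            System.out.println(printer.print("coffee", System::currentTimeMillis));
            System.out.println(printer.print("tea", () -> 0L)); // a fixed clock for a test
        }
    }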

edit: made some examples less ambiguous


The pattern you are describing is what I know as the Strategy pattern [0]. See the example there with the Car class that takes a BrakeBehavior as a constructor parameter [1]. I have no issue with that and use it regularly. The Strategy pattern precedes the notion of dependency injection by around ten years.

The term Dependency Injection was coined by Martin Fowler with this article: https://martinfowler.com/articles/injection.html. See how it presents the examples in terms of wiring up components from a configuration, and how it concludes with stressing the importance of "the principle of separating service configuration from the use of services within an application". The article also presents constructor injection as only one of several forms of dependency injection.

That is how everyone understood dependency injection when it became popular 10-20 years ago: A way to customize behavior at the top application/deployment level by configuration, without having to pass arguments around throughout half the code base to the final object that uses them.

Apparently there has been a divergence of how the term is being understood.

[0] https://en.wikipedia.org/wiki/Strategy_pattern

[1] The fact that Car is abstract in the example is immaterial to the pattern, and a bit unfortunate in the Wikipedia article, from a didactic point of view.


They're not really exclusive ideas. The Constructor Injection section in Fowler's article is exactly the same as the Strategy pattern. But no one talks about the Strategy pattern anymore, it's all wrapped into the idea of DI and that's what caught on.


I'm curious, which language/dev communities did you pick this up from? Because I don't think it's universal, certainly not in the Java world.

DI in Java is almost completely disconnected from what the Strategy pattern is, so it doesn't make sense to use one to refer to the other there.


It was interesting reading this exchange. I have a similar understanding of DI to you. I have never even heard of a DI framework and I have trouble picturing what it would look like. It was interesting to watch you two converge on where the disconnect was.


Usually when people refer to "DI Frameworks" they're referring to Inversion of Control (IoC) containers.


> dependency injection is a programming technique in which an object or function receives other objects or functions that it requires, as opposed to creating them internally


How is the configuration hidden? Presumably you configured the DI container.


It starts off feeling like a superpower, allowing you to change a system's behaviour without changing its code directly. It quickly devolves into a maintenance nightmare though, every time I've encountered it.

I'm talking more specifically about Aspect Oriented Programming though and DI containers in OOP, which seemed pretty clever in theory, but have a lot of issues in reality.

I take no issues with currying in functional programming.


In terms of aspects I try to keep it limited to already existing framework touch points for things like logging, authentication and configuration loading. I find that writing middleware that you control with declarative attributes can be good for those use cases.

There are other good uses of it, but it absolutely can get out of control, especially if implemented by someone who's just discovered it and wants to use it for everything.



