Hacker News
Ask HN: Do you feel scummy making AI products?
49 points by radeeyate on March 27, 2024 | 46 comments
I've made a couple of AI websites/CLIs recently using Google Gemini (because it's free).

I've felt a bit weird making them because I feel I'm profiting off of "the AI hype train" rather than genuine interest in a product. While I'm not making any money off my projects, my analytics tell me I'm getting 10-20x more traffic on my AI-related projects than my other ones.

I'm just curious - does anyone feel similar to me?




I have a job now, but from November 2023 until Feb 2024 I was unemployed.

I think ChatGPT and Stable Diffusion are pretty cool, but, like blockchain before them, there are a lot of low-effort projects riding on investor hype money. I became intimately familiar with this because a lot of recruiters reached out to me with AI positions.

When I would take an interview, the company's pitch was almost invariably "add an LLM to something where an LLM doesn't really fit", and the person in charge was someone with basically no industry experience who had made a rough prototype and managed to convince some VC folks that what they're doing is legendary. Obviously most people doing stuff with AI are perfectly fine humans, and I'm not suggesting otherwise, but I do think that the "AI entrepreneur" label is attracting the "get rich quick" type of people.

STORY TIME:

I had one case where I took an interview and they asked me to whiteboard a notification system. That was totally fine, so I started drawing up diagrams, but they would stop me occasionally to ask for proper nouns [1] so that they could write them down. Initially I didn't think much of it, but after the interview was over I got the vibe that they had used me for unpaid consulting, because the questions were more specific than the exercise required. They did make me an offer, but it was way too low, so I turned it down.

A month later I got an email from their CTO asking for some advice on distributed task scheduling, e.g. which libraries I would use and how to make sure things run in the right order. I gave a very simplified response with three links to open-source documentation. He responded asking for more clarification, and I replied with "any further discussion on the matter will require me to charge my hourly consulting rate. Please let me know if you would like to continue this discussion so I can prepare an invoice."

He stopped responding after that.

[1] Which message queue was that again? Which binary serialization library? Where do you download those?


The AI "hype train" exists because people really want to find a way to make AI useful. It was a big technological advancement that is now in search of a purpose. The same thing happened with blockchain: not a ton came out of it except Bitcoin and its clones. AI, though, is much easier to imagine a use for.

The market is throwing attention and money at these AI products because one might end up being truly useful as a product. You should be rewarded for your efforts in exploring the product space, trying things out, creating real value through application/product engineering.

I'll also say that you aren't ripping anyone off. Google / OpenAI want you to find product/market fit so they can rip you off right back. They are essentially outsourcing that work. Get your bag while you can.


I'm not an AI enthusiast by any means, but I can still get behind all this.

If OP can productize and suddenly make an extra $1k/mo, more power to them.

When you boil everything down, most code can technically be recreated by any skilled-enough individual(s). Sometimes, an Enterprise License for some software saves a company enough money relative to the cost of building in-house to where they just want to sign a check and be given an API key, in order to focus on things they're naturally better at.

That's why I have a job. People want coded stuff without hiring the coding people. My company lets another company sign on the dotted line and get access to developer braintime.


AI is useful today. For example, I'm finishing a project where the company I work for wanted to extract some data from bills and documentation. At my previous company I did a project with OCR and deep learning, and it was quite useful. But this time, since we don't really need to tabulate the data, just extract some info here and there, what we did was attach an LLM (via an API) after the OCR extraction. We're now getting a big improvement, with flexible queries over the information we're required to extract from bills and documentation, from a solution that is easy to build and not that heavy or costly.
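The "LLM after OCR" step described above can be sketched roughly like this. Everything here (field names, prompt wording, function names) is an illustrative assumption, not the commenter's actual implementation; the real LLM call depends on whichever API the team used.

```python
import json

def build_extraction_prompt(ocr_text, fields):
    """Ask the model to pull the named fields out of raw OCR text as JSON."""
    return (
        "Extract the following fields from this invoice text and reply "
        f"with JSON only ({', '.join(fields)}). "
        "Use null for any field you cannot find.\n\n"
        f"---\n{ocr_text}\n---"
    )

def parse_extraction(reply, fields):
    """Validate the model's reply: keep only requested fields, null the rest."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        data = {}
    return {f: data.get(f) for f in fields}

fields = ["vendor", "total", "due_date"]
prompt = build_extraction_prompt("ACME S.L. ... TOTAL 120,50 EUR", fields)
# `prompt` would be sent to the LLM API; suppose it replies:
reply = '{"vendor": "ACME S.L.", "total": "120,50 EUR", "extra": 1}'
result = parse_extraction(reply, fields)
# result: {'vendor': 'ACME S.L.', 'total': '120,50 EUR', 'due_date': None}
```

The validation step matters: constraining the reply to a fixed field list and tolerating malformed JSON is what makes an unreliable model usable in a pipeline.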

I'll note that usually when we talk about AI here we mean LLMs. On the other hand, my last project for this company was an energy demand and allocation time-series model with automatic retraining whenever it detects a large error between the model's latest statistics and reality. That project was complex, but it impacted the company positively to the tune of millions of euros in earnings.
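A retraining trigger like the one described can be sketched as an error check over a recent window. The metric (MAPE) and the 15% tolerance are illustrative assumptions, not the commenter's production system.

```python
def needs_retraining(predictions, actuals, tolerance=0.15):
    """True when mean absolute percentage error over the window exceeds tolerance."""
    errors = [abs(p - a) / abs(a) for p, a in zip(predictions, actuals) if a != 0]
    if not errors:
        return False
    return sum(errors) / len(errors) > tolerance

# e.g. compare recent hourly forecasts against measured demand:
print(needs_retraining([100, 102, 98], [101, 100, 99]))  # small errors -> False
print(needs_retraining([100, 100], [60, 55]))            # large drift -> True
```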

So, yeah. Statistics and programming work. Usually when we talk we refer to LLMs as AI, but there are other situations where "classic" data science is useful and more than enough.


Very alarming to hear you're using LLMs to extract and report billing data. LLMs are frequently wrong; you will be getting incorrect and often totally fabricated data. I hope you're not using this information to bill customers or generate any kind of financial documents.


> I feel I'm profiting off of "the AI hype train"

> While I'm not making any money off my projects

????


The real profit is the friends you make along the way.


I'm still trying to unlock profanity mode on my AI girlfriend.


Have you used "sudo"??


I think one usage of "profit" is "making money" and the other is "benefitting from".

"I feel I'm benefitting from the AI hype train although I'm not making money from my projects."


Sorry about that, bad wording. I meant that I'm benefiting from it.


I mean, something a bit more quantifiable is being able to put stuff like this on your resume. Every company wants "AI experts" right now, and if you have a successful project using AI, that can land you interviews.


The feeling you're experiencing is due to working in an industry you don't believe in just for the returns (whatever those returns may be). Making money off of something you don't believe in rarely feels good.

I've felt the same way working in finance until I learned there are ways to benefit retail investors and not just hedge funds through my work.

I've felt the same way when I was doing freelance server management for Minecraft servers, until I started looking at it through the lens of an entertainment enabler rather than someone leeching off of a very lucrative environment.

Find purpose in the work you're doing, other than money or fame.


You made a website using tool A, and got a little traffic.

You made a new website using tool B, and got 20x the traffic.

So people are finding website B more useful and engaging with it more.

What part do you feel guilty about exactly?


> So people are finding website B more useful and engaging with it more.

That doesn't follow. The OP guessed that it was only getting more traffic due to AI hype, not necessarily that it's more useful.


One can certainly make a thing that one has moral qualms about the popularity of.

"You grew some vegetables and sold $X worth. You refined crack cocaine and sold $1,000x worth. So people are finding crack cocaine more useful and engaging with it more."

Yes, and we've had whole wars over what that does to a society in the large.


If website A is something I've been working on for over a year, and website B is something that took me 8 hours to make, while website B gets 20x more traffic, it feels a bit weird to me. (This is where the idea of making AI products feels scummy: something that is just a toy and took no real effort gets more attention than something that has been a huge project.)


I commend you for having a conscience. I guess the real topic is not about AI (because this discussion could have been about NFTs a couple of years ago), but whether one feels any reservations about pursuing projects based purely on their potential appeal rather than out of a genuine interest.


And what I really fail to understand (honestly) is why this is even a question.

It's not confined to HN by any means, but for my entire professional life I've gotten the feeling that programmers, more than any other group, feel guilty about making money.


Well, drug dealers also make money.


No, I make the boring, stupid AI stuff that actually works, has product-market fit, and generally serves to reduce toil without reducing headcount, on use cases that concretely benefit the world.

Consequently, I'm horribly underpaid relative to the 7- or 8-figure salaries the generative boys are getting right now!


Same. Except reducing headcount is pretty much a focus of ours for 2024...


I work in a fairly real-world industry (manufacturing QA, reliability), so by policy and law there is always a final human QA step. My job is not trying to remove that step; it's trying to remove the drudgery and fatigue of looking at shittons of guaranteed true negatives all day and giving the human operator only the weird stuff to figure out.


I felt that way working on display ads. Always seemed odd that I run an ad blocker while hoping everyone else does not.


I don't think AI is unethical by default. I mean, we've seen a multitude of tech markets that get filled with shovelware. I remember back in the Radio Shack days they would have a rack of shareware-like cheap software for hundreds of different use cases. Or the thousands of indie games on Steam or mobile websites. Or the thousands of iPhone and Android apps that are useless money grabs. None of these invalidate the good software that was/is available as shareware, indie games or mobile apps. The same is true for AI based software.

What I have been arguing is that Microsoft, Google and Amazon are investing billions of dollars in AI. They want to sell this AI as fast as possible to recoup their investment. This kind of environment won't last forever, so you should consider taking advantage as long as you can.


> rather than genuine interest in a product

The overwhelming majority of people going to work every day are doing it to earn a paycheck, not because of any genuine interest in what they are doing. You're in good company.


Hype trains are genuine interest. Short lived or not.

Getting attention isn’t anything to feel bad about.

If you’re lying in your marketing material, feel bad. If not, just build what you want and let the chips fall.


Well, our "AI" is an affordable liquid biopsy for lung cancer detection that is based on sound scientific principles, so I would say I feel pretty good about it. If wrapping LLMs is starting to feel gimmicky, then maybe try wading into other types of AI, or even studying the theory and training models yourself. Just do what gives you fulfillment.


I think the question might be a little too general to get good answers back. But it's an excellent question IMHO, and one that I really wish were more in the forefront of people's thinking.

If I were to make a website on some topic where all the content was AI-generated, I would feel scummy whether or not I was profiting financially. It would be adding noise to search results without adding value. It's like setting up wind farms right along the sides of interstates to produce "free power!" from the wind produced by trucks driving by. Except it's not free because it produces a small amount of drag paid for by the gasoline used by the trucks.

If I were to make a website with my own lame thoughts on the same topic, which didn't especially add any new ideas or break any new ground, but it felt significant to me, I wouldn't feel scummy at all. Even if it was just because of some funky CSS layout tricks and not really the content. Contributing a perspective feels fundamentally worthwhile. I think that would be my primary metric, in answer to your question.

Same thing if I were publishing a new package on npm or github or cargo or whatever. If I got the code from an AI or some other existing source and didn't add anything of my own, I would feel pretty scummy. I'd be squatting on some namespace and polluting the results of people looking to solve a problem. If I instead wrote some crappy code that worked the way I wanted it to, or used a novel-to-me technique, then I'd be totally fine with it.

There's a vast gray area in between those, though. It's a judgement call as to whether you've injected "enough" of yourself (your time, effort, care, interest...) to move from scummy to good. It'll vary between people and over time (my beginner project might feel awesome to me and dangerously incompetent to more experienced people.)

(And for the record, making a fully AI-generated website is a great learning experience and a worthwhile endeavor. The question of scumminess only comes up when you publish and promote it. Publishing with a robots.txt to make it be ignored is no problem at all.)

That's my 2¢ anyway.


Bezos actively advocates for riding the wave of trends in his Lex Fridman interview. I think if you can leverage external factors to get traction and act as a tailwind for your work, then why not take advantage of that?


That sounds like an "ends justify the means" sort of argument; certainly one that explains a lot of Jeff Bezos's business choices, but a rather objectionable stance ethically.

If parent poster considers the general development or use of AI to be in any way harmful, then riding the hype train to fund/promote worthy goals doesn't absolve them of culpability under conventionally-held morality! However, if parent simply considers AI to be morally neutral, no harm is caused by profiting from it, whether for altruistic purposes or selfish ones.


Is Bezos a good example of a moral or even good person?


If you think it's a useful thing you made, you shouldn't feel guilty at all. If you don't think it's useful, why bother doing it?


This is how I felt when working on cryptocurrency projects (the last hype train).

The money was good though!


I've refused to work on AI-related initiatives at previous jobs, and will again if they ever come up. I have deep ethical objections about LLMs in terms of power use, consent for providing training data, fitness and safety of LLM-powered solutions for problems that cannot afford errors, and the potential and already-happening effects of replacing human workers without a suitable safety net for when their incomes disappear.

Crucially, I do not think predictions that all these issues will improve are a good enough justification to keep innovating before they have improved. Harm caused now is not undone just because we fixed the flaws later.

In that sense: I think feeling weird about the hype train is completely normal, but for different reasons. I do not want any complicity in legitimizing LLMs.

Besides ethical concerns, I also think the myriad applications of LLMs are mostly misguided market waste. In that sense, profiting off the hype could be seen as you simply slurping up some of that waste for yourself, and while I don't like that function of the system, I think the system is the issue rather than you trying to exist within it. If you don't share my ethical concerns or aren't objecting to the market's function of trying all ideas and assuming the good ones profit, then you're probably not really doing anything scummy by your own standards.


As long as you don't need AI (in its current form) to maintain/extend/understand your code, I don't see the problem.


Whenever I start feeling scummy I recall the Morlocks and the Eloi from The Time Machine — only in our world people deliberately choose to be Eloi and not Morlocks.

<bondvillain>...and it's not like we literally eat Eloi, anyway. I'm chuffed with a 5% cut, so it's more like being a vampire, a parasite, than a werewolf, a predator. What was that management story about the chicken and the pig at breakfast? The sons of Mary* are only involved with me; the Eloi were committed to the Morlocks, after all.</bondvillain>

* https://www.kiplingsociety.co.uk/poem/poems_martha.htm


With many of these solutions, there's already a there there. Middle-market and larger companies are seeing dramatic efficiency improvements in year one. Making this tech so accessible, while unfortunately inviting crypto bros to add the "prompt engineer" title to their LinkedIn, also enables POCs that deliver value, even if wildly inefficient. They're a starting point.

Wrappers around ChatGPT have already provided more value generation than any prod blockchain implementation I've ever heard of that couldn't otherwise be implemented as a distributed db.


Something useful is going to come from AI. A lot of very successful technologies had cycles of hype and false starts.

Don't feel scummy; instead realize that you're learning how to better work with AI and getting a better feel for what's really possible.

Edit: Do you feel scummy about planting tulips, using radio, or the internet? All three of those had bubbles / hype too.


I think the scummy feelings might come from knowing that the tools you are using were trained on the work of other people without getting permission, offering compensation, or even providing acknowledgement.


> the tools you are using were trained on the work of other people without getting permission, offering compensation, or even providing acknowledgement

I find that viewpoint rather myopic. Once information is generally available, it's unreasonable to expect that you can retain control of how that information is used.


Much of my stuff is freely available online under a Creative Commons "Attribution-NonCommercial-ShareAlike" license; the "freely available online" part doesn't give you either the legal or the moral right to ignore the "attribution", "noncommercial" or "share alike" parts.


At the same time, it's rather myopic to disrupt what reciprocal incentives there might be in an information sharing environment and somehow expect that the sharing will continue indefinitely.


AI doesn't feel scummy, since you're not trying to cheat people; it just feels intellectually unfulfilling, because unless you're at the tip of the spear you're just remixing techniques as they become known to the field, and the churn might be the fastest in the whole tech industry.


I wish more people posted questions like this. Questioning yourself and your path (and importantly, the ethics of it all) is a vital survival skill to avoid losing yourself entirely to the capitalism game, where you start to listen more to your analytics software than your sensibilities and moral compass.

I don't know if your AI products are scummy or not, but the fact that you're asking this question makes me less worried about you than about a lot of other folks out there furiously rushing to get their AI pre-crime hot dog detectors out the door so they can get on the NASDAQ.


As soon as I take my focus away from here, your entire industry will collapse. Because you are using my personal energy and my personal time, so I will do it. You have to pay for everything in real life.



