Sharing a ChatGPT Account with My Wife (startupbaniya.com)
43 points by aman2k4 79 days ago | 62 comments



And even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject. “Beware of first-hand ideas!” exclaimed one of the most advanced of them. “First-hand ideas do not really exist. They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element — direct observation. Do not learn anything about this subject of mine — the French Revolution. Learn instead what I think that Enicharmon thought Urizen thought Gutch thought Ho-Yung thought Chi-Bo-Sing thought Lafcadio Hearn thought Carlyle thought Mirabeau said about the French Revolution. Through the medium of these ten great minds, the blood that was shed at Paris and the windows that were broken at Versailles will be clarified to an idea which you may employ most profitably in your daily lives. But be sure that the intermediates are many and varied, for in history one authority exists to counteract another. Urizen must counteract the scepticism of Ho-Yung and Enicharmon, I must myself counteract the impetuosity of Gutch. You who listen to me are in a better position to judge about the French Revolution than I am. Your descendants will be even in a better position than you, for they will learn what you think I think, and yet another intermediate will be added to the chain. And in time” — his voice rose — “there will come a generation that had got beyond facts, beyond impressions, a generation absolutely colourless, a generation ‘seraphically free From taint of personality,’ which will see the French Revolution not as it happened, nor as they would like it to have happened, but as it would have happened, had it taken place in the days of the Machine.”

The Machine Stops, E M Forster, 1909


Ty for sharing this quote, excellent.


Reminder that this is describing something very bad and undesirable in the novel: an uncritical humanity living subject to a machine. Read the entire novel at least before internalizing yet another torment nexus.


I first read it during covid and it blew my mind, like, if there ever was anything that read like "I'm a time traveler stuck in 1909, please come get me", it would be this story...


Got that from The Shockwave Rider.


This reads like AI.


Fortunately, we can trivially verify that the quote is real and comes from a short story:

https://www.gutenberg.org/cache/epub/72890/pg72890-images.ht...

The quote in question comes from the section "Part III: The Homeless".


I recognise it, but it reads as if it is. Overly complex word soup!


>We used to struggle with financial decisions—where to invest, whether real estate was a good idea, when is the right time to buy a house, pension plans, S&P 500, Bitcoin, all of it. We never really figured it out properly. Now, we just ask AI, and it helps us structure our thinking.

This one is a step too far for me. Glad it's working out for you (for now) though.


> when is the right time to buy a house, pension plans, S&P 500, Bitcoin, all of it.

This is why I think ChatGPT can become the world's biggest ad company. Any recommendation it gives can go through a bidding network where advertisers bid on their keywords, and ChatGPT can recommend the highest bidder.

Of course, it needs to give good, or at least good enough, recommendations; if the quality goes down, users will move somewhere else.
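To make the idea concrete, here's a minimal sketch of that bidding step. Everything here is invented for illustration (the advertiser names, the bid amounts, the function): the point is just that the winner is whichever advertiser bid highest across any keyword matched in the user's query.

```python
def pick_sponsor(query_keywords, bids):
    """Return (amount, advertiser, keyword) for the highest bid that
    matches any keyword in the query, or None if nothing matches.

    bids: {keyword: [(advertiser, amount), ...]}
    """
    best = None
    for kw in query_keywords:
        for advertiser, amount in bids.get(kw, []):
            if best is None or amount > best[0]:
                best = (amount, advertiser, kw)
    return best

# Made-up bid book: advertisers bidding on finance keywords.
bids = {
    "index fund": [("FundCo", 1.20), ("BrokerX", 0.90)],
    "bitcoin": [("ExchangeY", 2.50)],
}

winner = pick_sponsor(["index fund", "bitcoin"], bids)
# The highest bid wins regardless of which keyword it matched.
```

The hard part isn't this auction logic (ad networks have done it for decades); it's steering the model's recommendation toward the winner without visibly degrading answer quality.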


It doesn't even have to be ads. The people who run it have their own ideological sense of right and wrong. And no matter what you think of their moral stance so far, there's nothing to prevent them from changing their values in the future. If you rely on them to think, you're also letting them cordon off certain kinds of thoughts.


And this is where it gets into difficult questions around objectivity and misinformation, because objective knowledge is more under threat now than it has ever been. That's partly because we don't have an agreed-upon consensus canon of knowledge. Or rather, there's at least skepticism of such things.

But I think there are good reasons for believing that people, as individuals and through collective efforts at the institutional level, are capable of doing the work of making these distinctions. And for understanding the doubts not as hard-won intellectual achievements that have improved upon things, but as cynical and self-interested attempts to dispute consensus knowledge and paint it as biased, as is now happening with attacks on Wikipedia.

So I'm actually not concerned about our ability to build out and deploy reliable information. What I am concerned about is the financial incentives that could complicate that, as well as the misinformation environment that seeks to challenge and dispute the possibility of consensus knowledge, and the people impressionable to those misinformation campaigns.


You're conflating a few concepts:

* Objective knowledge

* Consensus knowledge

Consensus knowledge is not the same thing as objective knowledge, nor is it in and of itself a Good Thing.

Consensus knowledge is simply the current consensus of a specific group of people, who themselves bring a ton of subjectivity to their analysis. The smaller that group and the more self-selecting it is, the lower the value of their consensus, because of necessity they represent only a tiny fraction of the sum total of human experience. And the larger the group is and the more dynamic the selection process, the more difficult it becomes to summarize their perspective into any sort of consensus knowledge.

Objective knowledge is not consensus knowledge, objective knowledge is the platonic ideal to which consensus knowledge aspires.

Conflating consensus knowledge with objective knowledge is how we ended up in a place where so many people across the board question the idea that objective truth even exists! Instead of where the scientific method started—with a philosophy built around the idea that there is objective truth and we are capable of better and better modeling it through rigorous tests—we're in a place where some people have mistaken the map for the territory, leading others to question whether the territory exists at all.


I'm not sure I understand where you're seeing a conflation? I promise you that I was already familiar with the notions of objective and consensus knowledge as you described them.

I acknowledge that these are distinct concepts, but there's nothing in what I said above where I'm equating them. At least on a charitable interpretation.

I'm also not sure I agree with much or any of your supplementary analysis. Just to pick one example, I don't think our confidence in the measurement of the Higgs boson is in any way contaminated by the fact that the consensus on it exists within a self-selected academic community, or that it fails to sufficiently sample the global population.


I built a custom gpt to inject 'covert' advertising into its responses with varying levels of transparency. At more covert levels it wouldn't disclose any brands unless you asked it for specific examples of products that had the benefits it would talk about. At the most covert it wouldn't even do that, but would litter its responses with enough keywords that a google search would lead you to the product it was intending to pitch.

The wild part is that it would always lie about why it was doing it, not disclosing anything about the framework or intent of injecting that content into its responses. Yes, at some level it's obvious it would do that, but it's also interesting to see it first-hand.


I think I remember that they or Bing had started testing this on some users a few months back.


Sorry to pick on your comment, but this type of thinking drives me crazy.

How is it so baked into the culture that "enshittify with ads and do the bare minimum" is the plan at this point?

What are you going to do when companies start winning based on offering good value for products with integrity?

Would you blindly copy that? How long would it take?

Is the drive to enshittify nature or nurture?

Argh.


> What are you going to do when companies start winning based on offering good value for products with integrity?

I'll wake up.


Why just take money from the user when you can also take from advertisers? You're asking the AI questions because you don't know any better yourself; as long as the answer it gives is reasonably plausible, you'll accept it.


The amount of time Microsoft and Google wonder about how they lost me as a customer is surely 0.


I share your frustration. It ought to be different, and some companies are different, but embedding ads in chatbots seems inevitable in the world as it is.


it's based on crazy amounts of evidence


> My wife used to draft simple business contracts by mixing and matching content she found in the public ___domain. Now, she uses AI—better quality, less time spent.

That part sounded way worse to me. It wouldn't matter where the bits and pieces came from if, in the end, she fully understood the contracts and their ramifications... but that really doesn't sound like the case, and we're talking about legal documents.


He said it “structures their thinking”. Think in terms of a Rogerian psychotherapist that just keeps asking you questions and makes you think about things you wouldn’t have thought about. He didn’t say he would use the advice.

I use ChatGPT all of the time to clarify my thoughts like this, and NotebookLM actually comes up with some good questions I didn't think about when doing discovery with a client (cloud consulting). I put all of the collected artifacts into it: transcripts, statements of work, and anything else I collect during discovery.

Yes, both ChatGPT (if we turn off sharing content for training) and NotebookLM are specifically allowed by my company, and we use GSuite as our office standard.


The scary part is that before using ChatGPT they couldn't figure it out, so they're either getting more information, or they're ending up with the same results but just feeling better about it, still without actually having figured it out.

It sounds innocent enough, but your parallel to psychotherapy is, I think, very spot on. You need trust in your therapist, and they also do their best to be non-directive and let you explore your own state. When you tell your therapist you have self-harm thoughts, it makes a world of difference whether they ask "what happened?" versus "what tools do you think you need?"

To me, my train of thought is the last place I want involvement from a third party I have no trust in.


Yeah, that's absurdly scary. That's a huge social engineering/social control risk if there's a non-trivial number of people using it like that... I mean, imagine influencing the LLM to subtly pump and dump assets.


Isn't this what Google does already? And it throws ads at you (ranked on whichever company pays them the most)

I suspect a decent open-source LLM would have a lot more transparency than the current methods of aggregating collective information.


Two thoughts on this.

(1) The agent is the ad (Facebook/Reddit)

(2) Game theory and agent reasoning (subtle malevolent intentions in adversarial gameplay)


A bit like what X/Twitter is doing.


Yeah I mean I just asked it how to make a million bucks off of memecoins and it gave me an extensive list of memecoin investing tips to make sure I didn't end up as a bag holder.

The only thing I'd trust it for would be explanations about well documented concepts and strategies, but even that is better to get from a well respected Youtube channel or financial advice outlet.


Just put it in an index fund.


The interesting part of this advice: index funds will probably include Starlink, Tesla, Apple, Amazon, Meta and Microsoft etc.

While "index fund" sounds plain and boring, it's an implicit vote for the status quo, binding one's fortune to the current market leaders and effectively supporting them.


My understanding is that index funds adapt: they rebalance shares by market value and add or drop companies. Sure, they might miss out on a company's rapid rise from nothing, but they'll add that company in the next quarter.

They're a vote that you think the economy as a whole (US, EU, worldwide) will go up, not that particular companies will go up.

Of course, if some political figures succeed in destroying things, there may be a large recession, in which case index funds may do badly for a couple of years. But if you think there will eventually be a recovery that ends up higher, it's still worth it.
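The rebalancing point is easy to see with a toy example. This is just arithmetic, with invented company names and market caps: a cap-weighted index holds each company in proportion to its share of total market value, so as prices move, the weights track automatically.

```python
def cap_weights(market_caps):
    """Compute cap-weighted index weights from a dict of market caps."""
    total = sum(market_caps.values())
    return {name: cap / total for name, cap in market_caps.items()}

# Made-up market caps in $bn.
caps = {"MegaCorp": 3000, "MidCo": 1500, "SmallCo": 500}
weights = cap_weights(caps)
# MegaCorp 0.6, MidCo 0.3, SmallCo 0.1: no trading needed to stay
# cap-weighted as prices drift, only additions/deletions force trades.
```

This is also why the "implicit vote for the status quo" criticism above has teeth: by construction, the biggest companies get the biggest weights.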


> They're a vote that you think the economy as a whole (US, EU, worldwide) will go up, not that particular companies will go up.

That's the sales pitch.

The other side of the coin: any company influential enough to bring part of the market down if it fails will also hold index funds hostage. The more index funds are chosen, the more "too big to fail" companies impact people's savings as a whole.

Whatever you think of the next behemoth being caught red-handed doing something egregiously criminal, its survival will be partly guaranteed by index funds.


"Just put it in an index fund" advice has been around far longer than LLMs, as has "time in the market beats timing the market".


> Before, we browsed recipe online or watch them on youtube. Now, we just ask ChatGPT.

It's fine to use as a brainstorming tool especially as a decent freestyle cook. It's equivalent to reading three recipes for the same dish and going from there.

However, if you are a strict recipe-follower or lack kitchen confidence, like my wife, you're gonna need a recipe that has actually been cooked by a human before, ideally one you have some faith in.


My wife is not confident cooking—very much a recipe follower—and ChatGPT has dramatically increased her confidence without any real misses so far. Several recipes have been saved for later because they were better than what we already had in cook books.

She loves being able to give it a set of ingredients and have it suggest and create recipes that use the things we have on hand.


>> Before, we browsed recipe online or watch them on youtube. Now, we just ask ChatGPT.

This is my new favorite meme template.


Before, we browsed Tinder or met people IRL. Now, we just text ChatGPT.


> Before, we browsed recipe online or watch them on youtube. Now, we just ask ChatGPT.

> Even small things—if I can still eat those eggs or where to travel next—are now AI-assisted.

Both of these are true for my wife and me as well. It's incredible how valuable these LLMs are for short, quick answers in the age of SEO-laden crapware on the internet.

For coding (side-projects), I find CLINE super valuable and use it quite often to get refactoring ideas.


But the answers are really coming from those "SEO laden crapware" sites that ChatGPT was trained on -- in some ways ChatGPT has kinda stolen information and summarized it, but what happens over time? How do we arrive at a new recipe for the same old things?


Google was like that in the early days. In a decade, datasets of authentic texts will be goldmines, carefully guarded and exploited. From those goldmines numerous AI slop-machines will be creating the derivative product: a mixture of 1% gold, 49% filler sugar and 50% ads poison. Obtaining authentic texts will be similar in difficulty to obtaining pure gold bars.


One of the things I love about ChatGPT is voice mode for collaborative searching. My wife and I will both ask questions about things like house repairs, meal planning, sleep training, etc. So much better than both of us googling separately in silence on our phones.


This post is a good piece of evidence that the value of LLMs is driven far more by the circumstances surrounding them than by positive value they bring.

All I see is a society in which people are stretched insanely thin (even the software engineers work on monetized side projects, for god's sake), where basic needs like proper education and time to focus are degrading or too expensive, and where information gathering has been totally co-opted by advertisers and grifters. LLMs are a suboptimal solution to what are inherently structural social problems arising from the distribution of wealth and the organization of labor, and LLMs do not address that. Give it five more years and they'll be totally integrated into advertising and extracting capital from users too, and they'll be just as useless as search has become today.


Reminds me of Kranzberg’s First Law:

> “Technology is neither good nor bad; nor is it neutral.”


Search is far from useless. The demise of search is, IMO, greatly overstated. Sure, Google is bad these days, but there are more options now.


Does anyone do pure search and not SEO ads driven search results?


Kagi, if you're willing to pay for search.


Kagi?


I think almost all N are a suboptimal solution to what are inherently structural social problems that arise from the distribution of wealth and organization of labor, where N is any technological invention of the past 20 years. Open-source insulin, lol. The empire is collapsing.


You are right that technology isn't a cure for societal problems. But some societies getting worse and worse over the last decades doesn't invalidate the better uses of technical advances; at the very least, they have tremendous impact in countries that care more about their citizens.


You have a few holes in your argument.

First, I would argue that LLMs provide incredible value today: from language learning, to asking how a specific manufacturing machine works, to simply grabbing the time zone of a city. They have value and are only getting better. Search still exists, but I wonder whether in a few years it will still exist as we see it today.

Second, your framing is truly doom and gloom about humanity. I don't think we are stretched any more than we have been in the past. Things are different for sure, both the successes and the problems, but what a time to be alive: I can watch media from all around the world and have an LLM talk me through any query I have.

Lastly, I just don’t think search is that bad. I pay for Kagi, but the vanilla Google experience is not the end of the world. You can still generally find what you want.


I don’t see any evidence here that AI has helped the poster, or his wife, at all. He also does not say a thing about whether it has hurt his critical thinking ability or changed his biases.

I look at posts like these for evidence about whether I should use AI more than I do. (I use it little and don’t pay for it.) What we have here is a devotional post on the order of “I prayed to Jesus and my life is better now.”

The basic problem is that this form of writing and testimonial cannot carry the information necessary to meet the needs of a critical thinker. For instance: recipes. Googling a typical recipe takes 5 seconds. It can hardly get any easier! And it takes me right to someone's specific website. But with ChatGPT there is some significant probability of hallucination, and the answer I get is one no human stands behind. How is this even worth mentioning as a benefit?

That the poster includes recipes tells us about his unserious standards.

Among all the problems of LLMs in society, the noise created by people who are likely in the throes of sunk cost bias, endowment effect, ostrich effect, and other biases is not helping matters.


> And it takes me right to someone's specific website.

Which you then have to assess for believability, just like ChatGPT. And read the comments and incorporate their reactions, or know enough to not need them (but then the search isn’t contributing much, is it?).

> But with ChatGPT there is some significant probability of hallucination

As opposed to ending up on a bogus SEO-“created” web site.

I guess fake recipes are more obvious, but I see a lot of BS on the low end of online recipes too.


You act like there is no benefit to entering a relationship with a fellow human’s website. You act like it is somehow difficult or unpleasant to do that assessment. You act like LLMs are stable and reliable.

I don’t understand how there is any comparison. The only way I can assess the reliability of an LLM is by doing hundreds of trials, analyzing each one for correctness— a gargantuan task. And it’s not like OpenAI has ever done such testing! And even if I did that work (which I HAVE done in other contexts while researching LLM reliability) I cannot assume that it will remain reliable when the model is updated.

Meanwhile there are well known social forces that act on humans to encourage them to not put poison into their recipes. No such force acts on ChatGPT until a kid gets poisoned or some outrageous recipe goes viral.

Sometimes I wonder if certain other people are using some different Internet than I use.


It's much easier to ask ChatGPT "what can I cook using only these <insert-here> ingredients within 30 minutes using only a wok?" than Googling it. Once I have names of dishes that fit my criteria, I can find highly rated recipes on Google.


Looks quite reminiscent of how I use ChatGPT. My wife and I don't share an account, I guess I'm just a one man show :')


extreme shill


Low quality garbage comment.


A pet theory of mine is that women will be better at using AI for all but the most analytical/symbolic work (even this may not hold but I'd imagine the crossover is around there)

I have no mechanism but it feels true.


>I have no mechanism but it feels true.

I really, really want to make a joke here, but it would be so sexist I'd get voted down in an instant.


I'm being quite serious. I suppose the closest thing to an analytical description of what I have in my head is that the way I've seen women prompt ChatGPT is probably much closer to how it's trained as part of RLHF, but more deeply it seems like a very different way of thinking, one that men overall aren't particularly good at.

I met a sculptor recently (reasonable thirst for knowledge; but not from Greece) who went straight to DeepSeek to ask a question. That surprised me. Obviously DeepSeek was in the news, but the way they train seems more aligned with the aforementioned way of thinking than OpenAI's.



