> when is the right time to buy a house, pension plans, S&P 500, Bitcoin, all of it.
This is why I think ChatGpt can become the world's biggest ad company. Any recommendation it gives can be go through a bidding network where advertiser can bid for their keywords and ChatGpt can give the highest bidders recommendation.
Of Course it needs to give good or good enough recommendation, if the quality goes down then users will move somewhere else.
It doesn't even have to be ads. The people who run it have their own ideological sense of right and wrong. And no matter what you think of their moral stance so far, there's nothing to prevent them from changing their values in the future. If you rely on them to think, you're also letting them cordon off certain kinds of thoughts.
And this is where it gets into difficult questions around objectivity and misinformation. Because objective knowledge is more under threat now than it has ever been. And it's in part because we don't have an agreed upon consensus canon of knowledge. Or rather, there's at least skepticism of such things.
But I think there's good reasons for believing that people as individuals and through collective efforts at institutional levels are capable of doing the work of making these distinctions. And for understanding the doubts not as hard one intellectual achievements that have improved upon things, but as cynical and self-interested attempts to dispute consensus knowledge in painted as biased as is now happening with attacks on Wikipedia.
So I'm actually not concerned about our inability to do Build out and deploy reliable information. I think what I am concerned about is the financial incentives that could complicate that as well as the misinformation environment which seeks to challenge and dispute the possibility of consensus knowledge, and people impressionable to those misinformation campaigns.
Consensus knowledge is not the same thing as objective knowledge, nor is it in and of itself a Good Thing.
Consensus knowledge is simply the current consensus of a specific group of people, who themselves bring a ton of subjectivity to their analysis. The smaller that group and the more self-selecting it is, the lower the value of their consensus, because of necessity they represent only a tiny fraction of the sum total of human experience. And the larger the group is and the more dynamic the selection process, the more difficult it becomes to summarize their perspective into any sort of consensus knowledge.
Objective knowledge is not consensus knowledge, objective knowledge is the platonic ideal to which consensus knowledge aspires.
Conflating consensus knowledge with objective knowledge is how we ended up in a place where so many people across the board question the idea that objective truth even exists! Instead of where the scientific method started—with a philosophy built around the idea that there is objective truth and we are capable of better and better modeling it through rigorous tests—we're in a place where some people have conflated the map for the territory, leading others to question whether the territory exists at all.
I'm not sure I understand where you're seeing a conflation? I promise you that I was already familiar with the notions of objective and consensus as you described them.
I acknowledge that these are distinct concepts, but there's nothing in what I said above where I'm equating them. At least on charitable interpretation.
I'm also not sure what I agree with much or any of your supplementary analysis. Just to pick one example, I don't think our confidence in the measurement of the Higgs Boson is in any way contaminated by the fact that the consensus on it exists within a self-selected academic community, or that it fails to sufficiently sample the global population.
I built a custom gpt to inject 'covert' advertising into its responses with varying levels of transparency. At more covert levels it wouldn't disclose any brands unless you asked it for specific examples of products that had the benefits it would talk about. At the most covert it wouldn't even do that, but would litter its responses with enough keywords that a google search would lead you to the product it was intending to pitch.
The wild part is that it would always lie about why it was doing it, not disclosing anything about the framework or intent of injecting that content into its responses. Yes at some level it's obvious it would do that, but it's also interesting to see it first hand.
why just take money from the user when you can also take from advertisers? you're asking the ai questions because you don't know any better yourself, as long as the answer it gives is reasonably plausible, you'll accept it.
This is why I think ChatGpt can become the world's biggest ad company. Any recommendation it gives can be go through a bidding network where advertiser can bid for their keywords and ChatGpt can give the highest bidders recommendation.
Of Course it needs to give good or good enough recommendation, if the quality goes down then users will move somewhere else.