> Selling by personal connection and a motorcycle ride is really a form of cronyism or corruption.
Hardly. It's just "less than ideal."
> The end consumers and shareholders get shafted.
You're assuming that a good deal CANNOT possibly be made on the back of a motorcycle? There's no reason to believe a suboptimal strategy can't arrive at the optimal solution.
> This kind of nonsense is especially pronounced and explicit in less well off societies and lowers the general standard of living.
This is the way business has been done in the first world for 100 years. You should calibrate your assumptions to the actual data.
If the deal is made or influenced BECAUSE they both like motorcycles and shared a personal bonding moment (rather than objective, business-related factors), it is a form of cronyism or corruption.
> This is the way business has been done in the first world for 100 years.
Yes, I am aware cronyism and low-level corruption exist in the first world. In other places you probably just give the person an expensive gift or a briefcase full of money and save time. But it's not optimal.
Point being, this story sounds like an argument in favor of AI to me.
You start your statement with a conditional. So, in your estimation, how often is it true?
> But it's not optimal.
And your assumption is that it would be cheap to optimize literally every decision we make regardless of its total impact on the actual outcomes? It's almost never worth the actual costs for the given gains.
> this story sounds like an argument in favor of AI to me.
It sounds to me like an argument for you to gain more real world experience.
How much "real world experience" do you think I might have?
I'm not sure what your point is exactly.
I think my point is pretty clear, but here it is again just in case: "If high-dollar business deals are being made because of personal likes or feelings of the CEO towards a salesperson, AI may just maybe be more optimal."
> How much "real world experience" do you think I might have?
10 years or so.
> I'm not sure what your point is exactly.
I disagree with you and am attempting to explain why.
> "If high dollar business deals are being made because of personal likes or feelings of the CEO towards a salesperson, AI may just maybe be more optimal".
If. What percentage of deals do you think are made that way? How big of a problem do you think this /actually/ is?
Personal likes and feelings may play a part, but are you suggesting that's the only metric being used in these circumstances? Like, if a company has a product that just won't work and will cause tons of problems, are you saying a motorcycle ride could smooth that over?
What makes you think AI can optimize this problem that might not even actually exist?
It's amazing to me that people think AGI is going to come about while simultaneously doing exactly what you tell it to do. I mean, I'm literally trying to picture the world you say you would prefer, and I just can't. Worse, even if I blur the definitions, I actually can't see it being more "efficient."