It doesn't have to be a matter of affordability; rather, it's more efficient and cheaper to scale vertically first, in both monetary cost and time/maintenance cost.
On hardware, but not on a cloud setup? We run several hundred big ES nodes on AWS, and I believe we stick to the heap sizing guidelines (though I’ve long wondered if fewer instances with giant heaps might actually work ok, too)
Cloud is trickier to price than real hardware. On real hardware, filling the RAM slots is clearly cheaper than buying a second machine, if RAM is the only issue. If you need to swap in higher-density RAM instead, it's sometimes more cost-effective to just buy a second machine. Adding more processor sockets to get more RAM slots is likewise sometimes more, sometimes less cost-effective than adding more machines. And often you need more processing to go with the RAM, which can change the balance.
In cloud, with defined instance types, more RAM usually comes with more of everything else, and from the pricing listed at https://www.awsprices.com/ for US East, it looks like $/GB of RAM is usually consistent within an instance family. The least expensive (per unit of RAM) class of instances is x1/x1e, which range from 122 GB to 3,904 GB, so that does lean towards bigger instances being cost-effective.
Exceptions I saw: c1.xlarge is less expensive than c1.medium; c4.xlarge is less than other c4 types, and c4 overall is more expensive than comparable families; m1.medium < m1.large == m1.xlarge < m1.small; m3.medium is more expensive than other m3; p2.16xlarge is more expensive than other p2; t2.small is less expensive than other t2. Many of these differences are a tenth of a penny per hour, though.
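The comparison above is just dividing hourly price by RAM and sorting. A minimal sketch of that arithmetic, using made-up placeholder prices (not real AWS rates; the instance names here are generic stand-ins, not actual instance types):

```python
# Compare $/GB-of-RAM across instance sizes to spot which size is
# cheapest per unit of memory. Prices and RAM figures below are
# illustrative placeholders, NOT current AWS pricing.

instances = {
    # name: (hourly_price_usd, ram_gb)
    "small":  (0.11, 8),    # deliberately a slight outlier
    "large":  (0.40, 32),
    "xlarge": (0.80, 64),
}

def price_per_gb(price, ram):
    """Dollars per hour, per GB of RAM."""
    return price / ram

# Sort cheapest-per-GB first; with these placeholder numbers the two
# larger sizes tie at $0.0125/GB-hr and "small" costs slightly more,
# mirroring the within-family consistency (with small exceptions)
# described above.
for name, (price, ram) in sorted(
        instances.items(), key=lambda kv: price_per_gb(*kv[1])):
    print(f"{name:>7}: ${price_per_gb(price, ram):.5f}/GB-hr")
```

With real price sheets the same loop makes the x1/x1e advantage, and the per-family outliers, immediately visible.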