My experience of rural areas is that few residents are actual farmers. After all, farming has largely consolidated and become automated. Most country people just don't want a city lifestyle. They might have some of the accoutrements of a farmer and have added lifestyle (enjoyable/fulfilling) overhead and significant attitude (independence & sometimes xenophobia) for themselves, but it's a lifestyle choice. Therefore most don't have a welder (though they probably know someone who has one).
That's rural but not "bumfuck nowhere". Within ~100 miles of a city there are a lot of rural non-farmers, but only farmers will live 200 miles away from the closest city.
Very few people live over 200 miles from a city in the lower-48 states of the US.
To give you an idea: It's 413 miles between Colorado Springs and Wichita[1], leaving a very narrow area to be over 200 miles from either. Grand Island, Nebraska is 402 miles from Denver.
Pretty much all the land that is over 200 miles away from a city of at least 50k population is in the Great Basin. To give you an idea, there are 3 cities in North Dakota (roughly a 300x200 mile rectangle) that have a population of at least 50k, and with Bismarck relatively near the center, that rules out much of the state alone.
1: Dodge City is technically a city, but at well under 50k population I'll omit it. If you allow anything called a city to count, you could probably fit the list of such people on a single piece of paper. Using the 50k cutoff, you still have 3 cities in North Dakota, roughly a 300x200 mile rectangle.
I don't think that's true, but can't quickly find evidence. Ultimately it can't be depended on and is something an EV buyer would want to verify for their region.
I'm not diminishing them, I am in awe of their work, I'm just pointing out that sometimes people (including myself) focus on esoteric projects that might have some learning opportunities, but don't directly solve a real problem (and might just be psychological mechanisms).
hi! author here! it's taken me many years to be able to self reflect enough to produce these next few words, but the answer is I have learned about myself that I perform better under pressure. I'm more consistent, reliable, and make better decisions when it's go-time. so I guess the answer to your question _should_ be yes.
although.. I've noticed I have other flaws that have burned me many times in responding to incidents. Oh well. The one I've been working on for years is "when the system is down, check the status page of the upstream provider first because it might not be your fault". I always assume it's my fault for way longer than I should.
I totally understand. When I have a project with a months-long timeline that I know can be done more quickly, I spend a lot of up-front time exploring areas that might be helpful, or might just be interesting on their own. A friend called this process "annealing." It does bring more creative and sometimes breakthrough solutions, but I find we are in a world where development creativity is subjugated to more top-down controlled ideas (which can often lead to a 'race to the bottom' for users in a startup culture focused on exits).
In my experience and logically, goths might form in smaller centres (or cities) but will converge in cities when they pursue university, careers, or richer environments. I hung out with goths in Toronto in the 90s, some were there for fashion reasons but many were quite committed including developing their own takes (hippy-goth, techno-goth, etc). The article's use of goths as an example is stretched.
I've been using YouTube Premium to avoid ads for years now. It's great, and apparently the video makers earn more too. I don't love Google's domination and some of their practices, but this is pretty reasonable.
Do you need to be in that galaxy? I easily get 8h with this $1400 64GB RAM AMD Thinkpad with an OLED screen running unoptimized Ubuntu (yes it looks 90s anonymous). An equivalent notebook for most practical purposes from Apple would be at least 3.5× more expensive.
I'm curious what you mean by that? Just that it's usually a large fraction of the total power draw? At least OLED emits light per pixel instead of blasting LED backlights and subtracting the light we don't want, as LCDs do.
I've initiated and contributed to a number of significant projects. My core strengths are ideation and quickly building robust proofs of concepts across diverse areas. I'm more interested in what the project is about than a get-rich plan, but sometimes the two come together. I don't need full-time work. If you need an experienced technical partner, let's connect.
This is still half the speed of a consumer NVidia card, but the large amount of memory is great, if you don't mind running things more slowly and with fewer libraries.
Was this example intended to describe any particular device? Because I'm not aware of anything that operates at 8800 MT/s, especially not with 64-bit channels.
That seems unlikely given the mismatched memory speed (see the parent comment) and the fact that Apple uses LPDDR, which is typically 16 bits per channel. 8800 MT/s seems to be a number pulled out of thin air, or the result of bad arithmetic.
Heh, ok, maybe slightly different. But Apple's spec claims 546GB/sec, which works out to 512 bits (64 bytes) * 8533. I didn't think the point was 8533 vs 8800.
I believe I saw somewhere that the actual chips used are LPDDR5X-8533.
Effectively the parent's formula describes the M4 Max, give or take 5%.
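The arithmetic above can be sketched out (assuming the 512-bit bus and LPDDR5X-8533 figures from this thread):

```python
# Peak memory bandwidth = bus width (bytes) * transfer rate.
# Assumed figures from the thread: 512-bit bus, LPDDR5X-8533.
bus_width_bits = 512
transfers_per_sec = 8533 * 10**6  # 8533 MT/s

bandwidth_bytes = (bus_width_bits // 8) * transfers_per_sec
print(f"~{bandwidth_bytes / 1e9:.0f} GB/s")  # ~546 GB/s, matching the claimed spec
```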
Fewer libraries? Any that a normal LLM user would care about? PyTorch, Ollama, and others seem to have the normal use cases covered. Whenever I hear about a new LLM, it seems like the next post is some Mac user reporting tokens/sec. Often about 5 tokens/sec for 70B models, which seems reasonable for a single user.
Is there a normal LLM user yet? Most people would want their options to be as wide as possible. The big ones usually get covered (eventually), and there are distinct good libraries emerging for Mac only (sigh), but last I checked the experience of running every kit (stable diffusion, server-class, etc) involved overhead for the Mac world.
A 24GB model is fast and ranks 3.
A 70B model is slow and ranks 8.
A top-tier hosted model is fast and ranks 100.
Past what specialized models can do, it's about a mixture/agentic approach and next level, nuclear power scale. Having a computer with lots of relatively fast RAM is not magic.
Thanks, but just to put things into perspective, this calculation counted 8 channels, which is 4 DIMMs, and that's mostly desktops (not dismissing desktops, just highlighting that it's a different beast).
Desktops are two channels of 64 bits, or with DDR5 now four (sub)channels of 32 bits; either way, mainstream desktop platforms have had a total bus width of 128 bits for decades. 8x64-bit channels is only available on server platforms. (Some high-end GPUs have used 512-bit bus widths, as have Apple's Max-level processors, but those use memory types where the individual channels are typically 16 bits.)
The vast majority of x86 laptops or desktops are 128 bits wide: often 2x64-bit channels until the last year or so, now 4x32-bit subchannels with DDR5. There are some benefits to 4 channels over 2, but generally you are still limited to 128 bits unless you buy a Xeon, Epyc, or Threadripper (or the Intel equivalent), which are expensive, hot, and don't fit in SFFs or laptops.
So basically the PC world is crazy behind the 256-, 512-, and 1024-bit-wide memory buses Apple has offered since the M1 arrived.
> This is still half the speed of a consumer NVidia card, but the large amounts of memory is great, if you don't mind running things more slowly and with fewer libraries.
But it has more than 2x longer battery life and a better keyboard than a GPU card ;)