The call to action button says "Get Started for Free", while the pricing page lists $20/month.
Clicking the Get Started button immediately wants me to sign up with GitHub.
Could you explain on the pricing page (or just to me) what the 'free' is? I'm assuming a trial of 1 month or 1 PR?
I'm somewhat hesitant to add any AI tooling to my workflows, but this is one of the use cases that makes sense to me. I'm definitely interested in trying it out; I just think it's odd that this isn't explained anywhere I could find.
> because it's 2025—Zed can ask your preferred LLM to write the message for you
Zed seems to have a lot going for it. Though the ever-present AI push has me concerned about investing my time into learning its intricacies, for fear of it devolving into Yet Another AI Editor.
So far it’s all optional. I think the editor is worth trying purely for how snappy the UI is, while not really missing any of the major VS Code features I actually use. Even without the AI stuff, it’s an editor with a lot of potential.
I'm in the same boat as the person you're responding to. I really don't understand how to get anything helpful out of ChatGPT, or more than anything basic out of Claude.
> I've found that if you treat it more like a colleague, it works wonderfully.
This is what I've been trying to do. I don't use LLM code completion tools. I'll ask anything from how to do something "basicish" with HTML & CSS, and it'll always output something that doesn't work as expected. Question it and I'll get into a loop of the same response code, regardless of how I explain that it isn't correct.
On the other end of the scale, I'll ask about an architectural or design decision. I'll often get a response that is in the realm of what I'd expect. When drilling down and asking specifics however, the responses really start to fall apart. I inevitably end up in the loop of asking if an alternative is [more performant/best practice/the language idiomatic way] and getting the "Sorry, you're correct" response. The longer I stay in that loop, the more it contradicts itself, and the less cohesive the answers get.
I _wish_ I could get the results from LLMs that so many people seem to. It just doesn't happen for me.
My approach is a lot of writing out ideas and giving them to ChatGPT. ChatGPT sometimes nods along, sometimes offers bad or meaningless suggestions, sometimes offers good suggestions, sometimes points out (what should have been) obvious errors or mistakes. The process of writing stuff out is useful anyway and sometimes getting good feedback on it is even better.
When coding I will often find myself in kind of a reverse pattern from how people seem to be using ChatGPT. I work in a Jupyter notebook in a haphazard way, getting things functional and basically correct; after this I select all, copy, paste, and ask ChatGPT to refactor and refine it into something more maintainable. My janky blocks of code and one-offs become well-documented scripts and functions.
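The refactor I'm describing is usually mundane. A toy sketch (hypothetical code, not from my actual notebooks) of the kind of before/after I mean:

```python
# Before: the sort of janky one-off cell I'd write in a notebook.
vals = [3, 1, 4, 1, 5]
s = 0
for v in vals:
    s += v
m = s / len(vals)

# After: what I ask ChatGPT to turn it into -- a named, documented,
# reusable function with basic error handling.
def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

print(mean([3, 1, 4, 1, 5]))  # 2.8
```

The point isn't that the logic changes, it's that the throwaway version becomes something I can share and reuse.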
I find a lot of people do the opposite, where they ask ChatGPT to start, then get frustrated when ChatGPT only goes 70% of the way and it's difficult to complete the imperfectly understood assignment - harder than doing it all yourself. With my method, where I start and get things basically working, ChatGPT knows what I'm going for, I get to do the part of coding I enjoy, and I wind up with something more durable, reusable, and shareable.
Finally, ChatGPT is wonderful in areas where you don't know very much at all. One example, I've got this idea in my head for a product I'll likely never build - but it's fun to plan out.
My idea is roughly a smart bidet that can detect metabolites in urine. I got this idea when a urinalysis showed I had high levels of ketones in my urine. When I was reading about what that meant I discovered it's a marker for diabetic ketoacidosis (a severe problem for ~100k people a year), and it can also be an indicator of colorectal cancer, as well as indicating a "ketosis" state that some people intentionally try to enter for dieting or wellness reasons. (My own ketones were caused by unintentionally being in ketosis, I'm fine, thanks for wondering.)
Right now, you detect ketones in urine with a strip that you pee on, and that works well enough - but it could be better, because who wants to use a test strip all the time? Enter the smart bidet. The bidet gives us an excuse to connect power to our device and bring the sensor along. Bluetooth detects a nearby phone (and therefore the identity of the depositor), a motion sensor detects a stream of urine to trigger a reading, and then our sensor detects ketones, which we track over time in the app, ideally alongside additional metabolites that have useful diagnostic purposes.
How to detect ketones? Is it even possible? I wonder to ChatGPT if spectroscopy is the right method of detection here. ChatGPT suggests a retractable electrochemical probe similar to an extant product that can detect a kind of ketone in blood. ChatGPT knows what kind of ketone is most detectable in urine. ChatGPT can link me to scientific instrument companies that make similar (ish) probes where I could contact them and ask if they sold this type of thing, and so on.
Basically, I go from peeing on a test strip and wondering if I could automate this to chat with ChatGPT - having, what was in my opinion, an interesting conversation with the LLM, where we worked through what ketones are, the different kinds, the prevalence of ketones in different bodily fluids, types of spectroscopy that might detect acetoacetate (available in urine) and how much that would cost and what challenges would be and so on, followed by the idea of electrochemical probes and how retracting and extending the probe might prolong its lifespan and maybe a heating element could be added to dry the probe to preserve it even better and so on.
Was ChatGPT right about all that? I don't know. If I were really interested I would try to validate what it said, and I suspect I would find it was mostly right and incomplete or off in places. Basically like having a pretty smart and really knowledgeable friend who is not infallible.
Without ChatGPT I would have likely thought "I wonder if I can automate this", maybe googled for some tracking product, then forgot about it. With ChatGPT I quickly got a much better understanding of a system that I glancingly came into conscious contact with.
It's not hard to project out that level of improved insight and guess that it will lead to valuable life contributions. In fact, I would say it did in that one example alone.
The urinalysis (which was combined with a blood test) said something like "ketones +3", and if you google "urine ketones +3" you get explanations that don't apply to me (alcohol, vigorous exercise, intentional dieting) or "diabetic ketoacidosis", which google warns you is a serious health condition.
In the follow up with the doctor I asked about the ketones. The doctor said "Oh, you were probably just dehydrated, don't worry about it, you don't have diabetic ketoacidosis" and the conversation moved on and soon concluded. In the moment I was just relieved there was an innocent explanation. But, as I thought about it, shouldn't other results in the blood or urine test indicate dehydration? I asked ChatGPT (and confirmed on Google) and sure enough there were 3 other signals that should have been there if I was dehydrated that were not there.
"What does this mean?" I wondered to ChatGPT. ChatGPT basically told me it was probably nothing, but if I was worried I could do an at home test - which I didn't even know existed (though I could have found through carefully reading the first google result). So I go to Target and get an at home test kit (bottle of test strips), 24 gatorades, and a couple liters of pedialyte to ensure I'm well hydrated.
I start drinking my usual 64 ounces of water a day, plus lots of gatorade and pedialyte and over a couple days I remain at high ketones in urine. Definitely not dehydrated. Consulting with ChatGPT I start telling it everything I'm eating and it points out that I'm just accidentally in a ketogenic diet. ChatGPT suggests some simple carbs for me, I start eating those, and the ketone content of my urine falls off in roughly the exact timeframe that ChatGPT predicted (i.e. it told me if you eat this meal you should see ketones decline in ~4 hours).
Now, in some sense this didn't really matter. If I had simply listened to my doctor's explanation I would've been fine. Wrong, but fine. It wasn't dehydration, it was just accidentally being in a ketogenic diet. But, I take all this as evidence of how ChatGPT now, as it exists, helped me to understand my test results in a way that real doctors weren't able to - partially because ChatGPT exists in a form where I can just ping it with whatever stray thoughts come to mind and it will answer instantly. I'm sure if I could just text my doctor those same thoughts we would've come to the same conclusion.
I believe the smart bidet was an idea some Japanese researchers developed some years ago. Maybe that one was geared to detecting blood in faeces. Whatever, the approach you describe has a huge number of possibilities for alerting us to health problems without even having to think about them on a daily basis. A huge advantage. On the other hand this is a difficult one to implement, bearing in mind the kinetics involved.
I have spent the last couple of days playing with Copilot X Chat, to help me learn Ruby on Rails. I'd have thought that Rails would be something it would be competent with.
My experience has been atrocious. It makes up gems and functions, and the Rails commands it gives are frequently incorrect. Trying to use it to debug issues results in it responding with the same incorrect answer repeatedly, often removing necessary lines.
I only found out about this benefit of iCloud+ a few days ago, thankfully a few days before my prior solution was due to renew for another 2 years at a vastly more expensive rate.
Certainly easy to set up. DNS with CloudFlare and it was able to do it all with just a login confirmation from my side of things.
Which popups are these? The only one I ever received was when I hit the limit on my iCloud account. Certainly not anywhere near as bad as all the adverts in the Start menu on my Windows work machine.
This is exactly what I have been going through for the last year or two. I even changed jobs, finding a role that was supposed to be better: one at a company that would let my skills improve and that I assumed would be better run.
Unfortunately the new company is so full of corporate BS that I'm finding it even harder to get through each day. I genuinely feel like the staff who were hired to 'improve productivity' by implementing Agile company-wide are actually doing everything in their power to slow things down. I've never seen this many unneeded meetings in my calendar, all in the name of 'planning'.
Because the market is tiny. Recruiters can't specialize, and those that do get eaten up by the likes of Datacom or Australian-based providers.
Most recruiters in NZ start from labourer/contracting/HR firms and then move into tech because it's better paid. Whereas in Australia you get people who trained specifically to be a tech recruiter, or migrated to recruitment from tech (usually BA and QA type roles).