Hacker News | bshacklett's comments

If they bother. The vast majority of appointments I’ve had, in recent memory, are the provider typing a bit on their laptop, then sending me to someone else.

I've noticed the last few times I've gone, they've just copied and pasted things like my weight and height from previous appointments. My dog gets better treatment at the vet.

If you don't like your doctor, go to someone else

Indeed, but it's a tiring and expensive game when it takes 4-5 tries with experienced specialists to get an actual diagnosis.

One of the more exciting AI use cases is that it should be roughly competent to handle the conversational parts of diagnosis; it will have read all the studies, so it should be possible to spend an hour at home talking to an AI and then turn up at the doctor with a checklist of diagnostic work you want them to try.

A shorter amount of expensive time with a consultant is more powerful if there is a solid reference to play with for longer beforehand.


AI has a long way to go before it can serve as a trustworthy middleman between research papers and patients.

For instance, even WebMD might waste more time in doctors' offices than it saves, and that's a factual, hallucination-free source, written specifically to give lay people understandable information.


This study found that an LLM outperformed doctors "on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus."

https://jamanetwork.com/journals/jamanetworkopen/fullarticle...


This study is about doctors using an LLM, and it doesn't seem like it made them significantly more accurate than doctors not using an LLM.

If you look in the discussion section you'll find that wasn't exactly what the study ended up with. I'm looking at the paragraph starting:

> An unexpected secondary result was that the LLM alone performed significantly better than both groups of humans, similar to a recent study with different LLM technology.

They suspected that the clinicians were not prompting it right, since the LLM on its own was observed to outperform the LLM with skilled operators.


Exactly - if even the doctors/clinicians are not "prompting it right," then what are the odds that the layperson is going to get it to behave and give accurate diagnoses, rather than just confirm their pre-existing biases?

Ah right, very interesting, thank you.

In some countries like Canada you basically don’t have an option.

Really? They just tell me it's stress, then prescribe me Chinese medicine just in case and send me away.

Damn, they just tell me I’m getting old and wish me luck.

That’s a big breaking change around a brand new feature. I’m sure it could be done well, but it gives me the shivers.

You add a new API that takes templates only, leaving the existing API in place. Some releases later, you deprecate the string API. Some releases later still, with clear advance warning of when it's coming, you actually remove the deprecated API. "It's a big breaking change around a brand new feature"? Right, so you don't break anything around a brand new feature; it's not like this kind of transition is a new concept.
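The staged transition described above can be sketched in a few lines of Python. Everything here is hypothetical: the execute/execute_template names are made up, and the Template class is a stand-in for a real structured template type (e.g. PEP 750 t-strings).

```python
import warnings


class Template:
    """Stand-in for a structured template object (e.g. a PEP 750 t-string).

    Hypothetical, for illustration only.
    """

    def __init__(self, text, **params):
        self.text = text
        self.params = params


def execute_template(tmpl):
    # Step 1: the new API accepts only structured templates.
    return f"executed: {tmpl.text} with {tmpl.params}"


def execute(query):
    # Step 2: the old string API keeps working, but is marked deprecated.
    # Step 3 (a later release, announced in advance): delete this function.
    warnings.warn(
        "execute(str) is deprecated; use execute_template()",
        DeprecationWarning,
        stacklevel=2,
    )
    return execute_template(Template(query))
```

Callers of the old string API keep working through the whole deprecation window, but see a warning pointing at the replacement, so nothing around the new feature ever breaks abruptly.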

Much better would be execute_template(t"...").

> if the rendering engine and network fetching were easily separable - and you could insert your own steps into that pipeline, you could do all sorts of neat stuff.

Can’t that be done relatively easily with https://mitmproxy.org/?
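For what it's worth, mitmproxy does let you insert your own step between fetching and rendering via a small addon script. This is only a toy sketch under assumptions (the file name and the script-stripping step are made up for illustration; `get_text`/`set_text` are real mitmproxy response methods):

```python
# Minimal mitmproxy addon sketch: save as e.g. strip_scripts.py and run
# with `mitmproxy -s strip_scripts.py`. The regex below is a toy, not a
# real HTML sanitizer.
import re


def response(flow):
    """Called by mitmproxy once for each completed server response."""
    ctype = flow.response.headers.get("content-type", "")
    if "text/html" in ctype:
        # Your custom pipeline step goes here; as an example, drop inline
        # <script> blocks before the page ever reaches the browser.
        html = flow.response.get_text()
        flow.response.set_text(re.sub(r"(?s)<script.*?</script>", "", html))
```

Point the browser at the proxy and every HTML response passes through `response()` before rendering, which gets you at least part of the "insert your own steps into the pipeline" idea.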


Heh, I know I'm late to the party here - but I actually separate the rendering and network in BrowserBox by running the browser on a remote server and the interface is just a web app on your device.

Doubly relevant to eInk mode, as we'll soon be releasing a text-mode browser which has a very similar goal: make the web less noisy and easier to focus on.

We try that by putting the web in the terminal. Technically it's similar to how the Cloudflare/S2 remote browser works in that we take rendering instructions from the remote browser and display them in your terminal, except instead of rendering everything from the browser engine, we only render the post-layout text boxes! To do this you need to apply a secondary layout pass to account for overlaps due to monospace font sizes, and to scale to terminal dimensions.
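The "secondary layout pass" idea can be illustrated with a toy sketch: scale each post-layout text box from pixel coordinates to a terminal cell grid, then nudge a box down a row whenever it would overwrite cells that are already taken. The function and its numbers are hypothetical, not BrowserBox's actual implementation.

```python
def to_cells(boxes, page_w, page_h, cols=80, rows=24):
    """Map pixel-space text boxes onto a terminal cell grid.

    boxes: list of (x, y, text) tuples in page pixels.
    Returns a list of (col, row, text) tuples with overlaps resolved
    by pushing later boxes down one row at a time.
    """
    placed = []
    occupied = set()
    # Process boxes top-to-bottom, left-to-right, like a layout pass.
    for x, y, text in sorted(boxes, key=lambda b: (b[1], b[0])):
        col = int(x / page_w * cols)
        row = int(y / page_h * rows)
        # Overlap resolution: monospace cells are coarse, so distinct
        # pixel boxes can collide; move down until the run of cells is free.
        while any((col + i, row) in occupied for i in range(len(text))):
            row += 1
        for i in range(len(text)):
            occupied.add((col + i, row))
        placed.append((col, row, text))
    return placed
```

A real implementation would also have to handle wrapping, z-order, and boxes wider than the terminal, but this shows why a second pass is needed at all: the pixel layout loses information when quantized to character cells.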

That's why I'm thinking that people interested in eInk mode might care about this browser too, a quick demo of which is here: https://youtu.be/Xmt6j-nfc7E

If you'd like to be among the first to test it out when ready please add your name / email to this list: https://tally.so/r/wbzYzo


It’s akin to fashion designers sending models out in burlap sacks.


What's stranger is that we've generally decided that adults aren't "allowed" to, or supposed to, enjoy fun things.


How does this cut off the grassroots internet?


It makes end-to-end responsibility more cumbersome. There were days when people just served MS FrontPage output from their home server.


Many folks switched to Let's Encrypt ages ago. Certificates are way easier to acquire now than they were in "FrontPage" days. I remember paying hundreds of dollars and sending a fax for "verification."


Do they offer any long-term commitment for the API, though? I remember that they blocked old cert-manager clients that were hammering their servers. You can't automate the upgrade (as it could be unsafe, like SolarWinds), and they didn't give a one-year window to do it manually either.


You do have a point. I still feel that upgrading your client is less work than manual cert renewals.


I agree, but I think the pendulum just went too far on the tradeoff scale.


I've done the work to set up, by hand, a self-hosted Linux server that uses an auto-renewing Let's Encrypt cert and it was totally fine. Just read some documentation.


> you look like a gargoyle

I'm glad to see that at least someone here reads classic literature.

More seriously, there is something truly disturbing about someone's eyes not being visible. This definitely crosses a social boundary or two.


The end-user relationship with Adobe feels a lot more like a mob protection racket than free market capitalism.


That tracks. Ruby followed in the footsteps of Perl, which had string manipulation as a main priority for the language.


Does Perl have lazy string processing? And I'm not talking about a coderef hack.


Major props to the author for actually caring about this. JS seems like an assumption more than ever before. It’s great to see that people understand that some of us don’t want it.

Very loosely related: why do so many websites require JavaScript to handle links? I'm so tired of trying to figure out how to open a given link in a new tab. :-(


and scrolling. and history.

I find myself afraid of clicking any link, anchor, or button for fear of losing my current position, word wrapping, or the current form inputs.

It's like I'm being Pavlov-trained to be afraid of interacting with my computer.


Mystery meat interaction is one of the worst parts of the modern web, and all too often it’s found in places where there is little to no user benefit to be had for the tradeoff and a plain old HTML link or button would’ve sufficed. I wish more devs would give thought to this kind of thing.

