
Did you read the next paragraphs? They literally describe the details. I would quote the parts that respond to your question, but I'd be quoting the entire post.

> This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.


A few startups [1,2] also offer infra for private AI based on confidential computing from Nvidia and Intel/AMD.

1. https://tinfoil.sh 2. https://www.privatemode.ai


We’re into “can’t prove a negative” territory here. Yes, the scheme is explained in detail, yes it conforms to cryptographic norms, yes real people work on it and some of us know some of them...

...but how can FB prove it isn't all a smokescreen, and that requests aren't printed out and faxed to evil people? They can't, of course, and some people like to demand proof of the negative as a way of implying wrongdoing in a “just asking questions” manner.


Except the software running in TEEs, including the source code, is all verifiable at runtime, via a third party not controlled by Meta. And if you disagree, claim a bug bounty and become famous for exposing Meta as frauds. Or, more likely, stick with your reddit-tier zealotry and clown posting.

Why do you think that the high refresh rate is the main reason for the high price, instead of resolution or screen size?

This seems like a reasonable assumption to me, but I don't mean to insist I'm right.

Slow monochrome eInk panels have been around for two decades, mostly built into e-book readers, phones (like the Motorola F3), and niche devices like supermarket price tags, rather than computer monitors attachable with common connectors.

Okay, perhaps it's not the speed that makes them expensive, yet manufacturers and researchers mostly brag about making them faster (and more colorful) rather than making them cheaper (which is what I would prefer).


There's still the confounding variable of size and resolution (eg supermarket price tags).

Or patents.

It seems like the company with the e-ink patents might be like Luxottica controlling the eyeglass market.

https://news.ycombinator.com/item?id=26143407


Do people find NotebookLM useful? For my use case of converting papers into podcasts, the explanations are too general (missing the important parts of the paper) and contain too much fluff.

I suspect that changing the underlying model to Gemini 2.5 Pro would produce better transcripts, but right now there's no way of determining what model is being used.


I found this prompt online, and tweaking it for audio overviews works extremely well for me.

https://open.substack.com/pub/lawsen/p/notebooklm-podcasts-b...

Generate a deep technical briefing, not a light podcast overview. Focus on technical accuracy, comprehensive analysis, and extended duration, tailored for an expert listener. The listener has a technical background comparable to a research scientist on an AGI safety team at a leading AI lab. Use precise terminology found in the source materials. Aim for significant length and depth. Aspire to the comprehensiveness and duration of podcasts like 80,000 Hours, running for 2 hours or more.


Where do you put this prompt?

In the Audio Overview, click on Customize and enter the prompt then generate the podcast.

Do I need to be on a paid version or Pro? I don't see Customize in Audio Overview in the notebook that I just tested.

Edit: I actually see Customize on a notebook where I hadn't already created a podcast. But on a notebook where I had already created one, I can't find a way to Customize. I guess I just need to create a new notebook with the same source material.


Delete the generated audio, then generate a new one using a customized prompt.

I find NotebookLM really useful as a book-reading companion: I simply upload the book I want to read and ask questions about it, like:

- List the characters in chapter [x] and add a small description about each one.
- What's [x] device used for?
- What happened in chapter [x]?

It works very well, without hallucinations, and it references sources for all the answers.


Nice little trick there, thank you.

I've found it useful for processing the documentation for our data system. The vendor provides the doc in something around 60 PDF files, and a lot of the information is poorly organized within the PDFs.

I can say, "Hey, NotebookLM, explain the difference between feature X and feature Y to me," or, "How do I configure Z to work the way we want?" And while the answers still kinda suck because the documentation is pretty shitty, it's way faster than digging through the PDFs. And it cites the PDFs so I can (with some trouble) find the actual documentation in the PDF if I need it.

The worst part of it is that it only accepts 50 PDFs at once.

Honestly, though, the best use for it I've seen was when my GM added the PDF rulebooks to our TTRPG to NotebookLM. We were then able to ask NotebookLM rules questions, and it would answer us pretty well. That's what it's really great for.

I don't care about the audio features at all. The first thing I do is close the audio pane.


It’s useful for getting summaries of long YouTube videos - I’ve found it semi-helpful for improving my DaVinci Resolve skills.

That said, Google is screwing the pooch as usual by trying to make it another walled garden. Slap an API on NotebookLM already! The market research has already been done - there's even an unofficial API: https://www.reddit.com/r/notebooklm/comments/1eti9iz/api_for...


For YouTube videos it's hard to beat: (1) copy the transcript to the clipboard (from e.g. Tactiq), (2) paste it into an LLM chat and ask for a summary.

Full disclosure: I work for Google; opinions are my own, etc.

The LLM built into YouTube is one of the few LLM chatbots bolted onto existing apps that I actually find useful. Not just for summaries but questions like "what is the timestamp in this 2 hour video where they talk about _____".


"LLM built into YouTube…" The what now? This is the first I've heard of this.

I thought it was for everyone, my bad. Turns out, except for some educational videos, it's just for Premium subscribers with certain ___location/language combos (you can probably guess which...).

https://support.google.com/youtube/answer/14110396?hl=en


I suspect he doesn't realize he's talking about some internal tool that Google hasn't released to the public.

> "what is the timestamp in this 2 hour video where they talk about _____"

Wow, I had given up on searching for specific timestamps in long videos. Never again.

Thank you!


Thanks, but it seems that because I am outside the USA I’m not quite “premium” enough.

how do we access this?

https://support.google.com/youtube/answer/14110396

If you're not already a Premium subscriber, you may want to stick with other tools. I didn't mean to advertise YouTube Premium :)


It's hard for every AI product to beat that workflow lol. It works well for basically everything.

Or just paste the video URL into Gemini and ask for a summary; no need to search for any transcript.

I've found it very useful for providing accessible introductions to technical papers that are otherwise difficult for me to get started with understanding.

If I encounter a paper that is too difficult for me to digest just by reading, then I take a step back, feed it into NotebookLM, and listen to that summary. I've only done this a few times, but so far it hasn't failed to give me the overview and momentum that I need to take another stab and successfully dig into the paper and digest it on my own.

As others have noted, it can gloss over certain details and miss important points from time to time, but overall it does a fantastic job of giving me an introduction to a complex topic and making it far less intimidating / overwhelming.


You can enter a prompt from the "customize" dialog. Have you tried asking for more specifics, telling it to assume the audience is an expert on the subject, and to cut down on the fluff?

I've run them on my own papers and, while sometimes they are accurate, they are sometimes very very wrong and misrepresent things. And I don't mean in nuanced or unimportant ways.

The TTS is amazing, but the audio overviews are frankly useless for me.


I used it for a bootcamp class to study for an exam. I recorded about 50 hours of lecture and Q&A and was able to generate good Anki cards from it. What was awesome was that I could ask “make a list of all of the topics the instructor thought would be questions on the exam” and it did a great job at that.

The podcast thing is more a novelty to me.


What’s interesting is that the podcast creation thing is just one feature of NotebookLM. But everyone thinks that’s what NotebookLM is.

It seems to be the only unique feature though. Any LLM can summarize things for me or make bullet points.

If you have a corpus of documents you are working with (say thousands of pages of related standards docs), Notebook can be handy for doing targeted summaries of aspects of the docs with pointers back into the actual docs to the relevant source material. That's something I end up needing a lot (I've never used the podcast feature) and so it feels very differentiated to me...

The one other unique thing I use from them is the interactive mind maps. Like a table of contents on steroids

I use it for loading up source materials and notes for a DnD campaign I run. Then I ask it questions when I need off the cuff answers, instead of researching.

It's also good for when I can't think of anything (like a background NPC's name and backstory).


I haven't really found it interesting for technical content but do think it's somewhat useful for hashing out more subjective and/or personal things like goals, difficulties, conflict, etc.

At Shopify I was working as an engineer in financial services, and certain changes required approval by our banking partners. I was able to upload our credit policies to NotebookLM and easily ask questions without having to ping the legal team in Slack. I'm about as bearish as they come as far as AI tools go, and NotebookLM was one of the few tools that felt useful to me straight away.

The fact that this argument is made by the editor-in-chief of The Verge really shows that logic is not a priority for their publication.


My take from the article isn’t that this is really expected from Amazon; it’s calling out the hypocrisy of what Jeff has done at the Post.


It was mentioned briefly. https://ai.meta.com/sam3

Sapling is actively developed, not "dead after 3 months": https://github.com/facebook/sapling/commits/main/

Have not tried building Buck2 (no personal use for it), but it's also actively developed: https://github.com/facebook/buck2/commits/main/


Sorry, I didn't mean that it's literally dead, but it really hasn't gotten as much feature support. Things like LFS support got deprioritized just because the internal team asking for it got a different feature instead.

Both are EXTREMELY active but only for the needs of Meta and not for the community.

Adoption outside of Meta is nearly non-existent because of this.

Look at something like Jujutsu instead of Sapling and you can see a lot more community support, and a lot more focus on features that are needed by everyone (still no LFS support, but that wasn't because Google didn't need it).

I guess I don't consider a large number of commits to be actively supporting the community. Community use comes second, and the open sourcing is just a one-time boost to recruiting PR.


It's literally working? What did you expect?

Maybe it's working but it's not useful.

What would you expect "useful" to be if your codebase is basically incompatible with type checking?

I expected that I would be able to run the check command and it would just work. Upon reading the docs, I found that this tool recommends incremental adoption, and after using `--suppress-errors`, `# pyrefly: ignore` comments are all over my codebase.
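To make it concrete, here's a toy sketch of what the codebase ends up looking like (illustrative code, not from my project; the flag and comment syntax are the ones mentioned above). Running `pyrefly check --suppress-errors` inserts a suppression comment at each error site instead of leaving the check failing:

    class Config:
        def __init__(self) -> None:
            self.values: dict = {}

    cfg = Config()
    # Works fine at runtime (plain Python attribute assignment), but a strict
    # checker flags the undeclared attribute, so the tool drops in a marker:
    cfg.debug = True  # pyrefly: ignore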

I know I shouldn't get into this thread, but I'm extremely curious: what did you expect should have happened? I mean, literally, what do you mean by "just work", what work did you expect a type checker to perform if not to show you errors?

I expected a reasonable number of reasonable errors, not 60k+ errors, most of which were import-error, even though the code runs fine.

edit: most errors weren't actually import-error, I just misunderstood --search-path


It's the fault of the tool that the codebase had lots of errors?

I would say it's the fault of the ecosystem. Hopefully it will get better as libraries adopt type hints.

That's incremental adoption for adding type hints, not for adopting this particular tool in an already-type-hinted codebase.

…how else would it work?

What does "just work" mean? You are comically obtuse.

The performance is basically so bad it's unusable, though; segmentation models and object detection models are still the best, for now.


LLMs could not use tools on day one.


Looks like a Claude Code clone.


But open source, like Aider.

