Did you read the next paragraphs? It literally describes the details. I would quote the parts that respond to your question, but I would be quoting the entire post.
> This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.
We’re into “can’t prove a negative” territory here. Yes, the scheme is explained in detail, yes it conforms to cryptographic norms, yes real people work on it and some of us know some of them…
…but how can FB prove it isn’t all a smokescreen, and requests are printed out and faxed to evil people? They can’t, of course, and some people like to demand proof of the negative as a way of implying wrongdoing in a “just asking questions” manner.
Except the software running in TEEs, including the source code, is all verifiable at runtime, via a third party not controlled by Meta. And if you disagree, claim a bug bounty and become famous for exposing Meta as frauds. Or, more likely, stick with your reddit-tier zealotry and clown posting.
This seems a reasonable assumption to me but I don't mean to insist I'm right.
Slow monochrome eInk panels have been around for two decades, mostly built into e-book readers, phones (like the Motorola F3), and niche devices like supermarket price tags, rather than computer monitors you could attach with common connectors.
Okay, perhaps it's not the speed which makes them expensive, yet manufacturers and researchers mostly brag about making them faster (and more colorful) rather than making them cheaper (which is what I'd prefer).
Do people find NotebookLM useful? For my use case of converting papers into podcasts, the explanations are too general (which misses the important parts of the paper) and contain too much fluff.
I suspect that changing the underlying model to Gemini 2.5 Pro would produce better transcripts, but right now there's no way of determining what model is being used.
Generate a deep technical briefing, not a light podcast overview. Focus on technical accuracy, comprehensive analysis, and extended duration, tailored for an expert listener. The listener has a technical background comparable to a research scientist on an AGI safety team at a leading AI lab. Use precise terminology found in the source materials. Aim for significant length and depth. Aspire to the comprehensiveness and duration of podcasts like 80,000 Hours, running for 2 hours or more.
Do I need to be on a paid version or Pro? I don't see Customize in Audio Overview in the notebook that I just tested.
Edit: I actually see Customize on a notebook where I hadn't already created a podcast. But on a notebook where I had already created one, I can't find a way to Customize. I guess I just need to create a new notebook with the same source material.
I've found it useful for processing the documentation for our data system. The vendor provides the doc in something around 60 PDF files, and a lot of the information is poorly organized within the PDFs.
I can say, "Hey, NotebookLM, explain the difference between feature X and feature Y to me," or, "How do I configure Z to work the way we want?" And while the answers still kinda suck because the documentation is pretty shitty, it's way faster than digging through the PDFs. And it cites the PDFs so I can (with some trouble) find the actual documentation in the PDF if I need it.
The worst part of it is that it only accepts 50 PDFs at once.
Honestly, though, the best use for it I've seen was when my GM added the PDF rulebooks to our TTRPG to NotebookLM. We were then able to ask NotebookLM rules questions, and it would answer us pretty well. That's what it's really great for.
I don't care about the audio features at all. The first thing I do is close the audio pane.
It’s useful for getting summaries of long YouTube videos - I’ve found it semi-helpful for improving my DaVinci Resolve skills.
That said, Google is screwing the pooch as usual by trying to make it another walled garden. Slap an API on NotebookLM already! The market research has already been done - there’s even an unofficial API https://www.reddit.com/r/notebooklm/comments/1eti9iz/api_for...
Full disclosure: I work for Google; opinions are my own, etc. etc.
The LLM built into YouTube is one of the few LLM chatbots bolted onto existing apps that I actually find useful. Not just for summaries but questions like "what is the timestamp in this 2 hour video where they talk about _____".
I thought it was for everyone, my bad. Turns out, except for some educational videos, it's just for premium subscribers with certain ___location/language combos (you can probably guess which...)
I've found it very useful for providing accessible introductions to technical papers that are otherwise difficult for me to get started with understanding.
If I encounter a paper that is too difficult for me to digest just by reading, then I take a step back, feed it into NotebookLM, and listen to that summary. I've only done this a few times, but so far it hasn't failed to give me the overview and momentum that I need to take another stab and successfully dig into the paper and digest it on my own.
As others have noted, it can gloss over certain details and miss important points from time to time, but overall it does a fantastic job of giving me an introduction to a complex topic and making it far less intimidating / overwhelming.
You can enter a prompt from the "customize" dialog. Have you tried asking for more specifics, telling it to assume the audience is an expert on the subject, and to cut down on the fluff?
I've run them on my own papers and, while sometimes they are accurate, they are sometimes very very wrong and misrepresent things. And I don't mean in nuanced or unimportant ways.
The TTS is amazing, but the audio overviews are frankly useless for me.
I used it for a bootcamp class to study for an exam. I recorded about 50 hours of lecture and Q&A, and was able to generate good Anki cards from it. What was awesome was that I could ask “make a list of all of the topics the instructor thought would be questions on the exam” and it did a great job at that.
If you have a corpus of documents you are working with (say thousands of pages of related standards docs), Notebook can be handy for doing targeted summaries of aspects of the docs with pointers back into the actual docs to the relevant source material. That's something I end up needing a lot (I've never used the podcast feature) and so it feels very differentiated to me...
I use it for loading up source materials and notes for a DnD campaign I run. Then I ask it questions when I need off the cuff answers, instead of researching.
It's also good for when I can't think of anything (like a background NPC's name and backstory).
I haven't really found it interesting for technical content but do think it's somewhat useful for hashing out more subjective and/or personal things like goals, difficulties, conflict, etc.
At Shopify I worked as an engineer in financial services, and certain changes required approval by our banking partners. I was able to upload our credit policies to NotebookLM and easily ask questions without having to ping the legal team in Slack. I'm about as bearish as they come as far as AI tools go, and NotebookLM was one of the few tools that felt useful to me straight away.
Sorry, I didn't mean it's dead, but it really hasn't got as much feature support. Things like LFS support got deprioritized just because the internal team asking for it got a different feature.
Both are EXTREMELY active but only for the needs of Meta and not for the community.
Adoption outside of Meta is nearly non-existent because of this.
Look at something like Jujutsu instead of Sapling and you can see a lot more community support, and a lot more focus on features that are needed by everyone (still no LFS support, but not because Google didn't need it).
I guess I don't consider a larger number of commits as actively supporting the community. Community use comes second, and the open-sourcing is just a one-time boost to recruiting PR.
I expected that I would be able to run the check command and it would just work. On reading the docs, I found this tool recommends incremental adoption, and after using `--suppress-errors`, `# pyrefly: ignore` is all over my codebase.
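To illustrate what that incremental adoption looks like in practice: a sketch of the kind of code the suppression pass leaves behind. The function names and the specific error here are hypothetical, not from pyrefly's docs; the point is that `--suppress-errors` inserts a `# pyrefly: ignore` comment at each line the checker flags, so the code runs and checks cleanly while the underlying type issues remain.

```python
# Hypothetical module after `pyrefly check --suppress-errors`:
# each flagged line gets an ignore comment instead of a fix.

def parse_port(raw: str) -> int:
    """Parse a port number from a string."""
    return int(raw)

config = {"port": "8080"}

# dict.get() returns Optional[str], but parse_port expects str,
# so a type checker would flag this call; the comment silences it.
# pyrefly: ignore
port = parse_port(config.get("port"))

print(port)
```

The trade-off is that every `ignore` is a deferred fix: the checker passes today, but each comment marks a spot where the types are still wrong and has to be revisited later.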
I know I shouldn't get into this thread, but I'm extremely curious: what did you expect should have happened? I mean, literally, what do you mean by "just work", what work did you expect a type checker to perform if not to show you errors?