50 sats \ 0 replies \ @optimism 13h \ parent \ on: NotebookLM Workshop AI
I think most people do not verify, but "trust". When you're working as a developer on Bitcoin systems, or really on any critical codebase, red teaming something to oblivion is the default mode. Think SavageLinus(tm) shouting on the kernel mailing list: "WE DO NOT BREAK USERSPACE!" Which in normie talk means: don't eff up.
Now, if you shouldn't eff up, it means you must view every change from the perspective of the enemy (this is what red teaming is). And if you're changing your cognitive tooling by introducing something like NotebookLM, this is super important: the tool is there to help improve your output. Worse-quality cognition is not a good outcome.
In this case it was about a particular Bitcoin soft fork I wrote a piece on in the past. The output had information (and emotional context!) that was more recent than my piece, and it came back in both the "podcast" and when querying. That felt like contamination from the training data to me, and thus I wrote it off.
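If you want to sanity-check that kind of leakage yourself, here's a toy sketch (not what I actually ran; the filenames and the 5-letter / 3-word thresholds are made-up placeholders): dump your source piece and the NotebookLM output to plain text, then flag output sentences built mostly from vocabulary the source never used. Anything "novel" probably came from the model's training data, not your notes.

```python
import re

# Toy grounding check: flag output sentences whose content words
# never appear in the source material you actually fed the tool.
# Filenames and thresholds are hypothetical placeholders.

def content_words(text: str) -> set[str]:
    # crude tokenizer: lowercase alphabetic words of 5+ characters
    return set(re.findall(r"[a-z]{5,}", text.lower()))

source = open("my_softfork_piece.txt").read()   # what you gave the tool
output = open("notebooklm_output.txt").read()   # what it gave back

known = content_words(source)
for sentence in re.split(r"(?<=[.!?])\s+", output):
    novel = content_words(sentence) - known
    # a sentence dominated by words the source never used is suspect:
    # it likely leaked in from training data, not from your notes
    if len(novel) > 3:
        print("SUSPECT:", sentence.strip(), "| novel terms:", sorted(novel)[:5])
```

Crude, obviously, but it's the same red-team reflex applied to the tool's output instead of to code.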