@optimism
480,697 sats stacked
stacking since: #879734 \ longest cowboy streak: 101 \ npub13wvyk...hhes6rk47y
0 sats \ 0 replies \ @optimism 13m \ parent \ on: Chatbots aren’t telling you their secrets AI
Yeah I read that, and it's so wrong it's hilarious. The LLM is served by a runtime; it is not self-serving. There is no shut-off button inside the language model.
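To make that concrete, here's a toy sketch (assumed, not any particular runtime's code) of a serving loop: the model only maps tokens to a next token, while the loop around it owns every stop condition.

```python
# Hypothetical sketch: the "shut off" logic (token budget, stop token)
# lives in the serving loop, not in the model weights.
def serve(model, prompt_tokens, max_tokens=256, eos_token=0):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):        # the runtime enforces the budget
        next_token = model(tokens)     # the model only predicts one token
        if next_token == eos_token:    # the runtime checks the stop condition
            break
        tokens.append(next_token)
    return tokens

toy_model = lambda tokens: (tokens[-1] + 1) % 5  # stand-in for a real LLM
print(serve(toy_model, [1, 2]))                  # stops once it emits 0
```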
The researcher should do some research before they make wild claims.
intentionally
What intent? If it's intentional, it's been programmed, so it's a human that is intentionally doing it. A database has no intent.
I'd say governments are at a disadvantage, because they're slow to adapt. Corporate beats government, always.
However, let's assume that "agents" become real. All one would need to do is gain access to a GPU farm and run an agent botnet. I'd even go so far as to argue that because the current tech is inefficient and expensive, it's more likely that there will be "resistance" that does this. Opportunity/wealth inequality tends to breed activism, and in the modern age, hacktivism.
The move-fast-and-break-things mentality that is broadly adopted throughout the AI industry will help achieve this for the coming era of activism, by (a) producing really poorly designed solutions - basically anything in AI right now - and (b) not paying much attention to actual security.
Cracking down never helps, and embracing the scams is dangerous.
What I'd want to do in their place is shape the use of LLMs - not forbid it, not encourage slop. I think it is important to teach non-end-user topics: how they work, training, finetuning, modifying them.
If students graduate only to digress into having a relationship with Elon's virtual horny girlfriend (#1042803), or asking "grok is this true", or whatever other dumb shit normiespace does at that time, we have a real problem, because it means they have not been educated on what LLMs are.
That's what I mean. They want the installed base so that they can capture everyone's browsing data.
since the Government is making you get rid of it anyway.
This is actually a really good point; it might explain why Google is reducing effort on AOSP and Chromium after reorganizing them into a distinct business unit last year.
This is awesome; maybe I can get rid of my script that slices audio into 1-minute chunks and feeds them to whisper.
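For reference, the kind of script being retired, as a rough sketch; the file names, chunk length, and "base" model size are assumptions:

```python
# Rough sketch of the chunk-and-transcribe workflow: slice audio into
# 1-minute pieces with ffmpeg, then run each piece through openai-whisper.
import glob
import subprocess
import whisper  # pip install openai-whisper

def transcribe_in_chunks(path: str) -> str:
    # Split into 60-second segments without re-encoding.
    subprocess.run(
        ["ffmpeg", "-i", path, "-f", "segment",
         "-segment_time", "60", "-c", "copy", "chunk%03d.mp3"],
        check=True,
    )
    model = whisper.load_model("base")
    # Transcribe the chunks in order and stitch the text back together.
    return " ".join(
        model.transcribe(chunk)["text"].strip()
        for chunk in sorted(glob.glob("chunk*.mp3"))
    )

print(transcribe_in_chunks("recording.mp3"))
```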
Sama redefines "intelligence" as "storing and querying knowledge", because that's the product he has. That's a race humans already lost to the internet when it gained traction some two decades ago, and the internet is the source of his product.
I'd argue that kids who grow up with these tools available from day 1 will be able to be much smarter than us.
Since LLMs are deterministic (the model is a fixed set of weights that doesn't get updated), some randomizers are involved to keep chatbots from repeating themselves and to make them more "human".
So, for example, it tracks how often it has said "yes", and if that hits some threshold it won't say "yes" again. To make it even more "human", all these thresholds are dynamic, and the window in which they're evaluated is often dynamic too.
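A toy illustration of that kind of counter, in the shape of a frequency penalty (the numbers and the penalty value are made up):

```python
# Toy frequency penalty: tokens already used in the recent window get their
# logits pushed down, making repeats less likely. All values invented.
from collections import Counter

def penalize(logits: dict, window: list, penalty: float = 0.8) -> dict:
    counts = Counter(window)
    return {tok: score - penalty * counts[tok] for tok, score in logits.items()}

logits = {"yes": 3.0, "no": 2.5, "maybe": 2.0}
window = ["yes", "yes", "yes"]   # said "yes" three times recently
print(penalize(logits, window))  # "yes" drops from 3.0 to ~0.6
```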
Most of this is controlled by temperature, which "globally" scales how much randomness is used. Lower values allow for less randomness. You may want to play around with this (I'm quite sure I've seen it in Venice's chat settings).
Depending on the model used, there are recommended values. If those don't work for your use case because of too many hallucinations, try lowering them. E.g. if you have 0.5 now, try 0.45 or 0.4.
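A quick sketch of what temperature does to the sampling distribution (toy vocabulary and logits, not any particular model's):

```python
# Toy temperature sampling: dividing logits by the temperature before the
# softmax sharpens (low T, near-greedy) or flattens (high T) the distribution.
import numpy as np

def sample(logits, temperature=0.5):
    scaled = np.array(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.0, 0.5]
print(vocab[sample(logits, temperature=0.4)])  # almost always "yes"
print(vocab[sample(logits, temperature=1.5)])  # noticeably more varied
```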
Check the Core code for MAX_MONEY comparisons; that was a lesson learned from, iirc, the integer overflow bug in 2010 (before my time).
However, this only protects L1. Someone can perhaps inflate paper bitcoin and probably get away with it. I remember something about most of the wrapped BTC on various chains being tokens of tokens of tokens; the further a paper bitcoin is from L1, the higher the chance for bugs.
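The C++ check in Core is MoneyRange() (in consensus/amount.h, iirc); in rough Python terms, the invariant it enforces looks like this:

```python
# Rough Python rendering of Bitcoin Core's amount sanity check: every
# amount must sit in [0, MAX_MONEY], bounding overflow-style inflation on L1.
COIN = 100_000_000                 # satoshis per bitcoin
MAX_MONEY = 21_000_000 * COIN

def money_range(value: int) -> bool:
    return 0 <= value <= MAX_MONEY

print(money_range(50 * COIN))  # True: a normal early block subsidy
print(money_range(2**63))      # False: an overflow-scale value like 2010's
```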