@optimism, on: Unfollow old and follow new Coinos account nostr
Crap, I missed that because njump only displays the date on your note, not on the quoted ones.
Yes. See the difference between using it as a tool (this) and as a lazy shortcut where you don't read what it does (the original)?
Correct. The biases are taken out with reinforcement training. This used to be a human check but is now simply another model checking the answers: the bias is currently secondhand, and the bias check itself is also subject to hallucination.
I know how to automate it with AI, but it'd be one or two weeks' work, because you'd want NLP combined with an inverse chatbot and then work out the word-distance math, not a chatbot spitting out bs. I'm not ready to spend that kind of time on the process yet.
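To illustrate the word-distance part, a minimal sketch, using TF-IDF cosine similarity as a stand-in for whatever embedding you'd actually pick; the post titles and query below are made up:

```python
# Rank posts against a topic query by word distance.
# TF-IDF cosine similarity is a stand-in for a proper embedding model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [  # hypothetical post titles
    "Attorney sanctioned after AI hallucinates case law",
    "Unfollow old and follow new Coinos account",
    "Why reinforcement training doesn't remove bias",
]
query = "LLM hallucination consequences"

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(posts + [query])           # last row is the query
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, title in sorted(zip(scores, posts), reverse=True):
    print(f"{score:.2f}  {title}")
```

That gives you a distance ranking you can threshold, instead of a chatbot's free-form opinion.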
Right now, I yolo'd a script to parse text from the index (where I just c&p the index entries I like), then I look up the post ID and throw it all in a spreadsheet. Because of this manual process I get to read/bookmark every SN post I like, not only the AI ones, because I look at the title of every post of the entire week right now. So I get personal added value from doing it manually; besides that, it keeps me sane to have something that's not automated.
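The yolo'd script is roughly this shape; the index-entry format and the item URL pattern here are assumptions, not the exact thing I run:

```python
import csv
import re
import sys

# Assumed index-entry format: one pasted line per entry,
# e.g. "Some post title #1020821"
ENTRY = re.compile(r"^(?P<title>.+?)\s*#(?P<id>\d+)\s*$")

rows = [("id", "title", "url")]
for line in sys.stdin:
    m = ENTRY.match(line.strip())
    if m:
        post_id = m.group("id")
        # Stacker News item URL, for quick lookup from the sheet
        rows.append((post_id, m.group("title"),
                     f"https://stacker.news/items/{post_id}"))

with open("weekly.csv", "w", newline="") as out:
    csv.writer(out).writerows(rows)
```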
What I find intriguing is that the Italians are at the forefront of championing less regulation. That used to be the UK's and smaller, more libertarian countries' role. Keeping the Franco-Germanic love affair in check may be the most important challenge that Europe faces.
The real problem at hand is how much trust people place in the answers they receive when working with an LLM. Maybe the best outcome is that we all get seeded with a very strong distrust of LLM outputs -- at least enough distrust to check our answers once in a while.
I think that the outrage is an important counterweight to the exaggerated claims from all the LLM bosses. They just spent billions on something that is both great (from a big-data aggregation / achievement perspective) and mediocre (from a usability / fit-for-the-advertised-use-cases perspective) at the same time, and they need to reinforce the success story to get even more billions to improve the latter by any means possible.
Because both traditional news and social media are saturated with the billionaires and not with the boring real research, or even the yolo kind that results in "hey, we found something interesting" "research", the world only gets to hear the banter. I'd suggest that the outrage is even too little, because which player has been decimated thus far? None. They all get billions more and, thus far, they spend it on the next model that is still mediocre, because there are no real breakthroughs (also see #1020821).
If more weight were given to what goes wrong, the money might be spent on real improvement, not on more tuning and reiterations with more input. As long as that's not the case and large-scale parasitic chatbot corporations can continue to iterate on subprime results, we'll be stuck with hallucinating, fake-it-till-you-make-it AI that is not fit for purpose.
Keep trying... if it gets it right, someone at some LLM company read your post and trained it on it.
Weird questions get weird results because... those aren't part of reinforcement learning or reasoning training?
Because the attorney admitted that:
- they used AI
- the AI hallucinated
- they didn't check before submission
I'm wondering how much this particular instance points to a bias. It would only be bias if the same judge, in another case, would NOT sanction it - right? Until that happens, all is as it should be, no matter whether you like the flavor of judge.
I am definitely a little tired of hearing how AI programmers will take my job
They won't take yours; they'll take your successor's job, simply because in a decade there will be no experienced coders in their 30s.
So don't fuck up. lol
The focus on addictive products shows their moral compass is off
The more I learn, the more I feel this becomes the fight I need to pick.
AI companies, like social media companies before them, are focused on increasing the number of monthly active users, the average session duration, etc. Those metrics, seemingly inoffensive, lead to the same instrumental goal: making the product maximally engaging.
There are two similarities to FB that I expect will apply to the large chatbot services, in particular OpenAI/Anthropic, for whom this is their main product, but at a much larger scale and to a much larger effect:
- The people involved will become extremely rich off of user addiction
- As a long-term outcome, a significant subset of the users will at some point have regrets, and another set will stay oblivious for a very long time and be harvested cycle after cycle. Two problems will make the impact of the regrets much larger with chatbots:
  - It actually "fixes" something, though at very bad quality, so people become dependent beyond the addiction
  - It has a significantly more detrimental effect on cognitive skills and behavior; nowadays, documentaries and news items about detoxing from social media are a thing and the process is portrayed as hard. But this will be nothing compared to detoxing from chatbots.
I used to like questions/tasks about code and repo architecture, because it's a choice and there is no "best" answer, but there are bad ones: the ones made without thought.
I'm not so sure I would do that in 2025 anymore, because it's now easy to fake a single answer and emulate thoughtfulness without actually thinking. If I were hiring now, I'd fall back to "attitude over everything" and not really test coding skills.