
In this race, there is no victory. They will have the weapon, and you will have one that cannot fight them or defend you from them. The best way to win is not to use it and to encourage people not to use it by showing them how ridiculous it is. Because if they don't see how ridiculous it is, they will pay with their own freedom. This is already happening.
AI is just tech. People were worried about fire, trains, electricity, bitcoin... and now AI. It will be widely adopted and seamlessly used once we feel comfortable doing so, in the same way most of us today carry a phone in our pocket or use a car instead of a horse.
reply
Unlike all of those you mentioned, with AI you hand over precise information about yourself to people who don't want you to be free. You give away your way of thinking, your habits, your data, your worries and weaknesses; some even give away how they keep their money, like bitcoin and property. This is ammunition for dictators and corporations who want to guide slaves into a single way of thinking and remove from society those they consider dangerous.
reply
Haven't computers and big tech been doing the same for decades now? Sucking up information and driving viral ads at people, aiming to keep them sliding infinite scrolls for days?
I simply see AI as the exponential expression of this evil corporate behavior. Yes, we could have started earlier, removing the TV from our homes and putting ad blockers on our computers. Most people ignore all this and are now trapped, using tech like heroin addicts hooked on a plug that feels necessary to stay part of society.
reply
21 sats \ 1 reply \ @optimism 20h
Haven't computers and big tech been doing the same for decades now? Sucking up information and driving viral ads at people, aiming to keep them sliding infinite scrolls for days?
Interesting. Has it truly been that for you? For me it's been more like a Swiss army knife or multitool. Just have to be thoughtful about what you use.
Most people ignore all this and are now trapped, using tech like heroin addicts hooked on a plug that feels necessary to stay part of society.
How do we fix that? How do we empower people? What if they don't wanna?
reply
Unfortunately, and I say this with regret, not everyone wants this freedom or will have it. When the normal thing is not to give a damn about your privacy, sharing your entire way of thinking and acting with completely useless technology will be the normal thing too. What you can do, you do for yourself and for those you care about. Social networks and anything decentralized are good, but nothing replaces acquired knowledge, whether it comes online, through books, guides, articles and courses, or through formal education. Knowledge and human connections ennoble us.
PS: I'm not directing this at you, @optimism; I'm saying it because I know people will read your question and then my answer. It's a joint construction.
reply
The dystopian technologies of fiction don't seem so fictional nowadays, and they no longer have that obvious air of technology about them. They're blending into normality like a symbiosis.
Haven't computers and big tech been doing the same for decades now?
You made a good point: big tech was doing this long before AI. It's just that now they have the help of something that captures not only attention but also trust, the core.
reply
So what's the way out then? Letting it pass?
reply
As if it's just a trend? Yes, sure, in the future there will be better tech that we can't even imagine today... let it pass. Using it is optional anyway.
reply
I currently just treat it as an advanced database engine that has indexed the internet, with an extrapolation function. I'm kind of unhappy with the pre-applied tuning, but at the same time I'm unwilling to invest time and resources into retraining research right now, so I just test things.
The use cases I run in "production", defensive summarization and speech-to-text, have not been bleeding edge for a long time. It's just nice that I can now run them efficiently on my own hardware, without depending on SaaS/IaaS.
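For readers who want to try the speech-to-text side locally, here is a minimal sketch that runs entirely on your own hardware and writes a plain-text transcript you can later search. It assumes the open-source `openai-whisper` package; the model size and file names are illustrative, not a description of the setup above.

```python
# Minimal local speech-to-text sketch: transcribe an audio file and save a
# plain-text transcript that can later be searched with grep or similar.
# Assumes `pip install openai-whisper`; "base" and "talk.mp3" are illustrative.
import whisper

def transcribe_to_text(audio_path: str, out_path: str, model_size: str = "base") -> None:
    model = whisper.load_model(model_size)   # runs on GPU if one is available
    result = model.transcribe(audio_path)    # returns a dict containing "text"
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(result["text"])

if __name__ == "__main__":
    transcribe_to_text("talk.mp3", "talk.txt")
```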
reply
You can do it yourself and you'll gain more knowledge by doing it. Maybe even ask a human friend for a review.
I've used AI for this and I saw how silly it was to waste time on something I could do myself while also getting out of my comfort zone. It puts you in a low-level dependency zone, modifying something that should be authentic out of a need to appear better to those who will read it, which you are not: it comes out robotic and shallow.
reply
You can do it yourself
Transcribe hours of YouTube videos to make them searchable? Sure I can, but I can spend my time better. My GPU is otherwise idle, so why not?
Defensive summarization is just an anti-clickbait measure, protecting against wasting time on articles whose title doesn't correspond to the actual content, which unfortunately is common practice nowadays. It takes under 5 seconds of GPU time for an average article, but would cost me 10 minutes plus frustration for each one. I don't need more frustration from clickbait; I've been frustrated by it for years.
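As an illustration of what this kind of defensive summarization can look like, here is a rough sketch using a local `transformers` summarization pipeline: it prints the summary next to the headline so you can judge whether the title matches the content before spending time on it. The model name and the headline-comparison step are assumptions for the example, not the exact setup described above.

```python
# Sketch of "defensive summarization" as an anti-clickbait check.
# Assumes `pip install transformers torch`; the model name is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def clickbait_check(title: str, article_text: str) -> None:
    # Very long articles may exceed the model's input limit and get truncated;
    # chunking is omitted here for brevity.
    summary = summarizer(article_text, max_length=120, min_length=30, do_sample=False)
    print(f"Headline: {title}")
    print(f"Summary:  {summary[0]['summary_text']}")

if __name__ == "__main__":
    clickbait_check("You won't believe this one trick...", open("article.txt").read())
```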
a need to appear better to those who will read it, which you are not: it comes out robotic and shallow.
I don't need to appear better though? I don't care about appearances.
reply
This type of transcription already existed in the community, but unfortunately it didn't catch on; by that I mean it didn't have to be done by you. And why do you need to summarize a video like that? When faced with situations like this, the most common question I ask myself is whether it's worth it.
I don't care about appearances.
I misunderstood; I thought you would use summarization to make something more presentable in an email or another type of communication. That's what my criticism was aimed at, not at your case.
Yes, you'll be fine, especially considering that you're above average, or very close to it, in non-standard knowledge: the kind of knowledge that makes you free and immune to all the bullshit that comes to steal your freedom.
Not using it is entirely feasible, since you haven't needed it so far. As the article itself mentions, AI is reactive, not active: it depends on commands, and even when you set it to do repeated tasks it's just following your “from-to” instructions. There's no point arming the enemy with something so trivial when we already have software that does it and isn't an LLM.
reply
There's no point arming the enemy with something so trivial
Arming "the enemy" how though?
reply
Data. Running locally doesn't mean your information is completely protected. The model is processing and being trained; what guarantee do you have that it won't share insights with the developer, or that it won't do so in some future moment of carelessness during an update, or through extraction by an agent with an interest in data like this?
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
reply
21 sats \ 1 reply \ @optimism 4 Aug
what guarantee do you have that it won't share insights with the developer
For one, because I use my own inference code, not "the developer's code", but it's good to check nonetheless. I'll run some Wireshark tests later this week and let everyone know if I find anything fishy in things like llama.cpp or transformers.
FWIW, your concern is not without precedent; see for example #1057075 for something that does exactly what you say. This is why, as a coder, using an MS IDE or a fork of one is kind of a self-own, and always has been (and it's not that great quality software anyway).
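Not a substitute for the Wireshark capture mentioned above, but as a cruder in-process check one could combine Hugging Face's offline-mode environment variables with a socket "tripwire", so any attempt to phone home during local inference raises loudly. This is only a sketch under those assumptions; the model name is illustrative.

```python
# Crude network tripwire for local inference: force offline mode and block
# outbound sockets in this process, so any phoning-home raises an error.
# Not the commenter's actual Wireshark methodology; model name is illustrative.
import os
import socket

os.environ["HF_HUB_OFFLINE"] = "1"        # honored by huggingface_hub
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # honored by transformers

def _blocked(*args, **kwargs):
    raise RuntimeError("network access attempted during local inference")

# Disable all outgoing connections for this process (set before heavy imports).
socket.socket.connect = _blocked  # type: ignore[assignment]

from transformers import pipeline  # imported after the guard is in place

# Works only if the model is already cached locally; any download attempt fails.
summarize = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
print(summarize("Some long article text ...", max_length=60, min_length=20)[0]["summary_text"])
```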
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
Have to retain the skills. This is very true. We had a discussion about this not too long ago: #998489
reply
Unfortunately, I don't have the same technical knowledge as you, so my defenses in this case are the good old-fashioned ones: staying away and listening to what the community has to say. If you find anything, please share it.
your concern is not without precedent
Good old-fashioned telemetry delivering everything that someone with good knowledge can triangulate. Terrible.
We recently had ChatGPT data leaked into Google search results. It doesn't apply to private machines running their own models, but it's still worrying.
We had a discussion about this not too long ago
All the points raised there are very much what I observe in relation to people who make continuous use of it. Especially this one.