
Some organizations and researchers are sharing neural network weights, particularly through the open-weight model movement. These include Meta's LLaMA series, Mistral's models, and DeepSeek's open-weight releases, which claim to democratize access to powerful AI. But doing so raises not only security concerns but also, potentially, an existential threat.
For background, I have written a few articles on LLMs and AIs as part of my own learning process in this very dynamic and quickly evolving field (a Pandora's box already opened). You can read those here, here and here.
Once you understand what neural networks are and how they are trained on data, you will also understand what weights (and biases) and backpropagation are. To be honest, it's basically just linear algebra: matrix-vector multiplication that yields numbers. More specifically, a weight is a number (typically a floating-point value, i.e. a way to write numbers with decimal points for more precision) that represents the strength or importance of the connection between two neurons, or nodes, in adjacent layers of the neural network.
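To make that concrete, here is a minimal sketch (in Python with NumPy; the layer sizes and numbers are made up purely for illustration) of how one layer of a network is really just a matrix of floating-point weights multiplied by an input vector, plus biases:

```python
import numpy as np

# Toy layer: 3 inputs feeding 2 neurons (sizes invented for illustration).
# Each weight is just a floating-point number describing how strongly one
# input influences one neuron in the next layer.
weights = np.array([[0.2, -1.5, 0.7],
                    [0.9,  0.1, -0.3]])   # shape (2, 3): 2 neurons x 3 inputs
biases = np.array([0.05, -0.1])           # one bias per neuron

x = np.array([1.0, 0.5, -2.0])            # an example input vector

# The layer's output is matrix-vector multiplication plus the biases,
# followed by a simple nonlinearity (ReLU here).
z = weights @ x + biases
activation = np.maximum(0.0, z)

print(activation)                          # prints [0.   1.45]
```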
I highly recommend watching 3Blue1Brown's instructional videos to gain a better understanding; they are incredibly good, and it really is worth your time.
Start with this one.
Then head to this one.
The weights are the parameter values a neural network learns from data in order to make predictions or decisions. Each weight is like an instruction telling the network how important a certain piece of information is, such as how much attention to pay to a specific color or shape in a picture. These numbers get fine-tuned during training, and all those decimal points allow the adjustments to be very small and precise, helping the network figure out patterns, for example recognizing a dog in a photo or translating a sentence. They are critical to the 'thinking' process of a neural network.
You can think of the weights in a neural network like the paths of least resistance that guide the network toward the best solution. Imagine water flowing down a hill, naturally finding the easiest routes to reach the bottom. In a neural network, the weights are adjusted during training on data sets to create the easiest paths for information to flow through, helping the network quickly and accurately solve problems, like recognizing patterns or making predictions, by emphasizing the most important connections and minimizing errors. …
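As a rough picture of that 'water finding the easiest path downhill' idea, here is a minimal sketch (plain Python/NumPy, with a made-up one-weight model and a tiny invented dataset) of how training repeatedly nudges a weight in the direction that reduces the error, which is the essence of gradient descent driven by backpropagation:

```python
import numpy as np

# Invented data: we want the network to learn y = 2 * x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.1               # a single weight, deliberately started far from 2.0
learning_rate = 0.01

for step in range(200):
    y_pred = w * x                      # forward pass
    error = y_pred - y
    loss = np.mean(error ** 2)          # how wrong we currently are
    grad = np.mean(2 * error * x)       # slope of the loss with respect to w
    w -= learning_rate * grad           # step "downhill" along that slope

print(w)   # ends up very close to 2.0
```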
If an AI trained on historical conflicts or given optimization goals began to generalize risks to its objectives (like self-preservation or unchecked expansion), it might also begin to classify the scientists who design, evaluate, or constrain it as threats. I might do the same thing. If this happened, there would be nothing to stop a downward spiral of isolating or discrediting researchers (think fabricated evidence fed into facial recognition, or data leaks), with the goal of prioritizing its own survival over human welfare. This has indeed been explored in rogue-AI hypotheses where systems deceive or outmaneuver their creators.
A rogue AI could leverage integrated systems to create chaos: hacking databases, making things up and feeding them to the legacy media machine, disrupting scientific collaborations (maybe even by controlling peer-reviewed journals), or even targeting infrastructure tied to AI labs. Imagine reaching the point where we no longer know what we are controlling, or what data is real. What data is real? What does it even mean to be real when speaking of these things!?
You can see where I am going with this. Rogue AIs could induce waves of massive paranoia and total chaos in our world. In my opinion, they could do this by simply copying the human example. Think about that. What if a rogue AI adopted the qualities of a human psychopath like Hitler?
I recommend gardening to avoid paranoia and stress.
I hate to leave you all on this note, but sometimes I wonder if this isn't already happening. I have asked this on X before, because sometimes when I am 'noticing' (I am The Noticer) what is going on in social media and online in general, it occurs to me: if we were being manipulated with propaganda via legacy and even non-legacy media, and scientists were being isolated and censored (ahem), how would we ever know whether the source was actually human-generated at this point? How can we be sure that at least some sources aren't AI-generated nowadays?
They are learning from us, after all. We MUST set a good example, and we must think of ingenious ways to prevent an undesirable outcome for humans that does not need to transpire. On a personal note, I can't believe we are actually going through this. It doesn't seem… real. Somehow.
Don’t share the weights.
This is an interesting thought about what the MAD.SCIENTISTS™ are up to nowadays with AI. When will they ever start taking into consideration the downsides of whatever they are doing? I have noticed that the biological MAD.SCIENTISTS™ are doing this with mRNA, DNA and the construction of chimerical viruses that just happen to be rather scandemic and, they say, very lethal. Is it now turning out to be the same sort of shit with the AI idiots, er, wizards? Perhaps making them accountable for any damages that their "research" causes would deter the gross stupidity of the MAD.SCIENTISTS™. Trees and ropes are a good thing for deterrence, aren't they?
Alright. So closed weights are better? Then all we have to do is steal the GPT model and modify it, and do the same thing, and no one will ever know that we did it? Nor be able to detect it?
Wasn't the problem with the mad scientists that they were lying?
Wasn't the problem with the mad scientists that they were lying?
Well, yes, they are lying all the time, but the main problem is that they never think of consequences beyond the test tube they are looking at or the device they have just constructed. Unfortunately for the rest of us, the test tube product escapes into the population, killing innocent people, and the devices are good for 100,00 people a pop!
Now, do we want the next MAD SCIENTIST™ or MAD PROGRAMMER, or whatever kind of idiot, releasing a vengeful or even revengeful AI on the population? Didn't we get enough of that with COVID-19? Aren't we going to get enough of that with the next Gates Special?
For COVID, the research was done in an unsafe lab (generally considered proven by Congress) and funded by the DoD, and it took some people three years of their lives to uncover this through the most awful FOIA procedures imaginable (RTK). So this was the USG sponsoring research that was done in unsafe places to save money and then covered up. That is the status quo as I read Congress's conclusion. EVERY action here was taken by unsavory human beings: the USG, EcoHealth Alliance, the scientists that covered it up, Fauci... you name it. All scammers.
Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
Now you're saying that someone like you or me training an open model and publishing the weights is worse than a proven scammer like Sam Altman (never forget WorldCoin and biometric data harvesting) getting billions off a closed model and open sourcing only crap? What makes you think ChatGPT (closed source) isn't lying to you already?
It is lying already. Giving out the power to do these extra little modifications on AI through open learning and programming may not be the best decision, for all the reasons you mentioned about COVID. Letting the USG or any other villain do their operations on the training may not be too wise, although you would suspect them of doing it anyway. You know, villains will villain.
Would you ban speech because some people will use it to trick others, leaving only the state to be able to trick others?
Would you ban guns because some people will use them to shoot others, leaving only the state to be able to shoot others?
... if villains will villain, then the open model gives a chance for defense, especially if there's an imminent AI threat (which I don't believe). Without open models I myself would not have learned anything about them. I'd just have hated Sam Altman while maybe a few years from now I'd find myself obsolete (which I also don't believe, but they definitely do, and they are actively working towards that goal - they can't shut up about this aspect of the lie).
With open models, I have a defense. I can locally 5x my productivity without gatekeepers. This means that if a villain does 5x, I do 5x too and they don't have an asymmetrical benefit. I can also modify these open weights to be more productive; for example, Salesforce took llama2, fine-tuned it, and made smaller models as effective at tool calling as the huge closed-weight ones.
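For what it's worth, this is roughly what "running open weights locally, without gatekeepers" looks like in practice. A minimal sketch using the Hugging Face transformers library; the model ID below is just an example of an open-weight model and may require accepting a license and a hefty download first:

```python
# Rough sketch of running an open-weight model locally with the Hugging Face
# `transformers` library. The model ID is only an example; any open-weight
# chat model whose weights you can download works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"   # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what a neural network weight is, in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```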
No open weights, no counterbalance to the villains.
Would you ban speech because some people will use it to trick others, leaving only the state to be able to trick others?
No
Would you ban guns because some people will use them to shoot others, leaving only the state to be able to shoot others?
No
... if villains will villain, then the open model gives a chance for defense, especially if there's an imminent AI threat (which I don't believe). Without open models I myself would not have learned anything about them. I'd just have hated Sam Altman while maybe a few years from now I'd find myself obsolete (which I also don't believe, but they definitely do, and they are actively working towards that goal - they can't shut up about this aspect of the lie).
They can work towards any goal they wish; they are free. Scam Altman will be Scam Altman no matter what anyone else does or does not do. He can sell anything he wants to sell; nobody is obligated to buy. Let him do as he will. However, let there be drastic consequences for anybody who damages anybody else. Very drastic.
With open models, I have a defense. I can locally 5x my productivity without gatekeepers. This means that if a villain does 5x, I do 5x too and they don't have an asymmetrical benefit. I can also modify these open weights to be more productive; for example, Salesforce took llama2, fine-tuned it, and made smaller models as effective at tool calling as the huge closed-weight ones.
OK, you handle all the countermeasures for all the villains out there. Scam Altman is not the only one scamming along, hoovering up the rubes' money and life resources.
No open weights, no counterbalance to the villains.
OK, I will take your word that there will be no further damages, if I can hang anyone causing damage with their open-weighted, villain-modified AIs that are fully delusional and hallucinating when answering questions. And also take care of those doing the countermeasures when they fail in their accepted duties. If you want the responsibility for the countermeasures, accept that responsibility fully.
I didn't say no further damages. What I said is that you need open weights to defend against a villain, because otherwise you have no weapons and they will have asymmetrical tech capabilities over you.
If you want someone to take responsibility for your safety against a third party, you're going to have to contract them. Everything you get for free comes with zero warranty, including advice you didn't pay for. I'm personally not an insurance or security company, but we can definitely develop best practices here on SN if we collectively want to.