
To Share Weights from Neural Network Training Is Dangerous
• Activist Post - Brownstone Institute

Some organizations and researchers are sharing neural network weights, most visibly through the open-weight model movement. These include Meta's Llama series, Mistral's models, and DeepSeek's open-weight releases, all of which claim to democratize access to powerful AI. But doing so raises not only security concerns but also a potentially existential threat.
For background, I have written a few articles on LLMs and AI as part of my own learning process in this very dynamic, quickly evolving field (a Pandora's box, now open). You can read those here, here, and here.
Once you understand what neural networks are and how they are trained on data, you will also understand what weights (and biases) and backpropagation are. At bottom it is just linear algebra: matrix-vector multiplications that yield numbers. More specifically, a weight is a number (typically a floating-point value, a format for writing numbers with decimal points for greater precision) that represents the strength or importance of the connection between two neurons, or nodes, in different layers of the neural network.
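To make that concrete, here is a minimal sketch of a single dense layer in Python with NumPy. The layer sizes, input values, and the ReLU activation are illustrative assumptions, not taken from any particular model; the point is simply that each entry of the matrix W is one learned weight, and the forward pass is a matrix-vector multiplication plus a bias.

```python
import numpy as np

# A single dense layer: output = activation(W @ x + b)
# Each entry of W is one weight: the strength of the connection
# between an input node and an output node. b holds the biases.
rng = np.random.default_rng(0)          # illustrative random values

W = rng.normal(size=(3, 4))             # 3 output neurons, 4 input neurons
b = rng.normal(size=3)                  # one bias per output neuron
x = np.array([0.5, -1.0, 2.0, 0.1])     # example input vector

z = W @ x + b                           # matrix-vector multiplication plus bias
a = np.maximum(z, 0)                    # ReLU activation

print(a)                                # the layer's output
```

Sharing a trained model's weights amounts to publishing every entry of matrices like W (and bias vectors like b) for every layer, which is all anyone needs to run or further train the network.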