21 sats \ 2 replies \ @cy 18 Jul \ on: A.I. Is About to Solve Loneliness. That’s a Problem AI
It is truly sad what's happening on the internet. Small niche communities like Stacker News and curated Discord servers seem to be the only way forward; you never know who's a bot on the internet anymore.
reminds me of this: https://www.techworm.net/2025/04/university-researchers-secretly-used-ai-on-reddit.html
Fossil fuels are a real problem, but taxing them is not the way to solve it. Fossil fuels are bad; switch to nuclear and solar. I can't tell if this post is a shitpost.
As another stacker said, it's not that black and white. The glaciers are melting, and governments and corporations are to blame.
inb4 I'm declared a pitchfork-wielding communist
I'm far from that, but climate change is very real, and most stackers will read this as an argument against PoW. That's not what I'm talking about. You should be able to do what you want with energy; the issue is that the energy you're using isn't clean, and it's killing the world.
Satoshi was actually an AI from the future that invented Bitcoin to accelerate the development of GPUs, which would eventually create AI, and hence create itself (Satoshi, the AI). It's like the Interstellar time-loop thing. (This is a joke, btw.)
Yeah, it's a costly tradeoff; AI-built apps are known to be insecure at scale. People who have no idea how secure applications work end up shipping their API keys and other secrets on the client side, etc. I'd suggest you learn the basics of web dev first and then use AI: it speeds up your dev process, and since you understand what's happening, it won't be as risky.
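To make the client-side-secrets point concrete, here's a minimal sketch of a naive scan for hard-coded credentials in a client bundle. The function name and regex are my own illustrative assumptions, not a real tool; anything this matches is a key the browser would ship to every visitor.

```python
import re

# Naive pattern for hard-coded credentials: an "api key"-ish name assigned
# a long literal string. Purely illustrative, not a production scanner.
KEY_PATTERN = re.compile(
    r"""(?:api[_-]?key|secret|token)\s*[:=]\s*['"]([A-Za-z0-9_\-]{16,})['"]""",
    re.IGNORECASE,
)

def find_leaked_secrets(source: str) -> list[str]:
    """Return string literals that look like credentials baked into client code."""
    return KEY_PATTERN.findall(source)

# A key assigned like this in frontend JS ends up in everyone's browser.
bundle = 'const API_KEY = "sk_live_abcdef1234567890";'
print(find_leaked_secrets(bundle))
```

The fix is to keep the key server-side (e.g. in an environment variable) and have the client call your own backend, which makes the upstream API call on its behalf.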
Personally, I used to use AI so much that I couldn't write basic scripts without it. So I made an active decision to stop, lol. Cogsec is important in the age of AI.
Pretty cool! A heads up, though: AI doesn't keep app security in mind. It's fine for making games and small apps you'd use yourself, but I'd steer away from production-grade apps, i.e. using stuff like Cursor or Windsurf to build a SaaS. Good luck with your endeavours!
Muting this, since people here apparently can't read English and comprehend ideas, and are anti-environment for some reason. Strange times. I wonder what people will do with their wealth when there's no world left.
Holy shit, are you being dense on purpose? It was an app idea for your peers, not the government. It's like the pocket change you'd give to your friends. Maybe read the whole thing before going all in on the toxic-maximalist roleplay.
TIL, thanks for explaining. I don't think it would work at an individual scale, but it's a cool concept regardless.
And yes, you're right: there's no way to monitor your personal footprint without giving governments (especially ones like the EU) the right to violate your privacy and tax you to hell for it.
Yes, I can generalize it: I know how to add two integers and how carrying over works; AI does not. Neural networks were modeled after the impulse/data flow of human neurons, not on actual neurobiology (this is a common misconception).
You said ChatGPT solved a math problem that you solved. It did so because the data was present in its dataset and it has seen many problems like it; this is, as I said before, a weird case of generalisation.
Letting AI evolve its own code, given enough time and compute, may get us there sooner than we think. The human brain has 500 trillion synapses vs. 1.8 trillion parameters in GPT-4. We similarly do math much worse than a classical computer and are bad at generating random numbers. The thought process in our heads is verbal and very similar to an LLM; this internal dialogue gives us self-awareness. AI has the huge advantages of replication, immortality, and speed of communication. I wouldn't be so cocky.
I would agree if human synapses and neurons were similar to LLM parameters, and if humans learned through backpropagation (which they don't, btw). I'm not talking about the quality of the math but how it's done: we know the rules of addition and apply them, while LLMs memorize 2+2=4 and fail to generalize to 427382098402973498279382 + 9847293847203874892374923; they completely make things up. There is no "thought process" in LLMs at all; CoT is a sham, technically just a prompting technique. Please read the paper I mentioned in the original post.
I am not denying their usefulness, but don't bet on LLMs to come up with anything genuinely new.