Thanks for tagging. Definitely an interesting thing to consider. I don't know enough about how feasible or difficult this is, and I'm not an expert on proofs in computer science / software, so take everything I say with a grain of salt.
One thing about proofs is that you often have to make simplifying assumptions to get them to go through. You assume away real-world frictions, like hardware malfunctions, concurrency problems, bit flips from gamma rays, etc. In econ, you assume away things like incoherent or extreme preferences, and you assume stability of preferences.
I think it sounds plausible for an AI to read a codebase, develop a formal mathematical model of it, and prove that its outputs satisfy certain conditions, but only under some assumptions about the physical hardware environment and the timing of compute cycles or whatever. It could also probably prove certain game-theoretic properties, though it might need to assume rational agents to do so. I have a much harder time seeing how an AI could prove properties of anything that has to take physical reality, or arbitrarily irrational agents, into consideration, but I'm not sure.
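To make that "proof under assumptions" point concrete, here's a toy sketch in Lean 4 (names and the example function are purely illustrative, not from anything discussed above). The theorem guarantees a property of the *mathematical model* of the code; it says nothing about bit flips or hardware faults, which the model assumes away:

```lean
-- A toy "program": clamp a value into the range [0, 100].
def clamp (x : Nat) : Nat := min x 100

-- The proved guarantee: the output never exceeds 100.
-- Note the proof holds for the model only; real-world frictions
-- (hardware faults, gamma-ray bit flips) are outside its scope.
theorem clamp_le (x : Nat) : clamp x ≤ 100 := by
  unfold clamp
  exact Nat.min_le_right x 100
```

A real verification effort proves much richer properties than this, but the shape is the same: the guarantee is only as good as the model's assumptions.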
The idea to formalize and link up critical components in Lean using LLMs, like Amazon did for authorization, is quite compelling. This used to be cost-prohibitive; now it may actually become a really great hard requirement to apply to an SBOM. 🤔

ping @SimpleStacker (as the only stacker I've seen formulate proofs of functional ideas they offer on SN): do you have any thoughts about this approach, where a properly integrated formal proof can make, for example, fuzzing better? Like what Amazon did: simultaneously fuzz the production code and the proof, and check for mutual determinism.
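For anyone unfamiliar with the "fuzz both and compare" idea: the sketch below is my own hypothetical illustration (all names invented, and a trivial ACL check standing in for real authorization logic), not Amazon's actual setup. You run random inputs through both the production implementation and an executable form of the formal model, and any divergence is a bug in one or the other:

```python
# Hypothetical differential-fuzzing sketch: feed the same random
# inputs to the production code and to the formal model's executable
# reference, and count disagreements. Names are illustrative.
import random

def production_is_authorized(user: str, resource: str, acl: dict) -> bool:
    """Stand-in for the production implementation."""
    return resource in acl.get(user, set())

def model_is_authorized(user: str, resource: str, acl: dict) -> bool:
    """Stand-in for the executable form of the formal model."""
    return any(r == resource for r in acl.get(user, set()))

def differential_fuzz(trials: int = 10_000, seed: int = 0) -> int:
    """Return the number of inputs where the two disagree."""
    rng = random.Random(seed)
    users = ["alice", "bob", "carol"]
    resources = ["doc1", "doc2", "doc3"]
    mismatches = 0
    for _ in range(trials):
        # Random access-control list, then a random query against it.
        acl = {u: {r for r in resources if rng.random() < 0.5}
               for u in users}
        u, r = rng.choice(users), rng.choice(resources)
        if production_is_authorized(u, r, acl) != model_is_authorized(u, r, acl):
            mismatches += 1
    return mismatches
```

Here the two stand-ins are deliberately equivalent, so the fuzzer reports zero mismatches; in practice the value is that any nonzero count pinpoints a concrete input where code and proof disagree.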