
Key passage: "(a) Artificial Intelligence must be safe and secure. Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use." Emphasis mine. This is similar to how the FDA regulates medicine: big players end up paying the regulating agency to "evaluate" the proposed solution. It takes years and is not effective -- unless "effective" means protecting incumbents who can afford to lobby.