What methods/practices for GenAI risk management or AI accountability SHOULD have been in Biden’s executive order on AI?
The EO should never have been issued in the first place. Regulation is futile. Even if we think it is a good idea, which it most definitely is not, there is a process in this country for passing laws and rulemaking. The EO bypassed that entirely, further fueling government dysfunction by ruling by fiat instead. The odds that we will be able to regulate China, Russia, and others are exactly zero. We have instead handed them a fantastic gift, helped the established AI players achieve regulatory capture, and thrown a wet blanket over the rest of the domestic market - especially smaller players, new entrants, and those on the bleeding edge. The entire thing was massive overreach and is likely unconstitutional. Tragic mistake. (And imagine how these newly minted powers might be used and abused by Biden's successor - consider those scenarios carefully.)
It is still not clear whether all companies will be required to submit AI security testing results to the government under this directive.
A review panel should have been established to evaluate the premise and architecture of proposed AI models before they are built, weighing the ethics and morality of each model's existence. An ounce of prevention is worth a pound of cure.
A comprehensive framework could be established to assess, mitigate, and monitor the risks associated with AI systems - especially GenAI, given its broader capabilities - with standards and best practices defined for each of those activities.
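To make the idea concrete, here is a minimal sketch of what a machine-readable risk register under such a framework might look like. This is purely illustrative: the fields, categories, severity scale, and scoring rule are all assumptions, not anything specified by the EO or an existing standard.

```python
# Hypothetical sketch of a risk register a GenAI risk framework might
# standardize. Every field, category, and threshold here is illustrative.
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class Risk:
    """One register entry: an identified risk plus its controls."""
    name: str
    category: str                # e.g. "misuse", "bias", "security"
    severity: Severity
    likelihood: float            # estimated probability in [0, 1]
    mitigations: list[str] = field(default_factory=list)
    monitoring_metric: str = ""  # what gets tracked after deployment

    def score(self) -> float:
        """Severity-weighted exposure score, used only for triage ordering."""
        return self.severity.value * self.likelihood


@dataclass
class RiskRegister:
    system_name: str
    risks: list[Risk] = field(default_factory=list)

    def triage(self) -> list[Risk]:
        """Return risks sorted from highest to lowest exposure."""
        return sorted(self.risks, key=Risk.score, reverse=True)


# Usage: register two illustrative risks for a hypothetical GenAI system.
register = RiskRegister(
    system_name="example-genai-model",
    risks=[
        Risk("Prompt-injection data exfiltration", "security",
             Severity.HIGH, 0.4,
             mitigations=["input filtering", "output review"],
             monitoring_metric="flagged-prompt rate"),
        Risk("Demographic performance gap", "bias",
             Severity.MEDIUM, 0.6,
             mitigations=["disaggregated evaluation"],
             monitoring_metric="per-group error rate"),
    ],
)
for risk in register.triage():
    print(f"{risk.name}: score={risk.score():.2f}")
```

The point of a standardized structure like this is that assessment, mitigation, and monitoring all hang off the same record, so a regulator or auditor could compare registers across systems rather than reading free-form reports.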