Published: 26 April 2023
Using NLTs responsibly is challenging because both the responsible AI discipline and NLTs themselves are constantly evolving. Application and software engineering leaders should take the actions covered in this research to mitigate the ethical, liability and social risks arising from the application of NLTs.
The responsible AI framework is growing in importance and becoming better understood by vendors, buyers, society and legislators, as the general public's and authorities' requirements for the responsible use of natural language technologies (NLTs) become more demanding.
As NLT-enabled solutions evolve rapidly, further fueled by the hype around generative AI, leaders and employees feel urged to leverage such technologies to gain competitive advantage or improve efficiency, often neglecting the security and privacy risks these technologies entail.
When adopted in an unstructured, casual way, responsible AI tooling for “bias mitigation” or “explainability enhancement” creates a false sense of security.