Do you have AI/ML principles to share? I will include my principles and standards in the comments. I'd like your feedback and to see yours, if you're willing to share.
Senior Data Scientist in Miscellaneous, 1,001 - 5,000 employees
Here are my principles, from a personal point of view:
1.) Keep your existing legacy approaches as a fall-back for AI/ML methods!
If the AI/ML methods start to fail, support staff must be able to switch to reliable (if less performant or optimal) solutions until a data scientist can start investigating the issue.
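The fall-back idea in principle 1 can be sketched as a simple wrapper. This is a hypothetical illustration, not anyone's production code; `ml_model`, `legacy_rule`, and `validate` are placeholder names:

```python
def predict_with_fallback(x, ml_model, legacy_rule, validate):
    """Try the ML model first; fall back to the legacy rule on failure."""
    try:
        y = ml_model(x)
        if validate(y):           # sanity-check the ML output
            return y, "ml"
    except Exception:
        pass                      # any runtime failure also triggers the fallback
    return legacy_rule(x), "legacy"

# Hypothetical usage: a model whose output fails its own sanity check
ml_model = lambda x: -1.0             # produces an invalid (negative) value
legacy_rule = lambda x: max(x, 0.0)   # simple, reliable heuristic
validate = lambda y: y >= 0.0         # outputs must be non-negative

value, source = predict_with_fallback(2.5, ml_model, legacy_rule, validate)
# source == "legacy": the invalid ML output was rejected
```

The key design point is that the switch is automatic and observable (the second return value records which path was used), so support staff can see when the system is running on the legacy path.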
2.) Select the model that fits your problem, and learn from failed attempts!
There is always the possibility of exceeding the limitations and area of validity of a given modelling approach, for example when applying a Gaussian distribution to non-negative data.
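The Gaussian example in principle 2 is easy to demonstrate: a normal distribution fitted to strictly positive data (here synthetic lognormal samples, a sketch of my own, not from the original post) assigns real probability mass to impossible negative values:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # strictly positive

# Fit a Gaussian by moment matching
mu, sigma = data.mean(), data.std()

# Estimate the fitted Gaussian's probability mass below zero by simulation
samples = rng.normal(mu, sigma, size=100_000)
neg_mass = (samples < 0).mean()
print(f"P(X < 0) under the fitted Gaussian ~ {neg_mass:.2%}")  # clearly above zero
```

For lognormal(0, 1) data this puts roughly a fifth of the fitted distribution's mass below zero, even though the data contain no negative values at all — a clear sign the model's area of validity has been exceeded.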
3.) Never trust a model that you don’t understand and can’t explain!
Users won't trust it and will start overruling its results. In addition, you may be blamed for outcomes even when they stem from wrong input data.
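Principle 3's demand for understanding can be checked even with simple tools. For instance, permutation importance shows which inputs actually drive a model's predictions — a minimal NumPy sketch with a synthetic linear model; all names and data here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)
# feature 2 is pure noise and should carry no importance

# A "model" fitted by ordinary least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef
baseline = np.mean((y - predict(X)) ** 2)

# Permutation importance: how much does the error grow when a feature is shuffled?
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((y - predict(Xp)) ** 2) - baseline)

# Expected ordering: importance[0] >> importance[1] > importance[2] (near zero)
```

If the reported importances contradict domain knowledge, that is exactly the kind of model you should not yet trust.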
4.) Clearly understand the results of your model and don't ignore undesirable outcomes!
Undesirable but possible outcomes, in particular, may point to aspects you have overlooked.
5.) Explain your results in a clear manner, minimizing the risk of false conclusions!
This also helps users ensure that they use the model as intended and specified.
Enterprise Architect Expert in Energy and Utilities, 1,001 - 5,000 employees
Great feedback, Markus! Thanks so much.
Engineering Manager in Energy and Utilities, 1,001 - 5,000 employees
Yes
Co-founder/Practicum Director in Travel and Hospitality, 2 - 10 employees
I only have one principle, but I don't deny it is the most difficult: ensuring that the value of the project is greater than the cost of implementation and maintenance.
Data Scientist & Analytics Manager, Self-employed
In addition to the principles below, the key principles of AI/ML include continual monitoring and adaptation, responsible data handling, bias detection and mitigation, human-AI collaboration, explainability and interpretability, robustness and resilience, compliance and regulatory considerations, ethical decision-making, continuous learning and improvement, and accountability. These principles emphasize ongoing evaluation, ethical data practices, fairness, collaboration between humans and AI, transparency in model outputs, resilience to unexpected scenarios, adherence to regulations, a culture of learning, and taking responsibility for AI-related decisions. By adhering to them, organizations can foster trust, mitigate biases, ensure compliance, and make ethical and informed use of AI/ML technologies.
Artificial Intelligence (AI) only makes very low-risk decisions.
It does not make decisions that involve risk to safety, grid reliability, compliance, ethics, or financial results. AI cannot replace human judgment.
AI cannot be held accountable for mistakes, so it cannot be responsible for making decisions; the developer and the company are thus responsible for our AI outputs. Work generated by AI must be reviewed and validated for risk. Only the lowest-risk decisions can be left to AI without human judgment, and even these models should be constantly reviewed and refined. (e.g., Should we water the grass today? How should we optimize this route? Should we create a work order?)
For generative AI models, training data quality must match the criticality of the AI output
It is critical to avoid using outdated or incorrect data to train the models that enable our work. Applying this principle means we will very carefully consider the scope of data used to train generative models, with input from business partners, architects, and data platform stakeholders.
AI & Machine Learning tools must be tested and validated by humans
Just like any other software, Machine Learning models must be validated by humans and tested according to the risk they present to the organization.
AI & Machine Learning tools must never compromise our company’s values and beliefs
AI presents unique ethical hazards which we have not encountered in the same form. We build AI models that conform to our values and beliefs, and vet these models for ethical risk.
AI & Machine Learning tools must be governed to ensure they are complying with our company’s AI/ML Principles and standards
**AI/ML Standards**
Proprietary data must not be fed into public AI models as prompts for generating output.
It’s fine to use public AI models to generate output from generic, publicly available inputs or information classified as Public Information. Our members and contractors are forbidden from sharing any other category of our company’s information with software-as-a-service providers with whom we do not have a contract. See [links to policies]
Outputs generated from public AI models can only be used internally.
The copyright status of AI model outputs remains unsettled. Our company must not present AI-generated content as company-produced work.
Generative AIs must not add AI responses to their own or other AIs' training corpora