Key AI principles for companies introducing Generative AI or LLMs into the workplace

Whilst Generative AI offers benefits to organisations across a broad range of areas, it's critical to understand that the technology comes with significant risks.

Four critical AI principles to put in place early that will dramatically reduce your risk are as follows:

1. Establish an understanding of, and processes around, bias, and ensure all AI technologies used by the company are free of bias.

2. Ensure the technologies you consider are robust and secure (and verify this yourself; don't just trust vendors).

3. Ensure any models you use directly or indirectly with customers are explainable. If you're not already legally required to do this, it's best to do it anyway.

4. Establish data lineage so you know exactly where your model's information comes from.

The first principle, dealing with bias, is by far one of the biggest issues currently being exposed with generative AI and, more broadly, with most Large Language Models (LLMs). Models can easily become discriminatory; in many cases the training data is the source of the problem, because understanding how to counter bias is a genuinely difficult problem. Bias is the last thing you want customers to be exposed to, such as a facial recognition system failing to identify people of certain ethnic backgrounds, or a chat bot telling a customer to call a phone number when that customer has a voice or hearing disability.
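One practical way to start the processes the first principle calls for is to measure outcomes per group. The sketch below is a minimal illustration, using the widely cited "80% rule" (demographic parity ratio) on made-up approval decisions; the group labels and data are assumptions, not from any real system.

```python
# Minimal sketch: check a model's decisions for group-level bias by
# comparing approval rates across groups (demographic parity ratio).
# The decisions below are illustrative toy data.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of lowest to highest approval rate; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 5 + [("B", False)] * 5)

rates = selection_rates(decisions)   # A approved 80%, B approved 50%
ratio = parity_ratio(rates)          # 0.5 / 0.8 = 0.625
print(f"parity ratio: {ratio:.3f}")  # below 0.8 -> flag for human review
```

A check like this won't prove a model is fair, but running it routinely on real decision logs gives the company an early, auditable signal that something needs investigating.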

The second principle, robustness and security, is a little harder to achieve, but it is essentially about ensuring you have the right people working with you who can assess the quality of the solution architecturally, with a strong focus on security and reliability. Never let the solution be completely outsourced.

The third principle is often debated. A purist's view is that the most effective models are often "non-deterministic": anything that models how the mind works, such as a neural network, is effectively impossible to explain, and if you force a model to attempt a "chain of thought" process, the outcome is likely to be completely different from a process that doesn't attempt to create a rational way of machine thinking. The problem is that in many business models it is often necessary to explain in court why a decision was made when a customer is impacted. This is especially true of legal firms, financial services companies such as banks, and healthcare, but it can also be necessary in other industries. Very few AI techniques can be well explained, which is why classical models are usually the preferred approach to AI in those industries. For those industries, BRMS (business rule management system) solutions and classical classification systems that regulators understand are king.
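To make the contrast concrete, here is a minimal sketch of a BRMS-style decision in the spirit described above: every outcome is traceable to the explicit rule that produced it, so the decision can be explained after the fact. The rule names, thresholds, and applicant fields are illustrative assumptions, not any real lending policy.

```python
# Minimal BRMS-style sketch: rules are evaluated in order, and each
# decision is returned together with the rule that fired, giving a
# human-readable audit trail. All rules and numbers are illustrative.

RULES = [
    ("R1: reject if income below minimum",
     lambda a: a["income"] < 20_000, "reject"),
    ("R2: reject if existing debt over 50% of income",
     lambda a: a["debt"] > 0.5 * a["income"], "reject"),
    ("R3: approve otherwise",
     lambda a: True, "approve"),
]

def decide(applicant):
    """Return (decision, rule_fired) -- the rule IS the explanation."""
    for name, condition, outcome in RULES:
        if condition(applicant):
            return outcome, name

decision, reason = decide({"income": 45_000, "debt": 30_000})
print(decision, "-", reason)  # rejected under rule R2
```

The point of a structure like this is that, unlike a neural network's weights, the explanation exists by construction: a regulator or court can read the exact rule that drove the outcome.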

Finally, data lineage, a term those in finance will be very familiar with, is about clearly showing where the model is sourcing its information. This is critical because when a decision is made, you need to be able to show the information it used is true and accurate. Models like ChatGPT use public data collected from the internet that is not only filled with bias but can also contain large amounts of factually wrong data. If you use these models to make decisions, then you as a company could be liable for the consequences to your customers; blaming the model isn't likely to go far, as it is common knowledge that those models contain a large amount of inaccurate data. (My recent experiments with ChatGPT 5 showed it to be wrong nine times out of ten at operations such as basic code generation.) Focusing on data lineage will typically lead companies to look at more specialised models for their business, where they have more control over the quality of information used to feed the models. Once that level of control is in place, the models can then be used with customers.
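As a minimal sketch of what "establishing data lineage" can mean in practice: register every document fed to a model with its origin and a content hash, then attach that provenance to any answer the model produces. The function names, document IDs, and record fields here are illustrative assumptions, not a specific lineage product or standard.

```python
# Minimal data lineage sketch: each input document is registered with
# where it came from and a SHA-256 content hash, so any model output
# can be traced back to exactly which inputs it was built from.
import hashlib
from datetime import datetime, timezone

LINEAGE = {}  # doc_id -> provenance record

def register_source(doc_id, text, origin):
    """Record the origin and a tamper-evident hash of an input document."""
    LINEAGE[doc_id] = {
        "origin": origin,
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    return text

def answer_with_lineage(answer, used_doc_ids):
    """Pair a model output with the provenance of every input it used."""
    return {"answer": answer,
            "sources": {d: LINEAGE[d] for d in used_doc_ids}}

register_source("policy-001", "Refunds within 30 days.", "internal CMS")
result = answer_with_lineage("Refunds are allowed within 30 days.",
                             ["policy-001"])
print(result["sources"]["policy-001"]["origin"])  # the answer's source
```

Even this small amount of bookkeeping supports the claim above: when a decision is challenged, the company can point to the exact, hashed inputs behind it rather than to an opaque public-internet training set.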