
Glossary

Clarity. Trust. Accuracy.

Explore the must-know definitions of Responsible AI & AI Governance.

  1. EU AI Act: Artificial Intelligence Act. A comprehensive legal framework proposed by the European Commission to govern safe and trustworthy AI systems within the EU internal market.

  2. Algorithmic bias, see Bias.

  3. ALTAI: The Assessment List for Trustworthy Artificial Intelligence. A practical tool issued by the European Commission to help businesses and organisations self-assess the trustworthiness of their AI systems under development.

  4. Artificial Intelligence (AI): A non-human program or model that can solve sophisticated tasks.

  5. Artificial Neural Networks (ANN): A type of AI model inspired by the networks of neurons in the human brain.

  6. Bias: Stereotyping, prejudice or favouritism towards some things, people or groups over others. It can result from a systematic error in a sampling or reporting procedure, or from prejudiced assumptions made when designing AI models.

  7. Classification: The process of distinguishing between two or more discrete classes already labelled by humans.

  8. Clustering: The process of grouping related examples without existing labels.
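
The contrast between clustering and classification (above) is easiest to see side by side. A minimal sketch, assuming scikit-learn is available; the synthetic dataset and model choices are illustrative, not prescriptive:

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 300 points in 3 groups; y holds the human-provided labels.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Classification: learn to distinguish classes that are already labelled.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))          # predicted class for the first 5 examples

# Clustering: group related examples without using the labels at all.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5])                # cluster assignment for the same examples
```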

  9. Concept drift: The case where the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.
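
In practice, concept drift is often surfaced by monitoring a production model's accuracy over time. A minimal sketch in plain NumPy; the window size and alert threshold are illustrative assumptions:

```python
import numpy as np

def rolling_accuracy(y_true, y_pred, window=100):
    """Accuracy over consecutive windows of predictions."""
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    n_windows = len(correct) // window
    return correct[: n_windows * window].reshape(n_windows, window).mean(axis=1)

def drift_alerts(y_true, y_pred, window=100, threshold=0.85):
    # Flag windows where accuracy falls below an agreed baseline.
    acc = rolling_accuracy(y_true, y_pred, window)
    return [i for i, a in enumerate(acc) if a < threshold]
```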

  10. Data creep: The case where an AI model incorporates more data and/or different data sources over time to improve its predictive power.

  11. Deep learning (DL): A subset of machine learning based on neural networks with three or more layers: the input and output layers, and at least one hidden layer in between. Modern DL models can have dozens or even hundreds of hidden layers.
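
To make the layered structure concrete, here is a minimal forward pass through a network with one hidden layer, in plain NumPy; the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# Input layer (4 features) -> hidden layer (8 units) -> output layer (1 unit).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))        # one input example
hidden = relu(x @ W1 + b1)         # hidden-layer activations
output = hidden @ W2 + b2          # network output
print(output)
```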

  12. Industry-focused Auditing (IFA): A GRC mechanism that allows organisations to operationalise their compliance commitments and validate claims made about their adherence to the EU AI Act.

  13. Expert system: A system that uses AI technology to simulate the judgement and behaviour of a human or an organisation that has expert knowledge and experience in a particular field. Expert systems are generally rule-based or deterministic.

  14. Explainability: A set of processes and methods that enables human users to comprehend and trust the results and output created by machine learning algorithms.
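
One widely used family of such methods is feature-importance analysis. A minimal sketch, assuming scikit-learn; permutation importance is just one technique among many:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does accuracy drop when each feature is randomly shuffled?
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)   # higher = feature matters more to the model
```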

  15. GDPR: The General Data Protection Regulation. A European Union law on data protection and privacy.

  16. Governance, Risk and Compliance (GRC): The integrated collection of capabilities that enable a business to reliably achieve objectives, address uncertainty, and act with integrity, in other words, to achieve Principled Performance.

  17. HLEG: The High Level Expert Group on AI. A group of experts appointed by the European Commission to provide advice on its artificial intelligence strategy.

  18. Hyperparameter: A parameter that controls the learning process, set by the model designer rather than learned by the model from data. These parameters can directly affect how well a model trains.
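
The contrast with a learned parameter (see Parameter, below) is easiest to see in code. A minimal gradient-descent sketch: `learning_rate` is a hyperparameter chosen by the designer, while the weight `w` is a parameter learned from the data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X + rng.normal(scale=0.1, size=100)   # true slope is 3.0

learning_rate = 0.1   # hyperparameter: set by the designer, not learned
w = 0.0               # parameter: learned from the data below

for _ in range(200):
    grad = -2.0 * np.mean((y - w * X) * X)      # gradient of mean squared error
    w -= learning_rate * grad

print(w)   # converges near the true slope, 3.0
```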

  19. Interpretability: The ability to explain or present a machine learning model’s reasoning in terms understandable to a human.

  20. Machine Learning (ML): A subset of AI, which builds (trains) a predictive model from input data. Because the model is learned from data rather than explicitly programmed, these AI systems are probabilistic.

  21. Model creep, see Model drift.

  22. Model drift: The degradation of model performance due to changes in data and relationships between input and output variables.

  23. Neural Nets, see Artificial Neural Networks.

  24. Parameter: A variable of a model that the machine learning system learns on its own.

  25. Prediction: A model’s output when provided with an input example.

  26. Principled Performance: The capabilities that integrate the governance, management, and assurance of performance, risk, and compliance activities.

  27. Privacy violation: The accessing or sharing of information without permission.

  28. Production model: A machine learning model that has been launched into operation after being successfully trained and evaluated.

  29. Protected variable: A feature that may not be used as the basis for decisions, such as race, religion, national origin, gender, marital status, age or socioeconomic status.

  30. Recommender system: A system that selects, for each user, a relatively small set of items from a large corpus of possible options that are most likely to meet that user's requirements.
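
A minimal sketch of one common approach, scoring items by similarity to a user's preference vector, in plain NumPy; the random matrices and the cosine-similarity choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
item_features = rng.normal(size=(1000, 16))   # large corpus: 1,000 items
user_profile = rng.normal(size=16)            # one user's preference vector

# Cosine similarity between the user profile and every item.
scores = item_features @ user_profile
scores /= np.linalg.norm(item_features, axis=1) * np.linalg.norm(user_profile)

top_k = np.argsort(scores)[-5:][::-1]         # the 5 most promising items
print(top_k)
```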

  31. Regression: A type of model that outputs continuous values.

  32. Reinforcement Learning (RL): A family of algorithms that learn an optimal policy, whose goal is to maximise return when interacting with an environment.
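
The simplest member of this family is the multi-armed bandit, where the environment has a single state. A minimal epsilon-greedy sketch in plain Python; the reward probabilities are made up for illustration:

```python
import random

true_reward = [0.2, 0.5, 0.8]          # hidden payout rate of each action
estimates = [0.0, 0.0, 0.0]            # the agent's learned value estimates
counts = [0, 0, 0]
epsilon = 0.1                          # exploration rate (a hyperparameter)

random.seed(0)
for _ in range(5000):
    # Policy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: estimates[i])
    reward = 1.0 if random.random() < true_reward[a] else 0.0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]   # running average

print(estimates)   # approaches the true rates; action 2 is chosen most
```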

  33. Replication, see Explainability.

  34. Scope creep: The case where a model expands during development to incorporate more variables and/or data without securing consent for personal data to be used for that purpose, even if the organisation rightfully obtained the data in the first place.

  35. Supervised Learning (SL): Training a model from input data and its corresponding labels.

  36. Testing: A final, real-world check using a dataset unseen by the machine learning algorithm to confirm that the model was trained effectively.

  37. Tuning: A trial-and-error process in which some hyperparameters are changed and the algorithm is run on the data again. Performance on the validation set is then compared across runs to determine which set of hyperparameters results in the most accurate model.
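
A minimal sketch of this trial-and-error loop, assuming scikit-learn; the candidate values and the logistic-regression model are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_C, best_score = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:               # candidate hyperparameter values
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    score = model.score(X_val, y_val)          # accuracy on the validation set
    if score > best_score:
        best_C, best_score = C, score

print(best_C, best_score)
```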

  38. Unsupervised learning (USL): Training a model to find patterns in an unlabelled dataset.

  39. Validation: A process used to evaluate the quality of a model using a different subset or subsets of the data, other than the training data.

  40. X-AI (Explainable AI), see Explainability.
