The rise of Artificial Intelligence has put data and data quality at the core of digitalisation. At the same time, there is a need to better understand what data quality means and how to ensure it. Our first attempt to open this discussion was made at the Data Space Symposium in Darmstadt in March 2024. This report builds on that slideshow by giving further examples of how data quality is defined and put into use, while highlighting the contextual properties of data quality and the need for human judgement. We do this from a policy perspective, i.e. grounding our analysis in regulations and standards.

Our analysis starts with the legal reasoning on data quality found in the AI Act and the European Health Data Space regulation. Our ambition is not to be exhaustive: there are more EU regulations and directives to consider in relation to data quality than the ones we cover here, such as the directive on Copyright in the Digital Single Market, which introduces the concept of text and data mining; the Data and Data Governance Acts, which enable standardised formats for making data interoperable across services; and the Digital Services and Digital Markets Acts, which define responsibilities for making data and information available. The same goes for standards on data quality, which span specific domains and disciplines.

Returning to our ambition, we believe it is important to ask to what extent data quality can be automatically assessed, as this ambition is floated at various events and fora within the data community. While we think this can be achieved in specific and narrow contexts, we argue that data quality is a topic that still requires judgement and the competence to make assessments on a case-by-case basis.