Listed below are the fifteen minimum viable metrics proposed by FAIRsFAIR for the systematic assessment of FAIR data objects [1], detailed in the report FAIRsFAIR Data Object Assessment Metrics. These metrics build on the indicators proposed by the RDA FAIR Data Maturity Model Working Group, on the WDS/RDA Assessment of Data Fitness for Use checklist, and on prior work conducted by project partners such as FAIRdat and FAIREnough.

To be inclusive of current data practices, we will refine and revise the metrics through several iterations and through two ongoing use cases on FAIR data assessment. As you read and comment on these metrics, please bear in mind the following:

  • In the FAIR ecosystem, FAIR assessment must go beyond the object itself. FAIR-enabling services and repositories are vital to ensure that research data objects remain FAIR over time.
  • Automated testing depends on clear, machine-assessable criteria. Some qualities specified in the FAIR principles ('rich', 'plurality', 'accurate', 'relevant') still require human mediation and interpretation.
  • Until domain- or community-driven criteria such as schemas and usage elements have been agreed, the tests must focus on generally applicable data/metadata characteristics.

The metrics are presented in brief below and we welcome your suggestions and remarks before 30 September 2020.

Please comment in the space provided citing in the subject line the Metric Identifier No. you are referring to (e.g. FsF-R1.3-01M).

Alternatively, you can read and comment directly on the full report here. Your valuable feedback will be used to refine and improve the metrics.

[1] While FAIR principles may apply to any digital object, we are concerned with a subset of digital objects: research data that are collected, measured, or created for the purposes of scientific analysis.

FsF-F1-01D - Data is assigned a globally unique identifier


A data object may be assigned a globally unique identifier such that it can be referenced unambiguously by humans or machines. Globally unique means that an identifier is associated with only one resource at any time. Examples of unique identifiers of data are Internationalized Resource Identifiers (IRI), Uniform Resource Identifiers (URI) such as URLs and URNs, Digital Object Identifiers (DOI), Handle System handles, and Archival Resource Keys (ARK). A data repository may assign a globally unique identifier to your data or metadata when you publish them and make them available through its curation service.
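To make the identifier schemes above concrete, the sketch below guesses the scheme of an identifier from its syntax. The patterns and the example identifiers are illustrative assumptions only; real validators are considerably stricter.

```python
import re
from typing import Optional

# Illustrative (not exhaustive) patterns for common globally unique
# identifier schemes; production validators are much stricter.
PATTERNS = {
    "doi": re.compile(r"^10\.\d{4,9}/\S+$"),
    "ark": re.compile(r"^ark:/?\d{5,9}/\S+$"),
    "urn": re.compile(r"^urn:[a-z0-9][a-z0-9-]{0,31}:\S+$", re.IGNORECASE),
}

def identifier_scheme(identifier: str) -> Optional[str]:
    """Return the first scheme whose pattern matches, or None."""
    for scheme, pattern in PATTERNS.items():
        if pattern.match(identifier):
            return scheme
    return None

print(identifier_scheme("10.1234/example"))        # doi (hypothetical DOI)
print(identifier_scheme("ark:/13030/tf5p30086k"))  # ark
```

A syntactic match only shows that a string looks like an identifier of a given scheme; uniqueness itself is guaranteed by the registration agency behind the scheme, not by the syntax.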

FsF-F1-02D - Data is assigned a persistent identifier


In this specification, we make a distinction between the uniqueness and the persistence of an identifier. An HTTP URL (the address of a given unique resource on the web) is globally unique, but it may not be persistent: the URL of the data may become inaccessible (the link rot problem), or the data available under the original URL may change (the content drift problem). Identifiers based on, e.g., the Handle System, DOI, or ARK are both globally unique and persistent. They are maintained and governed such that they remain stable and resolvable for the long term. The persistent identifier (PID) of a data object may resolve (point) to a landing page with metadata containing further information on how to access the data content, in some cases a downloadable artefact, or nothing if the data or the repository is no longer maintained. Ensuring persistence is therefore a shared responsibility between a PID service provider (e.g., DataCite) and its clients (e.g., data repositories). For example, the DOI system guarantees the persistence of its identifiers through its social (e.g., policy) and technical infrastructures, whereas a data provider ensures the availability of the resource (e.g., landing page, downloadable artefact) associated with the identifier.
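The resolution step described above can be sketched as follows: the PID is appended to a resolver base URL, and the resolver redirects to the current landing page. The resolver mapping and the DOI below are illustrative assumptions (ARK resolution, for instance, often goes through N2T).

```python
# Minimal sketch of PID resolution: identifier appended to a resolver base
# URL, which redirects to the current landing page. Mapping is illustrative.
RESOLVERS = {
    "doi": "https://doi.org/",
    "handle": "https://hdl.handle.net/",
    "ark": "https://n2t.net/",
}

def resolver_url(scheme: str, identifier: str) -> str:
    """Build the HTTP URL at which a PID can be dereferenced."""
    return RESOLVERS[scheme] + identifier

print(resolver_url("doi", "10.1234/example"))
# https://doi.org/10.1234/example
```

Because the resolver indirection is stable while the landing page may move, the PID stays citable even when the data provider reorganizes its website.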

FsF-F2-01M - Metadata includes descriptive core elements (creator, title, data identifier, publisher, publication date, summary and keywords) to support data findability


Metadata is descriptive information about a data object. Since the metadata required differs depending on the users and their applications, this metric focuses on core metadata: the minimum descriptive information required to enable data discovery, including the elements needed for data citation. We determine the required metadata based on common data citation guidelines (e.g., DataCite, ESIP, and IASSIST) and metadata recommendations for data discovery (e.g., EOSC Datasets Minimum Information (EDMI), the DataCite Metadata Schema, the W3C Data on the Web Best Practices Recommendation, and the Data Catalog Vocabulary).

This metric focuses on domain-agnostic core metadata. Domain or discipline-specific metadata specifications are covered under metric FsF-R1.3-01M. A repository should adopt a schema that includes properties of core metadata, whereas data authors should take the responsibility of providing core metadata.
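A completeness check against the seven core elements named in this metric might look like the sketch below. The field names are paraphrased for illustration and do not follow any particular metadata schema.

```python
# Illustrative completeness check for the seven core descriptive elements;
# field names are assumptions, not from an official schema.
CORE_ELEMENTS = {"creator", "title", "identifier", "publisher",
                 "publication_date", "summary", "keywords"}

def missing_core_elements(metadata: dict) -> set:
    """Return the core elements that are absent or empty in a record."""
    return {e for e in CORE_ELEMENTS if not metadata.get(e)}

record = {"creator": "Doe, J.", "title": "Example dataset",
          "identifier": "10.1234/example", "publisher": "Example Repository",
          "publication_date": "2020", "summary": "An example record.",
          "keywords": ["FAIR"]}
print(missing_core_elements(record))          # set() -> record is complete
print(missing_core_elements({"title": "x"}))  # six elements still missing
```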

FsF-F3-01M - Metadata includes the identifier of the data it describes


The metadata should explicitly specify the identifier of the data such that users can discover and access the data through the metadata. If the identifier specified is persistent and points to a landing page, the data identifier and the links to download the data content from that page should be taken into account in the assessment.

FsF-F4-01M - Metadata is offered in such a way that it can be retrieved by machines


This metric refers to the ways in which the metadata of a data object is exposed or provided in a standard, machine-readable format. Assessing this metric will require an understanding of the capabilities offered by the data repository used to host the data. Metadata may be available through multiple endpoints. For example, if data is hosted by a repository, the repository may disseminate its metadata through a metadata harvesting protocol (e.g., via OAI-PMH) and/or a web service. Metadata may also be embedded as structured data on a data page for use by web search engines such as Google and Bing, or be available as linked (open) data.
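One of the dissemination routes mentioned above, schema.org structured data embedded in a landing page, can be harvested with a few lines of standard-library Python. The HTML snippet below is a made-up example.

```python
import json
from html.parser import HTMLParser

# Sketch: extract schema.org JSON-LD blocks embedded in a landing page.
class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.buffer = ""
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True

    def handle_data(self, data):
        if self.in_jsonld:
            self.buffer += data

    def handle_endtag(self, tag):
        if tag == "script" and self.in_jsonld:
            self.blocks.append(json.loads(self.buffer))
            self.buffer = ""
            self.in_jsonld = False

html = """<html><head><script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset", "name": "Example"}
</script></head></html>"""

extractor = JSONLDExtractor()
extractor.feed(html)
print(extractor.blocks[0]["@type"])  # Dataset
```

An assessment tool would fetch the landing page over HTTP first; that step is omitted here to keep the sketch self-contained.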

FsF-A1-01M - Metadata contains access level and access conditions of the data


This metric determines whether the metadata includes the level of access to the data (public, embargoed, restricted, or metadata-only) and the corresponding access conditions. Both the access level and the conditions are necessary for users to potentially gain access to the data. It is recommended that data be as open as possible and as closed as necessary.

  • There are no access conditions for public data. Datasets should be released into the public domain (e.g., with an appropriate public-domain-equivalent license such as the Creative Commons CC0 license) and be openly accessible without restrictions when possible.
  • Embargoed access refers to data that will be made publicly accessible at a specific date. For example, a data author may release their data only after publishing their findings from it. Access conditions such as the date on which the data will be released publicly are therefore essential and should be specified in the metadata.
  • Restricted access refers to data that can be accessed under certain conditions (e.g., for commercial, sensitive, or other confidentiality reasons, or because the data is only accessible via a subscription or a fee). Restricted data may be available to a particular group of users or after permission is granted. For restricted data, the metadata should include the conditions of access, such as a point of contact or instructions for accessing the data.
  • Metadata-only access refers to data that is not made publicly available and for which only metadata is publicly available.
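The four access levels above can be mapped to the extra conditions each one requires in the metadata, as in the sketch below. The field names are illustrative assumptions and are not taken from any specific metadata schema.

```python
# Illustrative mapping of access levels to the extra metadata each needs;
# field names are assumptions, not from a specific schema.
REQUIRED_CONDITIONS = {
    "public": set(),
    "embargoed": {"embargo_end_date"},
    "restricted": {"access_instructions"},
    "metadata_only": set(),
}

def missing_access_info(metadata: dict) -> set:
    """Return the access fields a record still needs for this metric."""
    level = metadata.get("access_level")
    if level not in REQUIRED_CONDITIONS:
        return {"access_level"}
    return {f for f in REQUIRED_CONDITIONS[level] if not metadata.get(f)}

print(missing_access_info({"access_level": "embargoed"}))
# {'embargo_end_date'}
print(missing_access_info({"access_level": "public"}))
# set()
```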

FsF-A2-01M - Metadata remains available, even if the data is no longer available


This metric determines if the metadata will be preserved even when the data it describes is no longer available, replaced, or lost. Similar to metric FsF-F4-01M, assessing this metric requires an understanding of the capabilities offered and the data preservation plans and policies implemented by the data repository and data services (e.g., the DataCite PID service). Continued access to metadata depends on a data repository’s preservation practice, which is usually documented in the repository’s service policies or statements. A trustworthy data repository offering DOIs and implementing a PID policy should guarantee that metadata remains accessible even when the data is no longer available for any reason (e.g., by providing a tombstone page).

FsF-I1-01M - Metadata is represented using a formal knowledge representation language


Knowledge representation is vital for the machine processing of domain knowledge. Expressing the metadata of a data object in a formal knowledge representation language enables machines to process it in a meaningful way and opens up more data exchange possibilities. Examples of knowledge representation languages are RDF, RDFS, and OWL. These languages may be serialized (written) in different formats; for instance, RDF/XML, RDFa, Notation3, Turtle, N-Triples, N-Quads, and JSON-LD are RDF serialization formats.
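To illustrate that the serializations listed above express the same underlying statements, the sketch below writes one metadata triple both as JSON-LD and in Turtle. The DOI is hypothetical, and the JSON-LD is built as a plain dictionary rather than with an RDF library.

```python
import json

# One RDF statement ("this dataset has this title") in two serializations.
# The DOI is hypothetical.
jsonld = {
    "@context": {"dct": "http://purl.org/dc/terms/"},
    "@id": "https://doi.org/10.1234/example",
    "dct:title": "Example dataset",
}
print(json.dumps(jsonld, indent=2))

# The equivalent statement in Turtle (here in N-Triples-compatible form):
turtle = ('<https://doi.org/10.1234/example> '
          '<http://purl.org/dc/terms/title> "Example dataset" .')
print(turtle)
```

An RDF-aware parser would load either form into the same graph; the choice of serialization is a matter of tooling and context (JSON-LD for web embedding, Turtle for human readability, N-Triples for streaming).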

FsF-I1-02M - Metadata uses semantic resources


A metadata document or selected parts of the document may incorporate additional terms from semantic resources (also referred to as semantic artefacts) that unambiguously describe the contents so they can be processed automatically by machines. This metadata enrichment may facilitate enhanced data search and interoperability of data from different sources. Ontologies, thesauri, and taxonomies are kinds of semantic resources, and they come with varying degrees of expressiveness and computational complexity. Knowledge organization schemes such as thesauri and taxonomies are semantically less formal than ontologies.

FsF-I3-01M - Metadata includes links between the data and its related entities


Linking data to its related entities will increase its potential for reuse. The linking information should be captured as part of the metadata. A dataset may be linked to its prior version, related datasets, or related resources (e.g., publication, physical sample, funder, repository, platform, site, or observing network registries). Links between data and its related entities should be expressed through relation types (e.g., the DataCite Metadata Schema specifies relation types between research objects through the fields ‘RelatedIdentifier’ and ‘RelationType’) and should preferably use persistent identifiers for related entities (e.g., ORCID for contributors, DOI for publications, and ROR for institutions).
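Typed links in the style of the DataCite ‘RelatedIdentifier’ / ‘RelationType’ fields named above can be represented and queried as in the sketch below. All identifiers are hypothetical.

```python
# Sketch of typed links between a dataset and related entities, in the
# style of DataCite relatedIdentifiers. All identifiers are hypothetical.
related = [
    {"relatedIdentifier": "10.1234/paper", "relatedIdentifierType": "DOI",
     "relationType": "IsSupplementTo"},
    {"relatedIdentifier": "10.1234/example.v1", "relatedIdentifierType": "DOI",
     "relationType": "IsNewVersionOf"},
]

def relation_targets(links: list, relation_type: str) -> list:
    """Collect identifiers connected through a given relation type."""
    return [link["relatedIdentifier"] for link in links
            if link["relationType"] == relation_type]

print(relation_targets(related, "IsSupplementTo"))
# ['10.1234/paper']
```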

FsF-R1-01MD - Metadata specifies the content of the data


This metric evaluates whether the content of the dataset is specified in the metadata; the description should be an accurate reflection of the actual data deposited. Examples of properties specifying data content are: resource type (e.g., data or a collection of data), variable(s) measured or observed, method, data format, and size. Ideally, ontological vocabularies should be used to describe data content (e.g., variables) to support interdisciplinary reuse.

FsF-R1.1-01M - Metadata includes license information under which data can be reused


This metric evaluates if data is associated with a license, because without one users cannot reuse it in a clear legal context. We encourage the application of licenses to all kinds of data, whether public, restricted, or for specific users. Without an explicit license, users do not have a clear idea of what can be done with the data. Licenses can be standard (e.g., Creative Commons, Open Data Commons Open Database License) or bespoke; rights statements can likewise indicate the conditions under which data can be reused.

It is highly recommended to use a standard, machine-readable license such that it can be interpreted by machines and humans. In order to inform users about what rights they have to use a dataset, the license information should be specified as part of the dataset’s metadata.
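A machine-readable license check of the kind this metric describes can be sketched as a lookup of the license URL against well-known identifiers. The lookup table below is deliberately tiny and illustrative; a real assessment tool would consult the full SPDX license list.

```python
# Illustrative license lookup; a real tool would use the full SPDX list.
KNOWN_LICENSES = {
    "https://creativecommons.org/publicdomain/zero/1.0/": "CC0-1.0",
    "https://creativecommons.org/licenses/by/4.0/": "CC-BY-4.0",
    "https://opendatacommons.org/licenses/odbl/1-0/": "ODbL-1.0",
}

def standard_license_id(license_url: str):
    """Return an SPDX-style identifier if the URL is recognised, else None."""
    # Normalize the optional trailing slash before the lookup.
    return KNOWN_LICENSES.get(license_url.rstrip("/") + "/")

print(standard_license_id("https://creativecommons.org/licenses/by/4.0"))
# CC-BY-4.0
```

A recognised identifier lets downstream tools reason about reuse conditions automatically; a bespoke license can only be shown to a human reader.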

FsF-R1.2-01M - Metadata includes provenance information about data creation or generation


Data provenance (also known as lineage) represents a dataset’s history, including the people, entities, and processes involved in its creation, management, and longer-term curation. It is essential that data producers provide provenance information about the data to enable informed use and reuse. The level of provenance information needed can vary depending on the data type (e.g., measurement, observation, derived data, or data product) and the research domain. For that reason, it is difficult to define a finite set of provenance properties that is adequate for all domains. Based on existing work, we suggest that at least the following provenance properties of data generation or collection are included in the metadata record.

  • Sources of the data, e.g., the datasets it is derived from and the instruments used
  • Data creation or collection date 
  • Contributors involved in data creation and their roles 
  • Data publication, modification and versioning information


There are various ways through which provenance information may be included in a metadata record. Some provenance properties (e.g., instrument, contributor) may be best represented using PIDs (such as DOIs for data and ORCID iDs for researchers). This way, humans and systems can retrieve more information about each property by resolving its PID. Alternatively, provenance information can be given in a linked provenance record expressed explicitly in, e.g., PROV-O, PAV, or the Vocabulary of Interlinked Datasets (VoID).
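A minimal provenance record covering the four bullet points above could be expressed with PROV-O terms in JSON-LD, as in the sketch below. All identifiers (DOIs, the ORCID iD) are hypothetical placeholders.

```python
import json

# Minimal PROV-O-style provenance record in JSON-LD; identifiers are
# hypothetical placeholders.
provenance = {
    "@context": {"prov": "http://www.w3.org/ns/prov#"},
    "@id": "https://doi.org/10.1234/example",
    # source the data was derived from
    "prov:wasDerivedFrom": {"@id": "https://doi.org/10.1234/source"},
    # creation/collection date
    "prov:generatedAtTime": "2020-06-01",
    # contributor involved in data creation
    "prov:wasAttributedTo": {"@id": "https://orcid.org/0000-0000-0000-0000"},
    # versioning information
    "prov:wasRevisionOf": {"@id": "https://doi.org/10.1234/example.v1"},
}
print(json.dumps(provenance, indent=2))
```

Because each related entity is itself a resolvable PID, a consumer can walk the provenance chain from the dataset back to its sources and contributors.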


FsF-R1.3-01M - Metadata follows a standard recommended by the target research community of the data 


In addition to core metadata required to support data discovery (covered under metric FsF-F2-01M), metadata to support data reusability should be made available following community-endorsed metadata standards. Some communities have well-established metadata standards (e.g., geospatial: ISO19115; biodiversity: DarwinCore, ABCD, EML; social science: DDI; astronomy: International Virtual Observatory Alliance Technical Specifications) while others have limited standards or standards that are under development (e.g., engineering and linguistics). The use of community-endorsed metadata standards is usually encouraged and supported by domain and discipline-specific repositories. 

FsF-R1.3-02D - Data is available in a file format recommended by the target research community


File formats are methods for encoding digital information: for example, CSV for tabular data, NetCDF for multidimensional data, and GeoTIFF for raster imagery. Data should be made available in a file format that is backed by the research community to enable data sharing and reuse. Consider, for example, file formats that are widely used and supported by the most commonly used software and tools. These formats should also be suitable for long-term storage and archiving, and they are usually recommended by data repositories. Community-endorsed formats not only give a higher certainty that your data can be read in the future, but they also increase its reusability and interoperability: they enable data to be loaded directly into the software and tools used for analysis, make it easy to integrate your data with other data in the same format, and ease migration to a newer format should an older one become obsolete.
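An automated check for this metric might map file extensions to whether the format is commonly recommended for long-term archiving, as sketched below. The table is an illustrative assumption; actual recommendations vary by repository and community.

```python
# Illustrative extension-to-recommendation table; actual recommendations
# vary by repository and research community.
RECOMMENDED = {".csv": True, ".nc": True, ".tif": True,
               ".xlsx": False, ".mat": False}

def is_recommended(filename: str):
    """True/False for known extensions, None for unknown ones."""
    if "." not in filename:
        return None
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    return RECOMMENDED.get(ext)

print(is_recommended("measurements.CSV"))  # True
print(is_recommended("model.xlsx"))        # False
```

An extension check is only a heuristic; a thorough assessment would inspect the file signature and, ideally, validate it against the format specification.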





Submitted by EvgenyB on Thu, 06/08/2020 - 13:48

Dear FAIRsFAIR team

in some instances the metrics unpack the FAIR criteria further to reach a dichotomous decision, which I support. Here, however, as in some other cases, this is not done. I would recommend unpacking this metric further, or else I would not know how to apply it. One suggestion: (i) standard or bespoke license and (ii) machine-readability of the license. In some instances I also see license information mentioned once for the whole repository rather than for every dataset. This is not good practice, and it is often unclear whether the license relates to the datasets or to the database as such, but this could potentially be another metric ahead of the other two.



Submitted by anusuriya on Thu, 06/08/2020 - 16:57

Hi Evgeny,

Thank you for your feedback on the data license. I agree with you on the poor practice of assigning a license to the data catalog instead of to the datasets.

The metric focuses on the license assessment at the data object level. In our assessment tool, we have implemented two tests in relation to the license metric. The first test checks if the metadata of a dataset contains license information. The second test checks if it is a standard license (e.g., through the SPDX registry). The next test is (as you suggested) to verify if the license is represented in a way machines can understand, e.g., CC REL or ODRL. I will incorporate your suggestion in the metric specification. Thanks again for your input.

