FAIRsFAIR is actively working to produce recommendations on technologies that support semantic interoperability in a sustainable way, and on practices that support FAIRness.

Specifically, we aim to:

  • Improve the semantic interoperability of research resources by specifying FAIR metadata schemas, vocabularies, protocols, and ontologies 
  • Provide solutions for interoperability requirements and machine accessibility for FAIR-aligned repositories
  • Formulate guidelines and recommendations for FAIR-enabling services
  • Assess to what extent the FAIR principles can be applied to research software

On this page you can find summary information about our key outputs in each of these areas. Click on the links to access comprehensive news items, the associated reports, and further reading material.


Improve the semantic interoperability of research resources by specifying FAIR metadata schemas, vocabularies, protocols, and ontologies

Based on studies of public information, especially EOSC infrastructure efforts, and on limited surveying and interviews, documented in D2.1 Report on FAIR requirements for persistence and interoperability 2019, FAIRsFAIR published a second report, D2.4 Report on FAIR requirements for persistence and interoperability, written specifically for researchers, data stewards, and service providers as a guide to the use of PIDs, metadata, and semantic interoperability. The third report, D2.10 3rd Report on FAIR requirements for persistence and interoperability, zoomed in on six specific aspects of FAIR implementation that deserve particular attention.
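As a small, hedged illustration of how a PID can be used in a machine-actionable way (one of the topics of the guidance above), the Python sketch below resolves a DOI to citation metadata via HTTP content negotiation against doi.org. The DOI value is a placeholder, and the sketch is illustrative rather than taken from the reports.

```python
# Illustrative sketch: resolve a DOI (a PID) to machine-readable citation
# metadata via HTTP content negotiation. The DOI below is a placeholder.
import requests

doi = "10.1234/example-doi"  # hypothetical DOI used only for illustration
response = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
response.raise_for_status()

metadata = response.json()        # citation metadata as CSL JSON
print(metadata.get("title"))      # title of the identified object
print(metadata.get("publisher"))  # the registering publisher or repository
```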

The Recommendations for FAIR Semantics propose 17 recommendations, each related to one or more of the FAIR principles, along with 10 best practice recommendations to improve the global FAIRness of semantic artefacts.

Associated webinars:

Associated videos:

What are Persistent Identifiers and why to use them? (2:42 min)

The importance of semantic artefacts to support FAIR research (2:30 min)

Associated information sheets:


Provide solutions for interoperability requirements and machine accessibility for FAIR-aligned repositories

The report D2.3 Set of FAIR data repositories features provides guidelines to enable repositories not only to host FAIR digital objects but also to be FAIR themselves. The recommendations were collected at the workshop “Building the data landscape of the future: FAIR Semantics and FAIR Repositories”, which took place in Espoo, Finland, in October 2019.

The non-technical requirements tabled in the report relate to service-level and other agreements between users and repositories, or between communities and data providers. They include, among others:

  • Each repository to have a PID
  • Repositories to be listed in registries of repositories
  • Explicit data deletion policy to detail roles and responsibilities
  • Technical support to be provided for predefined file formats
  • Community standards and ontologies from public registries to be reused

The report provides a comprehensive list of technical features aimed at improving interoperability, grouped by category. The categories include:

  • Metadata for digital objects
  • Machine-readable and interpretable metadata about the repository itself
  • PID policies
  • Data object and file requirements

Additional technical requirements are that repositories should acquire a machine-readable license and provide a search interface that enables findability.
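As a hedged illustration of what machine-readable repository metadata and a machine-readable licence can look like in practice, the Python sketch below fetches a hypothetical repository landing page and extracts any schema.org metadata embedded as JSON-LD, including its license statement. The URL and the JSON-LD embedding are assumptions made for the example; repositories may expose such metadata through other routes.

```python
# Illustrative sketch: extract JSON-LD metadata (including a machine-readable
# licence) embedded in a repository landing page. The URL is hypothetical.
import json
import requests
from html.parser import HTMLParser


class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buffer = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
            self._buffer = []

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            text = "".join(self._buffer).strip()
            if text:
                self.blocks.append(json.loads(text))

    def handle_data(self, data):
        if self._in_jsonld:
            self._buffer.append(data)


url = "https://repository.example.org"  # hypothetical repository landing page
html = requests.get(url, timeout=30).text

extractor = JSONLDExtractor()
extractor.feed(html)

for block in extractor.blocks:
    # schema.org expresses licence statements via the "license" property
    print(block.get("@type"), block.get("license"))
```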

A FAIRsFAIR reference implementation of a FAIR Data Point was presented in D2.6 1st reference implementation of the data repositories features, and further testing and development were discussed in D2.9 2nd reference implementation of the data repositories features and client application.
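By way of illustration only (the reference implementation itself is documented in the deliverables above), the sketch below shows how a client might retrieve and query the RDF metadata that a FAIR Data Point exposes, assuming a hypothetical endpoint that serves Turtle.

```python
# Illustrative sketch: retrieve and query the RDF metadata exposed by a
# FAIR Data Point. The endpoint URL is hypothetical.
import requests
from rdflib import Graph

fdp_url = "https://fdp.example.org"  # hypothetical FAIR Data Point endpoint
response = requests.get(fdp_url, headers={"Accept": "text/turtle"}, timeout=30)
response.raise_for_status()

graph = Graph()
graph.parse(data=response.text, format="turtle")

# List resources that carry a dcterms:title, e.g. the catalogues advertised
# by the FAIR Data Point.
query = """
    SELECT ?resource ?title WHERE {
        ?resource <http://purl.org/dc/terms/title> ?title .
    }
"""
for resource, title in graph.query(query):
    print(resource, title)
```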

Associated webinar: FAIRification of Services – Two Examples


Formulate guidelines and recommendations for FAIR-enabling services

The Assessment report on FAIRness of services (D2.7) proposes a framework for assessing the FAIRness of services. Aimed at data service owners, the model contains concrete recommendations to improve different aspects of services. The report presents 50 recommendations on how services can optimally improve the FAIRness of the data they are used for. The recommendations are divided into seven aspects:

Technical & service provisioning aspects

  • SAF-F FAIR enablement 
  • SAF-Q Quality of service 
  • SAF-O Open & Connected

Socially-oriented aspects 

  • SAF-U User centricity
  • SAF-T Transparency
  • SAF-L Longevity
  • SAF-E Ethical & Legal


Associated report: M2.10 Report on basic framework on FAIRness of services

Associated webinar: FAIRification of Services: Two Examples

Associated workshop: FAIR Certification of Repositories and other Data Services



Assess to what extent the FAIR principles can be applied to research software

The FAIRsFAIR extra milestone dedicated to software as a research output, M2.15 Assessment report on 'FAIRness of software' (October 16, 2020), presents the state of the art of software in the scholarly ecosystem alongside 10 high-level recommendations for organisations seeking to define FAIR principles or other requirements for research software in the scholarly domain.

Associated webinar: FAIR + Software: decoding the principles 

Associated blog post: Decoding the FAIR principles: are they relevant to software? 



Identify potential solutions for metadata catalogue integration

The emergence of an ecosystem where FAIR data reuse becomes the norm depends upon researchers’ ability to search for and find suitable data held across multiple repositories. For this to happen, repository and aggregator service providers must reach agreement on common metadata catalogue standards to support interoperability. Through a series of workshops, metadata catalogue integration challenges were explored with representatives of domain-specific repositories, leading to D3.6 Proposal on integration of metadata catalogues to support cross-disciplinary FAIR uptake. During the subsequent pilot, the feasibility of the Data Catalogue Vocabulary (DCAT) v2 was assessed from both the domain-specific and aggregator perspectives. The results of the pilot are presented in D3.7 Report on integration of metadata catalogues. Key findings include:

  • There was a positive attitude to DCAT’s potential to support catalogue integration and no major technical obstacles were identified
  • The key challenge hindering uptake appears to be a current lack of demand for aggregator services to implement DCAT harvesting from the repositories they serve, combined with a lack of access to a central, documented collection of metadata mappings
  • The pilot suggests that DCAT and the Data Documentation Initiative Cross Domain Integration (DDI-CDI) should not be viewed as competing but rather as complementary standards, with DCAT addressing discoverability at the collection level and DDI-CDI addressing interoperability at the dataset level (a brief illustrative DCAT sketch follows below)
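To make the collection-level role of DCAT concrete, the sketch below builds a minimal DCAT description of a catalogue and one dataset, of the kind an aggregator could harvest. It is an illustration using assumed identifiers, not the pilot implementation.

```python
# Illustrative sketch: a minimal DCAT catalogue/dataset description built
# with rdflib. All identifiers below are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

catalog = URIRef("https://repository.example.org/catalog")    # hypothetical
dataset = URIRef("https://repository.example.org/dataset/1")  # hypothetical

# Collection-level description: the catalogue and the dataset it lists
g.add((catalog, RDF.type, DCAT.Catalog))
g.add((catalog, DCTERMS.title, Literal("Example repository catalogue", lang="en")))
g.add((catalog, DCAT.dataset, dataset))

# Dataset-level description with a machine-readable licence reference
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Example dataset", lang="en")))
g.add((dataset, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by/4.0/")))

print(g.serialize(format="turtle"))
```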

Further reading:
