8th MIWP-5 sub-group meeting

Wednesday, 16 March 2016, 10:00-12:00 CET

Connection details

Recording: MP4 (Adobe Connect player)


[10:00-10:30] Planning of the comment resolution process (until 8 April)

  • Assignment of reviewers
  • Tools + methods
  • Comment resolution telecon (week of 11-15 April)

[10:30-11:30] Discussion of critical cross-cutting comments

[11:30-12:00] Design report comments

  • Discuss any comments
  • Inform about the next steps in the implementation of the validator



Participants

Sven Böhme, Markus Seifert (DE), Paul Hasenohr, Christian Ansorge (EEA), Tim Duffy, Peter Parslow (UK), Carlo Cipolloni, Antonio Rotundo, Stefania Morrone (IT), Alejandra Sanchez (ES), Paul van Genuchten (NL), Iurie Maxim (RO), Daniel Cocanu (RO), Jon Herrmann (interactive instruments/ARE3NA), Emidio Stani (PwC/ARE3NA), Vlado Cetl, Michael Lutz (JRC)

Planning of the comment resolution process

  • 350+ comments received during the MS consultation of the ATS (#2685)
    • around 100 from MS and 250+ from JRC and the ARE3NA contractors (PwC/ii) developing the INSPIRE testing framework
  • Comment resolution process
    • Issues that need further discussion will be addressed in a comment resolution web-conference in the week of 11-15 April
    • Github will be used for the comment resolution process
      • [Action] PwC/ii to provide detailed guidance on how to use Github for the comment resolution
    • MIWP-5 and JRC to propose resolutions for PwC/ii comments
      • Metadata: Paul van Genuchten, Antonio Rotundo, Alejandra Sanchez 
        • [Action] Michael to check with MIWP-8 sub-group for further volunteers (if necessary)
      • Discovery service:  Peter Parslow
      • WMS: Tim Duffy
      • WFS: Tim Duffy, Thijs Brentjens
      • Atom: Thijs Brentjens, Peter Parslow
      • JRC: ...
    • PwC/ii to propose resolutions for MS and JRC comments
  • Timetable
    • Resolution of comments using the Github issue tracker (PwC/ii, JRC & MIWP-5 members): 08/04/2016
    • Comment resolution web-conference: week of 11-15/04/2016

Discussion of critical cross-cutting comments

ATS review procedures


From AT's point of view, a purely theoretical revision of the ATSs makes little or no sense. Checking whether the formulation of an ATS is precise enough to minimise the degrees of freedom in the implementation should be done while implementing the ATS; relevant shortcomings cannot be detected in a theoretical way.

The revision process should be coordinated so that not the whole INSPIRE community has to review every single ATS. To keep the review time-efficient, the work should be split within the community.

For the reasons mentioned above, the Austrian experts have not contributed to the ATS revision process so far. However, they are willing to do related work during the implementation of their national validation framework. Austria started this work at the beginning of this year; a first prototype should be available by mid-summer.

Austria can offer the implementation and revision of some ATSs where shortcomings are identified during the common revision process.


  • One of the main arguments behind setting up the action was to agree (on an abstract level) on the tests to be executed.
  • The large number of comments from PwC/ii already reflects a review from an implementation perspective.
  • It's important to release early and often during the development of the validator, in order to allow stakeholders to test the system and provide comments on this basis (rather than abstract specifications).
  • Some comments will still only come in once the validator has been officially released. We have to put a procedure in place that also allows comments and change proposals at a later stage.

Testing conformance against third party specifications


(Excel rows 5, 18, 53, 65, 68, 69, 332, etc.)

If a third-party conformance class is referenced, an external ATS and/or ETS should be referenced (not just a specification), or the ATS for that specification would need to be specified by INSPIRE, too, if a test is considered necessary. In the case of OGC standards, such tests should be provided as part of the OGC CITE tests and not as part of the INSPIRE Test Framework.


  • It was agreed that the reference should be as precise as possible.
  • It was suggested to rephrase the recommendation as follows: "If third-party specifications are referenced, the ATS defined therein (or the relevant part of it) as well as any available ETS should be referenced. Only tests (if any) additional to those already provided by the third-party ATS should be addressed/tested by INSPIRE. Fulfilment of requirements related to OGC standards should be tested by means of the freely available OGC CITE testing framework, appropriately referenced (e.g. by means of REST APIs) by the INSPIRE Test Framework."
  • It is advisable to start a cooperation with third parties so that INSPIRE-specific needs related to third-party requirements (and the implementation of relevant tests) can be discussed and any issues resolved, as was done e.g. in the case of the eENVplus Validation Service customising the OGC GML test suite.
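As a sketch of what referencing CITE tests "by means of REST APIs" could look like: TEAM Engine (the engine behind the OGC CITE tests) exposes test-suite runs over HTTP, so an INSPIRE test report could simply record the invocation URL. The host, suite name and endpoint layout below are placeholders chosen for illustration, not a confirmed TEAM Engine contract:

```python
from urllib.parse import urlencode

def cite_run_url(base_url: str, suite: str, params: dict) -> str:
    """Build a URL for a remote CITE test run (hypothetical endpoint layout)."""
    return f"{base_url}/rest/suites/{suite}/run?{urlencode(params)}"

# Example: point a (hypothetical) WFS 2.0 suite at a service under test.
url = cite_run_url(
    "https://cite.example.org/teamengine",  # placeholder TEAM Engine instance
    "wfs20",                                # placeholder suite identifier
    {"iut": "https://example.org/wfs?service=WFS&request=GetCapabilities"},
)
print(url)
```

Recording such a URL in the INSPIRE test report would make the third-party test run reproducible without duplicating the test logic inside the INSPIRE framework.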



(Excel rows 6, 8, 10, 11, and specific comments on test cases)

Consistent terminology, structure, dependencies and content are necessary, but are currently not used.


  • the proposed more precise terminology should be used;
  • dependencies should include only other tests or conformance classes;
  • a distinction should be made between dependencies and conditional tests; the latter should be either expressed through a separate condition specifying when the test has to be executed (e.g. "when there is more than one language") or (if there are not so many cases of conditional tests) as part of the test description.

Scope of the ATSs (automated vs manual vs not testable)


(Excel row 12)


  • Manual tests should be included in the ATSs.
    • According to the ISO 19105 definition, an abstract test case (and therefore the ATS, which is a set of abstract test cases) is independent of the implementation. Therefore, from an ATS point of view, it does not matter if a test can be automated or not.
    • More specifically, ISO 19105 foresees both automated and manual tests (Clause 7: test methods). "Manual testing may be required when automated testing is too complex and/or human judgement is required".
  • Manual tests could appear as "skipped" tests in the ETS test report, with additional instructions on how to conduct the test manually
  • The distinction between manual vs. not testable was not discussed
    • The meaning of "not testable test" is unclear
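The "skipped with instructions" idea could look like the following in an ETS implementation; this is a sketch in Python's unittest (not the actual INSPIRE test framework), where the skip reason carries the instructions for the human tester:

```python
import unittest

class MetadataManualTests(unittest.TestCase):
    """Manual checks surface as 'skipped' results carrying tester instructions."""

    @unittest.skip("MANUAL TEST: read the metadata abstract and judge whether it "
                   "describes the dataset in a way a non-expert can understand.")
    def test_abstract_is_understandable(self):
        pass  # human judgement required; cannot be automated

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MetadataManualTests)
result = unittest.TestResult()
suite.run(result)
# The report lists the test as skipped, with the instructions as the reason.
print(result.skipped[0][1])
```

This keeps manual tests visible in every test report without ever marking them as failures.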

Scope of the review


The review call did not request comments on the sds or download-qos ATSs, but comments on them were received.

Discussion: The comments will be considered in the resolution process, but with lower priority.

Design report comments

Comment (from Ilkka):

I'm liking the document structure and diagrams. However, I would like to see sequence diagrams about the most important use cases as early in the design phase as possible.

Jon confirmed that PwC/ii are working on the addition of sequence diagrams in the report in order to describe the behaviour of the testing framework.