INSPIRE Workshop on validation and conformity testing, 15-16 May 2014

15-16 May 2014, Ispra (Italy)
Building 26a, Leonardo & Michelangelo meeting rooms

Agenda

Thursday, 15 May 2014

09:00 - 09:15 Welcome and approval of the agenda

09:15 - 09:30 MIWP-5 overview (Daniela Hogrebe)

09:30 - 10:30 Discussion on scope of validation and testing

  • definition of terms (Daniela Hogrebe)
  • a short reminder on previous discussions in Istanbul, 2012 (Marc Leobet)
  • overview of the status of data specifications, abstract test suites (Robert Tomas)
  • eENVplus (Giacomo Martirano)
  • data testing (Markus Seifert)

10:45 - 11:15 COFFEE BREAK

11:15 - 12:15 Discussion on scope of validation and testing

  • relation to MIWP-16 (Paul Hasenohr)
  • agreement on scope
  • agreement on project plan (Daniela Hogrebe)

12:15 - 13:30 LUNCH (Mensa)

13:30 - 15:00 Discussion on requirements for a common validator

  • break-out-groups: discussion on use cases

15:00 - 15:30 COFFEE BREAK

15:30 - 16:45 Discussion on requirements for a common validator

  • requirements based on use cases
  • further requirements (non-functional)

16:45 - 17:00 Potential support from ARE3NA (Robin Smith)

17:00 - 17:30 Conclusions and wrap-up first day

20:00 SOCIAL DINNER

Friday, 16 May 2014

09:00 - 09:15 Welcome, approval of the agenda and wrap-up first day

09:15 - 10:15 Presentation of existing solutions for validation and testing (metadata and services)

  • INSPIRE Geoportal validator (JRC, Sebastian Goerke)
  • OGC TEAM Engine (Sebastian Goerke)
  • Testsuite (Michael Schulz)

10:15 - 10:45 COFFEE BREAK

10:45 - 12:15 Presentation of existing solutions for validation and testing (metadata and services)

  • French validation for metadata: a State experience for monitoring (Etienne Taffoureau)
  • Polish metadata validation tool (Marcin Grudzien)
  • Spain-SDI validation service and metadata (Alejandra Sánchez Maganto)
  • ESDIN Test Framework (Thijs Brentjens)
  • Spatineo validator (Ilkka Rinne)
  • Neogeo-online (Guillaume Sueur)

12:15 - 13:30 LUNCH (Mensa)

13:30 - 14:30 Wrap-up metadata and service testing

14:30 - 15:00 COFFEE BREAK

15:00 - 16:00 Wrap-up workshop and next steps

DRAFT Minutes

Preface

The minutes are based on the notes taken by the scribes Michael Schulz, Michael Lutz and Daniela Hogrebe.

Where "+1" is used in the minutes, this is to indicate support for the position of the previous speaker.

Actions

Actions are indicated in the minutes using the keyword [Action] and are summarised in the table below.

No. Action Redmine issue Responsible Due Done
1 .. .. .. dd/mm/yyyy
2 Check Read (/Write) access to Wiki/Documents for workshop members .. JRC 02/06/2014
3 Draft a template for a requirements analysis based on use cases .. Giacomo, Michael S. 30/06/2014
4 Document the differences between French validator and MD TG requirements .. Etienne, Angelo 30/06/2014
5 Clarify with Luis Bermudez (OGC) OGC certification for "INSPIRE profiles" .. Sebastian 30/06/2014
6 Discuss with OGC (Athina, Luis, ...) cooperation issues .. Sebastian, Michael L. 30/06/2014
7 Review TG for metadata and classify the requirements based on template of data specs .. Paul, Etienne, Antonio, Michael O., Alejandra 31/07/2014
8 Review TG for services and classify the requirements based on template of data specs .. Angelo?, Michael L., Michael S., Thijs?, Ilkka?, Etienne? 31/07/2014
9 From ATS to ETS: explore ATS on how the tests could be implemented .. Angelo, Giacomo, Robert, Carlo, Markus, Darja 31/07/2014
10 Prepare an overview table with all existing validator solutions according to specific characteristics .. Daniela 13/06/2014
11 Investigate whether MIWP-5 can have a slot for a short meeting at the INSPIRE conference .. Michael L. 02/06/2014
12 Update project plan according the outcomes of the workshop and make it available to the participants .. Daniela 06/06/2014

Thursday, 15 May 2014

Participant list: Thijs Brentjens, Carlo Cipolloni, Tim Duffy, Sebastian Goerke, Marcin Grudzien, Paul Hasenohr, Daniela Hogrebe, Tomas Kliment, Marc Leobet, Darja Lihteneger, Michael Lutz, Giacomo Martirano, Vanda Nunes de Lima, Michael Östling, Peter Parslow, Angelo Quaglia, Ilkka Rinne, Antonio Rotundo, Alejandra Sánchez Maganto, Michael Schulz, Markus Seifert, Robin Smith, Guillaume Sueur, Etienne Taffoureau, Robert Tomas

Welcome and approval of the agenda

Daniela welcomed all participants to the initial workshop of MIWP-5. Everyone introduced themselves and their affiliation in a short tour de table.
The draft agenda proposed in the wiki was accepted.

  • Ilkka: Proposed project plan is only accessible to members of the Redmine MIWP-5 group.
    • [Action] Check Read (/Write) access to Wiki/Documents for workshop members
      (MS: That was requested at another point, but applies also to this remark)

MIWP-5 overview

Daniela presented a short overview of the objectives of MIWP-5, whose main target is to develop a commonly agreed European validator for INSPIRE metadata, services (incl. QoS criteria) and datasets. A major driver is the great demand for such validation, especially since existing solutions yield differing results.

[Slides: 20140515_MIWP-5_overview.pdf]

Discussion on scope of validation and testing

Daniela proceeded with the first main topic on the agenda, the discussion of the definition of the scope of the working group / MIWP-5.

Definition of terms
Daniela gave an introduction to the various definitions of the terms most commonly used in the context of this working group. Different organisations/stakeholders use similar terms for different things. The most important terms are:
  • conformance / conformity
  • compliance
  • validation
  • (conformance / interoperability) testing

[Slides: 20140515_MIWP-5_definition-of-terms.pdf]

Definition of terms - Discussion
  • Michael Lutz: Need to also define "conformance/conformity" testing. Is this the same as "validation"?
  • Marc: Should this group also cover "interoperability testing"? This may blow up the scope.
  • Carlo: One cannot do one without the other.
  • Daniela: Suggest to discuss this further once all introductory presentations have been made.
  • Peter: it is recognised in the UK that the terms conformity, conformance and compliance are used differently in different countries. E.g. in the UK, compliance is reserved for compliance with law, whereas conformance is used with technical specifications. Thus it is important to specify the context in which the terms are used.
  • Carlo/Marc/Michael: More important to specify what a service/MD/data set is conformant/compliant WITH than which conformance/conformity/compliance term to use.
  • Robert: Propose to use "conformant" for requirements with clearly specified testing procedures and "compliant" for requirements that do not have a clear procedure defined.
  • Michael: Also define "IR requirement", "TG requirement" and "recommendation".
  • Michael: May also want to check "OGC modular spec policy" (OGC-...), which has a detailed conceptual model for conformance and requirements in Annex C
    • Peter: GML 3.2.1 and GML 3.3 seems to be using slightly different terminologies - OGC seems to be evolving too.
  • Ilkka: Will one test (case) only just test one requirement?
    • Michael Schulz: Use "test" together with "test case", "test module", ...
A short reminder on previous discussions in Istanbul, 2012

Marc continued the workshop with an overview of the previous discussions, e.g. at the workshop in Istanbul. He stated that interoperability testing is not in the scope of MIWP-5. Several suggestions from that workshop were proposed for the TGs (and not applied?).

[Slides: Summary_of_DQ_Workshop_Istanbul.pdf]

  • Ilkka: Definition of term "Interoperability" is very broad
  • Robert: Recommendations from the workshop have been taken into account in final version of ATS in the DS.
  • Darja: Can compliance really only be declared if all tests in Part A (IR test) of the ATS are passed?
    • Michael/Robert: Yes. But ATS allow to declare also conformity with specific conformance classes, if not all tests are passed.
    • Giacomo: Finely granular conformance is very useful.
    • Marc: +1. This also helps data/service providers to learn gradually what is still missing for full conformity.
    • Daniela: Could be a useful 1st step for all resources (MD, services, data) to split the tests/requirements into classes.
Overview of the status of data specifications, abstract test suites

Robert reported on the status of data specifications with specific focus on the developed ATS documents. The ATS is structured in two parts, each with several conformity classes (CC). The TG structure has also been revised in that respect. IR requirements are transformed into TG requirements, TG requirements without explicit IR equivalent are moved to TG recommendations. Part 1 (normative) of ATS includes CC derived from IR requirements, thus if a dataset passes all CC tests, the value "conformant" can be applied in the conformity element. Part 2 includes TG requirements.

[Slides: ATS_Tomas_MIG_VAlidationWS_2014_RT1.pdf]

  • Requirements for extended data sets/schemas:
    • Tim: What does it mean?
    • Marc: Do not agree. Extensions should be possible without being overly restricted by INSPIRE rules. Needs some more discussion. This group should focus on the core of INSPIRE.
    • Peter:
      • There will be increasingly INSPIRE-compliant data that will be pointing to non-compliant data. This would need an extension that would point to the non-compliant data.
      • We should press on with the data testing even if some things around the GML encoding will change. Since some of the requirements may change over time, the URIs for tests need to be version-specific.
  • Ilkka: Are there also URIs for complete IRs or TGs?
    • Michael: Should revisit URIs and patterns and make URIs resolvable.
  • Sebastian: How can portrayal and metadata be tested when deriving conformance of data sets? How can this be done automatically?
    • DS has requirements in different groups --> conformance classes. These include relationships to NS for making data available in compliance (e.g. for data delivery and portrayal)?
  • Antonio: Conformity of MD - will the MD guidelines be updated?
  • Michael: Distinction between IR and TG requirements in the TG to be more clear on what the targets of testing are. This should probably also be done for other INSPIRE TGs.
eENVplus

Giacomo presented the project eENVplus, in which a validation service for datasets is developed. This validation service is implemented using the OGC TeamEngine. Datasets are validated using the OGC GML 3.2 tests. Additional tests are executed using (theme-specific?) Schematron rules.

[Slides: Martirano_validation_service.pdf]

  • will use Schematron for tests that cannot be done with XML Schema validation
  • validation service will be ETS implementing ATS
  • implementation uses OGC TEAM engine
  • also guidelines for manual execution of tests
  • offer to use eENVplus funding to work on MIG requirements (within the boundary conditions of contractual obligations)
  • Ilkka: Have you thought about implementing an API rather than just a web application? This would be useful for many people.
    • Giacomo: Started discussion with OGC (Luis Bermudez). Should be discussed by this group. Also some idea in CEN/TC 287 to establish a link between all EU-funded projects working on SDIs with OGC.
  • Daniela: From your experience, will it be possible to fully automate the ATS?
    • Giacomo: Will not be easy. But as an intermediate step best practices can help performing manual tests.
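The split described above between XML Schema validation and additional Schematron rules can be sketched in code. The following is a minimal Python illustration of the second stage (rule-based checks that go beyond what a schema can express), not part of the eENVplus implementation; the sample record and the two rules are invented for the example, though the gmd/gco namespace URIs follow ISO 19139.

```python
# A minimal sketch of rule-based metadata checks: XML Schema covers structure,
# while rules (Schematron in eENVplus) cover cross-element logic.
# The sample record and the two rules below are invented for illustration;
# the gmd/gco namespaces are the ISO 19139 ones.
import xml.etree.ElementTree as ET

NS = {"gmd": "http://www.isotc211.org/2005/gmd",
      "gco": "http://www.isotc211.org/2005/gco"}

SAMPLE = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                             xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:fileIdentifier><gco:CharacterString>abc-123</gco:CharacterString></gmd:fileIdentifier>
  <gmd:dateStamp><gco:Date>2014-05-15</gco:Date></gmd:dateStamp>
</gmd:MD_Metadata>"""

def check_rules(xml_text):
    """Return (rule, passed) pairs for one metadata record."""
    root = ET.fromstring(xml_text)
    results = []
    ident = root.find("gmd:fileIdentifier/gco:CharacterString", NS)
    results.append(("fileIdentifier present and non-empty",
                    ident is not None and bool((ident.text or "").strip())))
    results.append(("dateStamp present",
                    root.find("gmd:dateStamp/gco:Date", NS) is not None))
    return results

for rule, passed in check_rules(SAMPLE):
    print(("PASS" if passed else "FAIL") + ": " + rule)
```

A real validator would report such results per conformance class rather than as a flat list.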
Data testing

Markus showed a use case for dataset specification testing on 12 different specifications (e.g. cadastre), with a focus on conformity with Germany's AAA model (the national base geodetic information model) and INSPIRE criteria. The use case was developed together with interactive instruments and uses SoapUI as the test definition and execution platform. An integration of this implementation with the GDI-DE Testsuite is also planned. Markus stressed his wish to keep dataset specification testing within the scope of this work package.

[Slides: 2014-05-14_INSPIRE-MIG_data_testing_Markus.pdf]

Relation to MIWP-16

Paul gave an overview of the relation between MIWP-5 and MIWP-16 (Monitoring & Reporting). Indicators that are specifically linked are MDi2, DSi2 and NSi4. The envisaged "dashboard" should be able to link to an agreed validator. An issue raised in the discussion was the probably differing granularity needs of a validator and the dashboard. Paul stressed in the discussion that, from the MIWP-16 side, a validator for metadata has the highest priority.

  • Robin: Are the links between MIWP-16 and 5 sufficiently well covered?
    • Paul: Yes. The most important thing is that validator is commonly agreed and its API can be called from the dashboard.
    • Ilkka: Need to agree common data model for validation results.
    • Paul: Do not foresee any specific needs for the dashboard.
    • Peter: There may be a requirement to store also historic validation results.
    • Paul: Historic results could be stored by the dashboard rather than by the validator.
  • Marc: There is no legal basis that forces us to develop a common validator. But we all need the strength of indicators as feedback on reality --> a tool for policy makers to understand the status of implementation.
    MIWP-5 and MIWP-16 are two faces of the same coin: how to reach a European dashboard without a common tool for testing?
  • Giacomo: There is a time lag between M&R and validation/metadata available in real-time in the national geoportals. This needs to be taken into account.
  • Michael: What are the temporal dependencies? What would be the priority functionality to be developed by MIWP-5 (by September 2014)?
    • Paul: A commonly agreed validator for metadata (since the metadata will be the basis for the dashboard). The rest can come later.

After the different presentations, which illustrated aspects of the needed scope of validation and testing, an intensive discussion followed to reach agreement on the scope and on conclusions. The agenda items on break-out groups and the requirements discussion were unanimously dropped in favor of this discussion.

[Action] Giacomo and Michael S. offered to draft a template for a requirements analysis based on use cases by combining their existing templates.

Agreement on scope
  • Further questions:
    • also include MD for interoperability?
      • Marc: should be both, but priority should be on MD for discovery.
  • Peter: IR requirements cannot be tested directly, so we can only test against TG.
    • Michael: Need to understand difference between IR and TG ...
    • Marc: If the common validator does not distinguish between IR and TG requirements, it is useless to France.
    • Michael: What would be an example of where the French validator does validate directly against the IR rather than the TG?
    • Etienne (?): example: useLimitation vs. ... - is not checked in the French validator (which is required by ISO 19115/19139)?
    • Michael: This indicates that we may need to revisit the TGs and divide the requirements in those that directly map to IRs and those that come in because of the technology chosen to implement the IRs (in this case ISO 19115/19139).
      • Marc: +1. We had harsh discussions about the IR in the Committee, and it could be difficult for MS to accept a common validation tool based on TGs when it is too far away from the IR (I mean view and direct download services). Think about acceptability by the policy sub-group.
    • Michael Oestling: Some of the ISO requirements may not make too much sense, but we should be following the requirements of international standards.
    • Marc: TGs should be opened to be updated based on implementation experience. We should not blindly follow standards (like ISO 19115/19139) if they don't work.
  • Michael S.: Can tests yield a legal statement/information for conformity? This might be an important requirement for MS.
    • Michael L.: A positive test result could be a good indicator for a legal conformity, whereas a negative result has no negative legal information at all. Overall no legal statements can be derived from automated tests.
      • Paul: Can the Commission informally state that a positive test result would prevent infringement for a specific part of the directive?
        • Michael L.: Could / should maybe be checked. Any decision leading to a kind of infringement process is always a case-by-case examination. But proof of positive test results would probably be taken into account.
Conclusion:
  • Classify requirements in the TGs into:
    • IR requirements - requirements that directly reflect a requirement in the IRs (as implemented using the proposed technical solution, e.g. ISO 19115/19119/19139 or WFS 2.0 or Atom)
    • TG requirements - requirements specifying how the recommended technology or standard (e.g. WFS 2.0) has to be used so that the corresponding IR requirement is fulfilled.
    • Recommendations - are any additional recommendations to the IR requirements (e.g. to comply with additional requirements needed for compliance with standards, e.g. ISO 19139)
  • Also identify the requirements that cannot be tested automatically - it should then be discussed whether such requirements should be kept?
  • This classification of requirements/recommendations should be used as input to MIG tasks on updating TGs (e.g. MIWP-8 for MD)
    [Action] Etienne and Angelo to document the differences between French validator and MD TG requirements.
  • Marcin: How to deal with the tests that cannot be automated?
    • Michael: Manual tests will always need to be self-declared. This should be considered in the development of the common validator.
    • Marc: Drop (at least for the first step) any manual tests - costs to do these for hundreds of thousands of data sets is too big.
    • Giacomo: Don't just go the easy route directly. Start with clearly defining guidelines for manual tests.

Conclusion: focus on automatic testing
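The classification agreed above lends itself to a simple machine-readable registry from which an automatic validator could select its test scope. The sketch below is purely illustrative: the requirement identifiers, class labels and field names are invented for the example.

```python
# Illustrative registry of classified TG requirements. Each entry records the
# agreed class (IR requirement / TG requirement / recommendation) and whether
# it can be tested automatically; all identifiers and fields are invented.
REQUIREMENTS = [
    {"id": "TG-MD-01", "class": "IR",             "automatable": True},
    {"id": "TG-MD-02", "class": "TG",             "automatable": True},
    {"id": "TG-MD-03", "class": "recommendation", "automatable": False},
]

def auto_testable(reqs, cls=None):
    """Requirement ids a fully automatic validator can cover, optionally per class."""
    return [r["id"] for r in reqs
            if r["automatable"] and (cls is None or r["class"] == cls)]

print(auto_testable(REQUIREMENTS))        # all automatable requirements
print(auto_testable(REQUIREMENTS, "IR"))  # only those mapping directly to the IRs
```

Requirements that fall outside `auto_testable` would correspond to the manual, self-declared tests discussed above.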

Endorsed reference implementation of validator(s)
  • Need for reference data/services/metadata?
  • How to keep reference implementation up-to-date?
    • Review code?
    • Report unexpected validation results and discuss whether test is implemented correctly
  • Testing products or services?
    • Instances: the result is only valid for the time of testing. INSPIRE actually requires testing instances.
    • Products: could be tested for their ability (if correctly configured) to be INSPIRE compliant.

[Slides: 20140515_MIWP-5_scope.pdf]

Agreement on project plan

[Action] Daniela suggested to update the project plan according to the outcomes of this workshop and then make it available to the members of the workshop.

Potential support from ARE3NA

Postponed. If there are questions about ARE3NA, please send an e-mail to Robin.

Friday, 16 May 2014

Participant list: Thijs Brentjens, Carlo Cipolloni, Tim Duffy, Sebastian Goerke, Marcin Grudzien, Paul Hasenohr, Daniela Hogrebe, Tomas Kliment, Darja Lihteneger, Michael Lutz, Giacomo Martirano, Vanda Nunes de Lima, Michael Östling, Peter Parslow, Angelo Quaglia, Ilkka Rinne, Antonio Rotundo, Alejandra Sánchez Maganto, Michael Schulz, Robin Smith, Guillaume Sueur, Etienne Taffoureau, Robert Tomas

Welcome, approval of the agenda and wrap-up first day

At the beginning of the day, Daniela raised the question of who could potentially lead the working group, as she will have no resources to do so. Carlo (ISPRA) stepped forward and offered to lead the work package.

Endorsing the reference implementation
  • Tim: Should be the responsibility of the MIG
    • Ilkka: Would need some light-weight open source project management structure and processes. There could be a (small) steering committee. Need to set up a procedure for maintaining the implementation.
    • Angelo: It is important to specify who decides in case of disagreements on how to interpret a requirement / change a test.
    • Michael: Need to have broad agreement because disagreements on tests ultimately means disagreement on interpretation of requirements in the TG.
Timing/sub-groups
  • start with MD+NS or start with MD+NS+DS in parallel?
    • Michael: both groups could learn from each other.
    • Vanda: make use of the fact that there are volunteers for all three components.
    • Michael/Sebastian: 3 parallel groups will need very good communication and overall project management, because of dependencies between the tests
    • Ilkka: Are we talking about 3 software projects?
    • Robin: There is a lot of preparatory work before we can start software development
    • Darja: Initial tasks are well defined and can be performed in parallel.
    • Paul: Should have the use cases and requirements (functional and non-functional) clearly defined and agreed. This includes the governance structure and processes. Only then can we go into parallel sub-groups.
    • Peter: The only thing that can be parallelised is the development of ATS where they are missing.

Presentation of existing solutions for validation and testing (metadata and services)

OGC TEAM Engine

Sebastian presented the work of the OGC CITE team, giving an overview of OGC activities and structures with respect to conformance testing and certification. TeamEngine, the software used and developed by the OGC, uses CTL and TestNG for the definition of tests. TeamEngine is now available as open-source software on GitHub. A well-defined workflow for how tests are developed, reviewed and published exists at OGC.

[Slides: OGC_CITE_for_INSPIRE.pdf]

  • plugin for running SoapUI tests inside TEAM engine
    • still only feasibility test (closed code) - not yet included in open GitHub code
  • Slide on profiles: OGC could provide testing and certificates for "INSPIRE profiles"
    • [Action] Sebastian to clarify with Luis how he would envisage this working in detail (with some specific examples).
  • Can the CITE engine be changed so that it doesn't require pre-defined test data?
    • Sebastian: Work in progress, e.g. for WMS 1.1.0
  • Giacomo: CEN/TC 287 is working on gathering experiences (together with OGC) on issues with standards ....
  • Michael: Who can develop test suites? What is the process? Who decides about updates/changes?
    • Sebastian: When a test suite is first developed, the OGC TC has to vote on it.
  • An issue with CS-W validation (that prevents CS-W with extended INSPIRE capabilities from being validated) has been raised in Sept 2013 and still has not been addressed.
    • [Action] Sebastian and Michael to discuss with OGC (Athina, Luis, ...) how such issues can be given a higher priority by OGC.
INSPIRE Geoportal validator

Angelo gave a short overview of the current INSPIRE validator. He stressed the importance of collaboration and of testing resources against different validators as a way to identify differing interpretations and bugs.

  • Peter/Ilkka/Michael O.: Issues with validator / suggestions for improvements or new features should be raised in an open issue tracker
  • Michael Oestling: Good service. Would be good if error messages could be logged in an open issue tracker.
  • Giacomo: Will the MD elements for interoperability be included in JRC validator and by when?
    • Angelo: Yes, this can be done over the next few weeks. But they are only legally required at the moment for newly created or extensively restructured data sets.
    • Paul: A failure on these tests should not produce a "warning", but just an "information"
    • Michael/Peter: Validators for data (and maybe metadata) may also ask the data provider whether their data set has been created or extensively restructured after the entry into force of the INSPIRE Directive (15/5/2007). Depending on the answer, different tests should be performed and/or failure on specific tests should result in an error or just an information.
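The conditional behaviour Michael and Peter describe can be sketched as a small severity-mapping function. This is an illustration of the proposal, not existing validator code; the function and report-level names are invented.

```python
# Sketch of the proposed conditional severity: the same failed test is an
# "error" for data sets newly created or extensively restructured after the
# INSPIRE Directive entered into force (15/05/2007), but only "information"
# for legacy data. Function and level names are invented for illustration.
from datetime import date

INSPIRE_IN_FORCE = date(2007, 5, 15)

def severity(test_failed, restructured_on):
    """Map a failed interoperability-metadata test to a report level."""
    if not test_failed:
        return "pass"
    if restructured_on is not None and restructured_on >= INSPIRE_IN_FORCE:
        return "error"        # requirement applies in full to this data set
    return "information"      # legacy data: report the gap without an error

print(severity(True, date(2013, 1, 1)))  # restructured after entry into force
print(severity(True, None))              # legacy data set, date unknown/before
```

In practice the data provider's answer to the "created or extensively restructured?" question would supply the date.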
Validator of Planetek/lat-lon

Sebastian briefly presented the newly developed INSPIRE Geoportal validator, which is finished but not yet published. The validator is integrated into the harvesting workflow of the INSPIRE Geoportal. No complete QoS tests are available.

  • What are the differences?
    • Sebastian: all rules have been accepted by JRC.
    • Differences between both validators have not been formally analysed --> potential differences.
Testsuite

Michael presented the validation solution of the German NSDI: the GDI-DE Testsuite. It is based on an older OGC TeamEngine version; only CTL-based tests are currently supported. An update to the current TeamEngine version is planned for summer 2014. Complete QoS tests are available.

[Slides: 2014-05-16-MIWP5-Workshop-Testsuite_intro.pdf]

  • An ATS has already been developed for the Atom part of the Download Service TG (encoded in CTL) -- it could be a starting point for the classification of NS TG requirements and the development of ATS
French validation for metadata: a State experience for monitoring

Etienne presented the French validation service that is part of the M&R procedure in France. It is based on the old INSPIRE Schematron validator (?) and tests the capabilities of services.

  • Michael S.: Can Schematron rules resolve URIs and request resources from there? This would be interesting for Poland and the UK.
    • Etienne: Yes. --> Pls. share examples.
  • Michael L.: Would be interesting to compare different implementations of IR and TG requirements, e.g. in a table.
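The pattern Michael S. asks about (a validation rule that resolves a URI found in a record and checks the fetched resource) can be sketched as follows. This is a Python illustration rather than actual Schematron; the record layout and all names are invented, and the fetcher is passed in so the network call stays explicit and testable.

```python
# Sketch of a rule that resolves a URI from a metadata record and checks the
# fetched resource (here: a WMS capabilities document). The record layout and
# all names are invented; a real validator would supply an HTTP fetcher such
# as urllib.request.urlopen, while tests supply a stub.
import xml.etree.ElementTree as ET

RECORD = """<record>
  <linkage>http://example.org/wms?SERVICE=WMS&amp;REQUEST=GetCapabilities</linkage>
</record>"""

def check_linked_capabilities(record_xml, fetch):
    """Resolve the record's linkage URL and inspect the fetched capabilities."""
    url_elem = ET.fromstring(record_xml).find("linkage")
    if url_elem is None or not (url_elem.text or "").strip():
        return "no linkage to check"
    capabilities = fetch(url_elem.text)
    return "ok" if "<WMS_Capabilities" in capabilities else "capabilities invalid"

# Stub standing in for an HTTP GET of the capabilities document:
stub = lambda url: "<WMS_Capabilities version='1.3.0'/>"
print(check_linked_capabilities(RECORD, stub))
```

Whether native Schematron engines can perform such remote lookups (e.g. via the XPath document() function) is exactly the implementation question raised in the discussion.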
Polish metadata validation tool

Marcin showed the setup of the Polish metadata validation service. It offers a web-based front-end and a WPS endpoint. Resources can be tested against different rulesets / profiles (including national Polish requirements).

[Slides: Prezentacja_Ispra_20140516.pdf]

  • rules are closed source (C#), but Marcin will investigate whether they can be shared.
  • the WPS is actually a good example of an INSPIRE invoke service
Spanish-SDI validation service and metadata

Alejandra presented the current validator of the Spanish NSDI. The validator is based on the old INSPIRE Schematron validator. There is a strong focus on usability and understandable error reporting.

[Slides: 20140516_SpanishNSDI_validator.pdf]

  • Simple error messages in Spanish. Important to also include suggestions on how to solve the problems reported.
ELF Test Framework

Thijs presented the ELF (formerly ESDIN) test framework.

[Slides: 20140515-workshop-validation-conformity-NL-validation-NetworkServices.pdf]

  • Why use SoapUI and not TEAM engine?
    • Check with Clemens Portele why the decision was made for SoapUI rather than for TEAM engine.
    • Sebastian: Plug-in for executing SoapUI tests (not part of open code (yet?))
  • Could there be a platform-independent formal language to define tests?
  • Need to be careful about error reporting - traffic lights? How to distinguish mandatory and optional parts?
  • Why not implement QoS requirements?
    • Thijs: Most download services are Atom download services? Usually you will easily meet performance and capacity requirements?
Spatineo validator

Ilkka presented the company Spatineo and their relevant product, Spatineo Monitor. The focus of Monitor is access, availability and performance analysis. Capabilities validation of view services is also done. There is a direct link between a validation issue and the capabilities source text. Spatineo would like to contribute to an open-source development of a commonly agreed validator.

[Slides: Spatineo_validation_rinne_static.pdf]

Neogeo-online

Skipped because Guillaume already had to leave.

Wrap-up workshop and next steps

During the wrap-up session at the end, several outcomes of the workshop were jointly agreed and formulated, and actions with responsible persons were derived:

  • Set-up sub-group (before summer)
    • Draft ToR for sub-group on validation and conformity testing (outcome, actions, timeline, resources, …)
    • Identify the lead (done: Carlo)
    • Call for participants to the MIG and PoE
  • Use case definitions for the common tool based on a template (Giacomo, Michael Sch.)
    • derive requirements
  • [Action] Review TG for metadata and classify the requirements based on a template of data specs.
    • (Paul, Etienne, Antonio, Michael O., Alejandra)
    • Compare different implementations of the requirements based on this review
  • [Action] Review TG for services and classify the requirements based on a template of data specs.
    • (Angelo?, Michael L., Michael Sch., Thijs?, Ilkka?, Etienne?); Alejandra will provide document as an input
    • Compare different implementations of the requirements based on this review
  • [Action] From ATS to ETS: explore ATS on how the tests could be implemented
    • pilots: protected sites, natural risk zones, land use (Angelo, Giacomo, Robert, Carlo, Markus, Darja)
  • [Action] Prepare an overview table with all existing validator solutions according to specific characteristics.
  • Regular teleconferences: every two weeks
  • [Action] Michael to investigate whether we can have a slot for a short meeting at the INSPIRE conference.

20140515_MIWP-5_overview.pdf (199 KB) Daniela Hogrebe, 27 May 2014 04:18 pm

20140515_MIWP-5_definition-of-terms.pdf (333 KB) Daniela Hogrebe, 27 May 2014 04:20 pm

20140515_MIWP-5_scope.pdf (366 KB) Daniela Hogrebe, 27 May 2014 04:20 pm

ATS_Tomas_MIG_VAlidationWS_2014_RT1.pdf (1.06 MB) Daniela Hogrebe, 27 May 2014 04:21 pm

Martirano_validation_service.pdf (418 KB) Daniela Hogrebe, 27 May 2014 04:21 pm

OGC_CITE_for_INSPIRE.pdf (3.12 MB) Daniela Hogrebe, 27 May 2014 04:31 pm

20140516_SpanishNSDI_validator.pdf (941 KB) Daniela Hogrebe, 02 Jun 2014 04:55 pm

Prezentacja_Ispra_20140516.pdf (805 KB) Daniela Hogrebe, 02 Jun 2014 04:55 pm

Summary_of_DQ_Workshop_Istanbul.pdf (148 KB) Daniela Hogrebe, 02 Jun 2014 04:55 pm

Spatineo_validation_rinne_static.pdf (2.87 MB) Daniela Hogrebe, 05 Jun 2014 11:47 am

2014-05-16-MIWP5-Workshop-Testsuite_intro.pdf (807 KB) Daniela Hogrebe, 06 Jun 2014 10:36 am

20140515-workshop-validation-conformity-NL-validation-NetworkServices.pdf (860 KB) Daniela Hogrebe, 16 Jun 2014 09:58 am

2014-05-14_INSPIRE-MIG_data_testing_Markus.pdf (238 KB) Markus Seifert, 16 Jun 2014 02:58 pm