- Title: OntoBench: Generating Custom OWL 2 Benchmark Ontologies
- Authors: Vincent Link, Steffen Lohmann and Florian Haag
- Call: ISWC 2016 Resources Track
- Preprint: http://www.visualdataweb.org/publications/2016_ISWC_OntoBench_preprint.pdf
- Published version: https://iswc.lodac.nii.ac.jp/files/resource_88.pdf https://doi.org/10.1007/978-3-319-46547-0_13
- Resources: http://ontobench.visualdataweb.org/ https://github.com/VisualDataWeb/OntoBench
- Review by: Stian Soiland-Reyes (3/3)
- Outcome: Accepted
This review is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Overall evaluation: 2: Accept
Reviewer's confidence: 4: (high)
Resource type: Software
Were resource specific criteria used in the review?: Yes
Were you able to access the resource? (The program committee considers the resource as well as the paper in its review process. Authors must therefore ensure unfettered access to the resource during the review process, ideally by citing the resource at a permanent, resource-specific location.)
4: Yes I could fully access the resource
This paper describes a tool for generating custom benchmark ontologies for testing purposes. The tool supports selection of virtually every OWL 1 and OWL 2 feature, and is available both as a Java tool and as a web service.
- Very useful tool for testing of ontology tools
- Further evaluation of how the generated ontologies appear in, or break, different OWL tools would be good.
Edit: The authors have agreed to provide further evaluation for the final version of the paper.
- Does the resource break new ground? Does it plug an important gap?
- Does it provide additional value with respect to similar earlier efforts?
- Is it of interest to the Semantic Web community? To society in general?
- Will it have an impact, especially in supporting the adoption of Semantic Web technologies?
4: Excellent
This tool should be of general interest to creators of Semantic Web tools, for testing and documentation purposes.
- Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively, what is the resource's potential for (re)use, for example based on activity on discussion forums, mailing lists, issue trackers, or support portals?
- Is the resource easy to (re)use? For example, does it have good-quality documentation? Are tutorials available?
- Is the resource general enough to be applied in a wider set of scenarios, beyond its originally designed use?
- Is there potential for extensibility to meet future requirements (e.g., upper-level ontologies, plugins in Protégé)?
4: Excellent
- Approachable user interface
- Each generated ontology gets its own URI, which can be loaded directly into tools such as Protégé
- Source code builds out of the box with Maven
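The "builds out of the box" observation can be sketched as a short shell session. This is a sketch under the assumption of a standard Maven project layout; the exact build goals and repository state may differ from what was available at review time, so check the repository README first.

```shell
# Fetch the OntoBench sources (repository URL as cited in the paper's resources).
git clone https://github.com/VisualDataWeb/OntoBench.git
cd OntoBench

# Build with Maven; no extra configuration was needed during review.
# The precise goal (package vs. install) is an assumption -- see the README.
mvn clean package
```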
- Does the design of the resource follow resource-specific best practices?
- Did the authors appropriately reuse or extend suitable high-quality resources? For example, ontology authors might extend upper ontologies and/or reuse ontology design patterns.
- Is the resource suitable to solve the task at hand?
- Does the resource provide an appropriate description (both human- and machine-readable), encouraging the adoption of FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/Dublin Core?
4: Exemplary
Builds on standard tools (OWL API, Spring) and offers comprehensive coverage of OWL 1 and 2.
- Mandatory: Is the resource published at a persistent URI (PURL, DOI, w3id)?
- Mandatory: Is there a citation associated with the resource?
- Mandatory: Does the resource provide a licence specification? (See creativecommons.org and opensource.org for more information.)
- Is the resource publicly available, for example as an API, Linked Open Data, a download, or an open code repository?
- Is the resource publicly findable? Is it registered in (community) registries (e.g., Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo, or GitHub?
- Is there a sustainability plan specified for the resource? Is there a plan for its maintenance?
3: Good
No Zenodo archive of the code, and no DOI-based citation of the software.
The software is only linked from the middle of the text; footnotes and DOI citations would have been appreciated.
Clear license; available on GitHub.
4: (Lucid: Very well written in every aspect, a pleasure to read, easy to follow)
Great presentation, well justified and good explanations.
While the paper shows the tool used with WebVOWL, further evaluation of how the generated ontologies appear in, or break, different OWL tools would be good.
Edit: The authors have promised that the final version will include additional evaluation, testing ontology editors (e.g. Protégé, OWLGrEd) and reasoners (e.g. Pellet).
Hi, I am Stian Soiland-Reyes http://orcid.org/0000-0001-9842-9718 and I believe in open reviews.
This review is also available at the secret URL https://gist.github.com/stain/702acc9a924d0ed554d039063f0a91a7
I would appreciate it if you could contact soiland-reyes@cs.manchester.ac.uk if you agree to me publishing this review.
Edit: The authors have agreed to making the review public, and have volunteered to provide a preprint of the camera-ready version.
Authors' response retracted
Thank you for your good response. For the final version, could you:
- Make the current version of the code itself archived and citable, e.g. via Zenodo/GitHub (https://guides.github.com/activities/citable-code/) or the equivalent with Figshare.
- Add citations for the resource web site http://ontobench.visualdataweb.org/ and the code archive (see above).