
@NTerpo
Last active April 9, 2016 14:33
{
  "language": "en",
  "name": "Paris Data",
  "description": "City of Paris Open Data portal",
  "url": "http://opendata.paris.fr/",
  "linked_portals": ["http://data.gouv.fr", "http://data.iledefrance.fr"],
  "data_language": ["fr"],
  "modified": "2016-03-04T13:44:44+00:00",
  "themes": [
    "Culture, Heritage",
    "Education, Training, Research, Teaching",
    "Environment",
    "Transport, Movements",
    "Spatial Planning, Town Planning, Buildings, Equipment, Housing",
    "Health",
    "Economy, Business, SME, Economic development, Employment",
    "Services, Social",
    "Administration, Government, Public finances, Citizenship",
    "Justice, Safety, Police, Crime",
    "Sports, Leisure",
    "Accommodation, Hospitality Industry"
  ],
  "links": [
    {"url": "http://opendata.paris.fr/explore/download/", "rel": "Catalog CSV"},
    {"url": "http://opendata.paris.fr/api/", "rel": "API v1"},
    {"url": "http://opendata.paris.fr/api/datasets/1.0/search?format=rdf", "rel": "Catalog RDF"}
  ],
  "version": "1.0",
  "number_of_datasets": 176,
  "organization_in_charge_of_the_portal": {
    "name": "City of Paris",
    "url": "http://www.paris.fr/"
  },
  "spatial": {
    "country": "FR",
    "coordinates": [48.8567, 2.3508],
    "locality": "Paris",
    "data_spatial_coverage": "a GeoJSON with the data coverage"
  },
  "type": "Local Government Open Data Portal",
  "datapackages": [
    "http://opendata.paris.fr/explore/dataset/liste_des_sites_des_hotspots_paris_wifi/datapackage.json",
    "http://opendata.paris.fr/explore/dataset/points-de-vote-du-budget-participatif/datapackage.json",
    "http://opendata.paris.fr/explore/dataset/cinemas-a-paris/datapackage.json"
  ]
}
@jpmckinney

I created a fork which you can diff. Some fields are missing; we would need to find a way to add them back in. Main changes:

  • name -> title
  • url -> homepage (also inside organization_in_charge_of_the_portal)
  • version -> conformsTo (and change the value to the eventual URL of the schema)
  • organization_in_charge_of_the_portal -> publisher
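
Applied to the Paris example above, the renames might look like this (a sketch only; the conformsTo URL is a placeholder, since the schema URL does not exist yet):

```json
{
  "title": "Paris Data",
  "homepage": "http://opendata.paris.fr/",
  "conformsTo": "http://example.org/dataportal-schema/v1.0",
  "publisher": {
    "name": "City of Paris",
    "homepage": "http://www.paris.fr/"
  }
}
```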

Other feedback:

  • language should use BCP 47, which is a superset of ISO 639.
  • publisher is not limited to a name. See documentation. It can be a foaf:Agent, which can have lots of properties.
  • data_language: I tend to agree with Phil. If people want to filter to only data in a language they understand, they need to filter at the dataset level, not the catalog level. Most data is nonlinguistic (geospatial, CSVs of numbers, etc.). Filtering at the catalog level would eliminate a lot of potentially relevant datasets. I recommend removing this field.
  • spatial is not specified in much detail by DCAT. The EU's DCAT-AP recommends using its Core Location Vocabulary. data.json specifies possible values for a dataset's spatial field. Canada's spatial field for datasets allows GeoJSON. I'd like to see an example of a GML Simple Features Profile (one of the options in data.json) before deciding between GeoJSON and GML.
  • type: Until a controlled vocabulary is defined, I don't recommend adding the field.
  • links: I recommend either following Phil's suggestion, or moving this out of this file, and instead recommending the use of discovery standards like Web Host Metadata, which has a JSON representation.
  • datapackages: I didn't really understand what this field was doing the first time, but I think it should be eliminated in favor of just using dataset from DCAT, in the way that Phil describes. That said, if the purpose of dataportal.json is to just describe the catalog, not the datasets, then datapackages should be eliminated entirely.
  • themes: While DCAT does have themeTaxonomy at the catalog level, its representation won't be as simple as an array of strings. It would be like:
"@context": {
  "containsConcept": { "@reverse": "http://www.w3.org/2004/02/skos/core#inScheme" }
},
"themeTaxonomy": {
  "containsConcept": [
    {"prefLabel": "Environment"},
    {"prefLabel": "Health"}
  ]
}

@NTerpo
Author

NTerpo commented Mar 28, 2016

Cool, thanks for the fork :)

  • language: fine for BCP 47
  • publisher: yeah, I know there is the FOAF possibility. But going through DCAT is already a huge step for most of the portal owners we meet; I'm pretty sure most would stop before getting to FOAF. I agree that's annoying, but as open data grows, there will be more and more people in charge of open data who know nothing about norms or the links between ontologies. That means we have to do the job for them. We can extract the most useful FOAF fields and make them mandatory.
  • data_language: I agree about filtering, but that is not the use case I imagine. I may want to create a world map of Open Data portals and add a language facet, or compare global trends in Open Data between Spanish-speaking and English-speaking countries. For most portals it's fairly easy to say "there are data in both French and English". Filtering and searching for datasets should be handled by the portal and have nothing to do with dataportal.json.
  • type: defining a controlled vocabulary should not be too complicated (or maybe I'm still a bit naive here haha). What about:
    • Multi-national Open Data Portal
    • Country Open Data Portal
    • Agency Open Data Portal
    • Local Government Open Data Portal
    • Business Open Data Portal
    • Non-Profit Open Data Portal
    • Individual Open Data Portal
  • I don't really understand what the problem with the `links` field is. dataportal.json or data.json should describe the portal and be useful. The `links` field should be like a "what's next". You obviously have to read some documentation to use an API, but if I want to compare Open Data portals around the world, I want to know whether there is an API available (and obviously link to it), and whether there is a Turtle file describing the catalog. The real goal of dataportal.json is to allow this kind of comparison and the development of new services that help people find data.
  • datapackages: yeah, it may be removed => it's redundant with data.json. Linking to the dataset metadata is important. We liked `datapackages` because it's really light, but DCAT metadata is totally fine. Linking to the dataset metadata (or to `data.json`) can be done in the `links` field.
  • I don't really understand why the themeTaxonomy representation is better. It may be more grammatically correct, but it feels more "obscure" for a beginner.
  • linked-portals indeed looks like a "blog roll", but blog rolls became obsolete once there was enough content, users, and data to replace them. Until it's economically viable to develop a truly user-friendly Google for data, "blog rolls" look like a cheap and useful way to do it. For now we still rely on SEO when looking for data, but SEO optimizes for content, not really for data, and I'm not sure we want Google to be the gateway to open data. The linked-portals field is clearly a short-term solution, but it is a solution. Also, I do agree that federated data is a really nice solution: let's have a way to give that information directly in the portal metadata (not only at the dataset level) :)
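
The type vocabulary proposed above could be pinned down as a JSON Schema enum, along these lines (a sketch; the surrounding schema structure is an assumption, not an agreed schema):

```json
{
  "properties": {
    "type": {
      "type": "string",
      "enum": [
        "Multi-national Open Data Portal",
        "Country Open Data Portal",
        "Agency Open Data Portal",
        "Local Government Open Data Portal",
        "Business Open Data Portal",
        "Non-Profit Open Data Portal",
        "Individual Open Data Portal"
      ]
    }
  }
}
```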

Thanks once again for the comments.

@jpmckinney

  • publisher: Implementers of data.json (and of dataportal.json) don't need to know the RDF ontologies on which they are based. They just need to know the JSON Schema, the definition of the JSON fields, which fields are required, etc. So, using FOAF behind the scenes doesn't add any burden to adopters, because they don't need to know that FOAF is being used.
  • data_language: For the use cases you describe, which of those can't be handled by the other language field?
  • type: The types here seem to all be describing the publisher. If that's all we want, then we should add a classification field to the publisher (from W3C Organization Ontology). I think it would be fair to leave the code list open for consultation longer than the schema, as there is often more disagreement around what values to include in a code list.
  • datapackages: Yes, I agree with removal. It's strange to set up a two-tier system where some datasets would appear under data.json's datasets, and others would be promoted to a special datapackages field - especially considering the low adoption of Data Packages.
  • themeTaxonomy: It's not better, I was just giving an example of how it would need to be done for this file to be expressible as JSON-LD, and which is how data.json is likely to do it, since they are already compliant with JSON-LD.
  • linked_portals: Standards are designed as long-term solutions, not short-term solutions. I think there are alternatives you can pursue to fulfill your use case here. I don't think it's appropriate to put this into a proposed standard.
  • links: I think there is a fairly significant risk that this field will become useless across catalogs, and will just become a field with a wide range of links that publishers consider relevant to share. A standard is only useful if the data from different publishers is comparable. A highly-varied list of links would not be useful...
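
The classification suggestion for the publisher might look like this in JSON (a sketch; the field name and value are illustrative, pending a code list):

```json
{
  "publisher": {
    "name": "City of Paris",
    "homepage": "http://www.paris.fr/",
    "classification": "local-government"
  }
}
```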

@NTerpo
Author

NTerpo commented Apr 4, 2016

  • publisher: I totally agree with you; we should use FOAF behind the scenes.
  • data_language: the other language field only describes the dataportal.json document itself, whereas data_language aims to describe the data themselves.
  • type: yes, I also believe there may be a lot of discussion about what to include in the list. But that's the whole point of our approach: let usage decide and give time to experimentation.
  • themeTaxonomy: ok :)
  • linked-portals, links: I do understand your point, but I don't exactly want to design a long-term standard. I would prefer a set of common practices: something that's really easy for people/organizations to implement right now, that gives concrete returns right away, and that will be easy (because of the easy implementation) to abandon the day Linked Data is mainstream, or at least the day every issue has a better solution. Both fields may become irrelevant, but at least we will have an idea of what publishers want to link to in real life. If the whole document is pushed as a set of common practices and not as a standard, then when publishers implement it (if they want to, which is not certain haha), they will make a trade-off between how they think they can best spread their data and how other data portals are handling the document. For now we have consistent and compliant standards, but not many portals describing their catalogs. We have to understand why, and how we can design something they will really use.

@jpmckinney

The best way to understand why a standard isn't being adopted is to ask the potential adopters (with an unbiased questionnaire, methodology, etc.). That said, I don't think the problem is that DCAT is too hard. I think it's that:

  1. Publishers don't know what standards to adopt. When talking to publishers, this is really the most common reason in my experience.
  2. Publishers don't know how to interpret the standard's documentation. The solution to this is to provide good documentation written for implementers for existing standards. The W3C documentation for DCAT is written for RDF experts; a user-friendly, implementation-focused, jargon-free version of those docs would go a long way towards easing adoption.
  3. Publishers are using third-party software that doesn't provide machine-readable catalog metadata out of the box. The solution to this isn't to introduce some new practice – which will similarly not be adopted by those suppliers. The solution is to convince the major suppliers to implement a common standard (like DCAT).

In short we need:

  1. Awareness building
  2. Better documentation
  3. Vendor adoption

Creating some new format is not a solution for any of those. I really don't believe the problem is, "DCAT is hard." Let's at least validate what the real problems are before investing time and effort into a solution. Does that make sense?

@jpmckinney

Anyway, for better alignment with DCAT please change:

  • name -> title
  • url -> homepage
  • version -> conformsTo and change the value to the URL for the documentation of this format (which should be a versioned web page)
  • organization_in_charge_of_the_portal -> publisher
    • url -> homepage
  • spatial -> make the value an actual GeoJSON feature, so:
{
  "type": "Feature",
  "geometry": {
    "type": ...,
    "coordinates": ...
  },
  "properties": {
    "name": "Paris",
    "country": "FR"
  }
}

@ColinMaudry

I've created a draft JSON-LD fork in order to actually enable round tripping with RDF: https://gist.github.com/ColinMaudry/5163ecade149a837aa25694fdd7ac46f. It's still incomplete, but it gives an idea.

And here is how it behaves when processing the JSON to RDF with the context: http://tinyurl.com/hdza9yp

Suggestions if we want to go further in that direction:

  • The type value should either be a keyword that we can resolve to a URI (e.g. local-government) or a URI.
  • As-is, themes values cannot be used in a UI in a language other than English. A pity for French data :) Setting up a list of theme URIs would enable multilingual support. As with type, in the JSON the themes values could be either lowercase English keywords or URIs.
  • I'm not very comfortable with property names in plural form. I assume it's a hint that the value is an array.
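
A sketch of the keyword-to-URI idea for themes, following the JSON-LD direction of the fork (the context mapping and all example.org URIs are assumptions):

```json
{
  "@context": {
    "themes": {
      "@id": "http://www.w3.org/ns/dcat#theme",
      "@type": "@id",
      "@container": "@set"
    }
  },
  "themes": [
    "http://example.org/themes/environment",
    "http://example.org/themes/health"
  ]
}
```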
