Delivered-To: stian@s11.no
Received: by 10.28.194.7 with SMTP id s7csp319380wmf;
Thu, 26 Mar 2015 04:18:41 -0700 (PDT)
X-Received: by 10.70.131.107 with SMTP id ol11mr25832359pdb.63.1427368720951;
Thu, 26 Mar 2015 04:18:40 -0700 (PDT)
Return-Path: <>
Received: from mail.apache.org (hermes.apache.org. [140.211.11.3])
by mx.google.com with SMTP id cu5si7897968pbc.126.2015.03.26.04.18.40
for <stian@s11.no>;
Thu, 26 Mar 2015 04:18:40 -0700 (PDT)
Received-SPF: none (google.com: mail.apache.org does not designate permitted sender hosts) client-ip=140.211.11.3;
Authentication-Results: mx.google.com;
spf=none (google.com: mail.apache.org does not designate permitted sender hosts) smtp.mail=
Message-Id: <5513eb10.05b2440a.2972.18e4SMTPIN_ADDED_MISSING@mx.google.com>
Received: (qmail 56196 invoked by uid 500); 26 Mar 2015 11:18:39 -0000
Delivered-To: apmail-stain@apache.org
Received: (qmail 56191 invoked for bounce); 26 Mar 2015 11:18:39 -0000
Date: 26 Mar 2015 11:18:39 -0000
From: MAILER-DAEMON@apache.org
To: stain@apache.org
Subject: failure notice
Hi. This is the qmail-send program at apache.org.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.
<dev@commonsrdf.incubator.apache.org>:
192.87.106.230 failed after I sent the message.
Remote host said: 552 spam score (5.6) exceeded threshold (RCVD_IN_BRBL_LASTEXT,RCVD_IN_XBL
--- Below this line is a copy of the message.
Return-Path: <stain@apache.org>
Received: (qmail 52554 invoked by uid 99); 26 Mar 2015 11:18:13 -0000
Received: from mail-relay.apache.org (HELO mail-relay.apache.org) (140.211.11.15)
by apache.org (qpsmtpd/0.29) with ESMTP; Thu, 26 Mar 2015 11:18:13 +0000
Received: from mail-wi0-f179.google.com (mail-wi0-f179.google.com [209.85.212.179])
by mail-relay.apache.org (ASF Mail Server at mail-relay.apache.org) with ESMTPSA id BD3571A0370
for <dev@commonsrdf.incubator.apache.org>; Thu, 26 Mar 2015 11:18:12 +0000 (UTC)
Received: by wixm2 with SMTP id m2so9098048wix.0
for <dev@commonsrdf.incubator.apache.org>; Thu, 26 Mar 2015 04:18:11 -0700 (PDT)
X-Gm-Message-State: ALoCoQnHtkq2RG6wKzINf5eH0on0NYnTav+N0xKVbUKoSG7OSNhHB+yJIpi+qwlCf55ByMYaW8JV
X-Received: by 10.181.13.50 with SMTP id ev18mr10144389wid.70.1427368691637;
Thu, 26 Mar 2015 04:18:11 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.28.29.8 with HTTP; Thu, 26 Mar 2015 04:17:51 -0700 (PDT)
X-Originating-IP: [92.40.248.4]
In-Reply-To: <CAMBJEmVLfB2_VV20d4WMZS3AGd4ai-FKAXX4+E+2OqXr6D=skQ@mail.gmail.com>
References: <CAMBJEmWC-dV6p1HZ7cvkKJtE0jks-EZJJedK92KG49SnX-oTAA@mail.gmail.com>
<CAMBJEmVVsO+D6oqMr6+k=MFWSs4Krv2W23wEbzb7hErQbiQ16g@mail.gmail.com>
<CAGYFOCS+5YiTK9MS114Pj8kDQ-ZQ9ShWvn4DGR4H9RWaOceB4g@mail.gmail.com> <CAMBJEmVLfB2_VV20d4WMZS3AGd4ai-FKAXX4+E+2OqXr6D=skQ@mail.gmail.com>
From: Stian Soiland-Reyes <stain@apache.org>
Date: Thu, 26 Mar 2015 11:17:51 +0000
Message-ID: <CAMBJEmWd5m_vdz0R0AP_Zdk2bTGdJ82AH0mrFVPXXyxuL0ZEMA@mail.gmail.com>
Subject: Re: Hashcode definition
To: dev <dev@commonsrdf.incubator.apache.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
.. I would also have liked the IRI hashcode to be defined as equal to
its getIRIString() hashCode, so that a similar optimization is obvious
for the datatype.

(If all hashcodes are defined nicely as Reto suggests, then of course
you can do them all inline anyway - but if we can avoid ambiguity we
can also describe the fields to hash abstractly, e.g. as "the data
type", rather than using method names, which could be interpreted as if
hashCode MUST call those methods (e.g. to cater for subclassing).)
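
To make this concrete, here is a minimal sketch of the kind of
implementation I have in mind (SimpleIri is hypothetical, assuming only
the draft IRI interface's getIRIString(); other RDFTerm methods are
omitted):

    // Hypothetical sketch, not part of the proposed API. The contract
    // only says the hash code equals the IRI string's hash code, so an
    // implementation can hash its field directly rather than having to
    // call getIRIString().
    public final class SimpleIri {
        private final String iriString;

        public SimpleIri(String iriString) {
            this.iriString = iriString;
        }

        public String getIRIString() {
            return iriString;
        }

        @Override
        public int hashCode() {
            return iriString.hashCode();
        }

        @Override
        public boolean equals(Object other) {
            return other instanceof SimpleIri
                    && iriString.equals(((SimpleIri) other).getIRIString());
        }
    }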
On 26 March 2015 at 08:55, Stian Soiland-Reyes <stain@apache.org> wrote:
> Likewise I would also like the IRI hashcode to be equal to its iriString
> hashcode, so that a similar optimization is possible for the datatype.
>
> On 26 Mar 2015 01:52, "Peter Ansell" <ansell.peter@gmail.com> wrote:
>>
>> Hi Stian,
>>
>> It would be best not to use Optional in the hashCode definition,
>> in case people are actually storing "null" or another sentinel value
>> internally and only adding Optional on top to satisfy our API.
>> Otherwise they will need to refer to Optional each time to get the
>> hashCode, or, if they are using immutable objects, they would need to
>> refer to it for each object creation, both of which would be
>> sub-optimal if they are trying to optimise that part of their system.
>>
>> Cheers,
>>
>> Peter
>>
>>
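
(A hedged sketch of the situation Peter describes - MyLiteral and its
fields are hypothetical, not the API: an implementation can store the
language tag as a plain nullable field, return an Optional only at the
API boundary, and still produce the same hash, since Optional.hashCode()
is the present value's hash code, or 0 when empty:)

    import java.util.Optional;

    // Hypothetical implementation sketch, not part of the API.
    public final class MyLiteral {
        private final String lexicalForm;
        private final Object dataType;  // stand-in for the datatype IRI
        private final String language;  // null when there is no language tag

        public MyLiteral(String lexicalForm, Object dataType, String language) {
            this.lexicalForm = lexicalForm;
            this.dataType = dataType;
            this.language = language;
        }

        // Optional is created only to satisfy an Optional-returning API...
        public Optional<String> getLanguageTag() {
            return Optional.ofNullable(language);
        }

        @Override
        public int hashCode() {
            // ...while the hash uses the raw field. This equals
            // lexicalForm.hashCode() + dataType.hashCode()
            // + getLanguageTag().hashCode(), as an empty Optional hashes to 0.
            return lexicalForm.hashCode() + dataType.hashCode()
                    + (language == null ? 0 : language.hashCode());
        }
    }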
>> On 26 March 2015 at 12:21, Stian Soiland-Reyes <stain@apache.org> wrote:
>> > Why multiply with 5 in the IRI? Just to spread it from a hash of
>> > the IRI and its String?
>> >
>> > It means the "hashcode of the data type" in Literal can be slightly
>> > ambiguous. Perhaps "hashcode of the #getDataType() IRI"? It also
>> > hammers in through getDataType that every Literal has a datatype,
>> > e.g. it is always added to the hash.
>> >
>> > BTW, hashcode of the Optional language is conveniently compliant
>> > with "plus hash code of language if present", so no similar
>> > ambiguity there.
>> >
>> > https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html#hashCode--
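
(For reference, that Optional property is easy to check - per the
java.util.Optional javadoc, hashCode() returns the present value's hash
code, or 0 if empty:)

    import java.util.Optional;

    public class OptionalHashCheck {
        public static void main(String[] args) {
            // hash of the present value:
            System.out.println(Optional.of("en").hashCode() == "en".hashCode()); // true
            // 0 when empty - matching "plus hash code of language if present":
            System.out.println(Optional.empty().hashCode()); // 0
        }
    }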
>> > On 24 Mar 2015 12:25, "Reto Gmür" <reto@apache.org> wrote:
>> >
>> > On Mon, Mar 23, 2015 at 12:04 PM, Andy Seaborne <andy@apache.org> wrote:
>> >
>> >> On 23/03/15 10:25, Reto Gmür wrote:
>> >>
>> >>> Right now the API on Github says nothing about the identity and
>> >>> hashcode of any term. In order to be interoperable it is essential
>> >>> to define the value of hashcode and the identity conditions for
>> >>> the rdf-terms which are not locally scoped, i.e. for IRIs and
>> >>> Literals.
>> >>>
>> >>
>> >> +1
>> >>
>> >>
>> >>> I suggest taking the definitions from the clerezza rdf commons.
>> >>>
>> >>
>> >> Absent active JIRA at the moment, could you email here please?
>> >>
>> >> Given Peter is spending time on his implementation, this might be
>> >> quite useful to him.
>> >>
>> > Sure.
>> >
>> > Literal: the hash code of the lexical form plus the hash code of the
>> > datatype plus, if the literal has a language, the hash code of the
>> > language.
>> >
>> >
>> > https://git-wip-us.apache.org/repos/asf?p=clerezza-rdf-core.git;a=blob;f=api/src/main/java/org/apache/commons/rdf/Literal.java;h=cf5e1eea2d848a57e4e338a3d208f127103d39a4;hb=HEAD
>> >
>> > And the IRI: 5 + the hashcode of the string
>> >
>> >
>> > https://git-wip-us.apache.org/repos/asf?p=clerezza-rdf-core.git;a=blob;f=api/src/main/java/org/apache/commons/rdf/Iri.java;h=e1ef0f7d21a3cb668b4a3b2f2aae7e2f642b68dd;hb=HEAD
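
(Spelled out as Java, the two definitions read roughly as below - a
sketch with illustrative field names, not the actual Clerezza source:)

    // Sketch of the definitions above; field names are illustrative.
    class IriSketch {
        String iriString;

        @Override
        public int hashCode() {
            // IRI: 5 + the hashcode of the string
            return 5 + iriString.hashCode();
        }
    }

    class LiteralSketch {
        String lexicalForm;
        IriSketch dataType;
        String language; // null when the literal has no language

        @Override
        public int hashCode() {
            // Literal: lexical form hash plus datatype hash,
            // plus the language hash if the literal has a language
            int hash = lexicalForm.hashCode() + dataType.hashCode();
            if (language != null) {
                hash += language.hashCode();
            }
            return hash;
        }
    }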
>> >
>> > Reto
>> >
>> >> Andy
>> >>
>> >>
>> >>
>> >>> Reto
>> >>>
>> >>> On Mon, Mar 23, 2015 at 10:18 AM, Stian Soiland-Reyes
>> >>> <stain@apache.org>
>> >>> wrote:
>> >>>
>> >>>> OK - I can see that settling BlankNode equality can take some
>> >>>> more time (also considering the SPARQL example).
>> >>>>
>> >>>> So then we must keep the "internalIdentifier" and the abstract
>> >>>> concept
>> >>>> of the "local scope" for the next release.
>> >>>>
>> >>>> In which case this one should also be applied:
>> >>>>
>> >>>> https://github.com/commons-rdf/commons-rdf/pull/48/files
>> >>>> and perhaps:
>> >>>> https://github.com/commons-rdf/commons-rdf/pull/61/files
>> >>>>
>> >>>>
>> >>>>
>> >>>> I would then need to fix the simple GraphImpl.add() to clone and
>> >>>> change the local scope of the BlankNodes:
>> >>>> .. as otherwise it would wrongly merge graph1.b1 and graph2.b1
>> >>>> (both having the same internalIdentifier and the abstract Local
>> >>>> Scope of being in the same Graph). This can happen if doing, say,
>> >>>> a copy from one graph to another.
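
(A sketch of the kind of cloning I mean - all names here are made up
for illustration; this is not the current simple GraphImpl code:)

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a graph maps each foreign BlankNode to a
    // fresh node scoped to this graph, so graph1.b1 and graph2.b1 stay
    // distinct even when they share an internalIdentifier.
    class RescopingGraph {
        interface BlankNode {}
        static final class ScopedBlankNode implements BlankNode {}

        private final Map<BlankNode, BlankNode> rescoped = new HashMap<>();

        BlankNode rescope(BlankNode foreign) {
            // The same foreign node always maps to the same local node,
            // so triples added together still share their blank node.
            return rescoped.computeIfAbsent(foreign, k -> new ScopedBlankNode());
        }
    }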
>> >>>>
>> >>>> Raised and detailed in
>> >>>> https://github.com/commons-rdf/commons-rdf/issues/66
>> >>>> .. adding this to the tests sounds crucial, and would help us later
>> >>>> when sorting this.
>> >>>>
>> >>>>
>> >>>> This is in no way a complete resolution. (New bugs would arise,
>> >>>> e.g. you could add a triple with a BlankNode and then not be able
>> >>>> to remove it afterwards with the same arguments.)
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> On 22 March 2015 at 21:00, Peter Ansell <ansell.peter@gmail.com>
>> >>>> wrote:
>> >>>>
>> >>>>> +1
>> >>>>>
>> >>>>> Although it is not urgent to release a 1.0 version, it is urgent
>> >>>>> to release (and keep releasing often) what we have changed since
>> >>>>> 0.0.2 so we can start experimenting with it, particularly since I
>> >>>>> have started working more intently on Sesame 4 in the last few
>> >>>>> weeks. Stian's pull requests to change the BNode situation could
>> >>>>> wait until after 0.0.3 is released, at this point.
>> >>>>>
>> >>>>> Cheers,
>> >>>>>
>> >>>>> Peter
>> >>>>>
>> >>>>> On 21 March 2015 at 22:37, Andy Seaborne <andy@apache.org> wrote:
>> >>>>>
>> >>>>>> I agree with Sergio that releasing something is important.
>> >>>>>>
>> >>>>>> We need to release, then independent groups can start to build on
>> >>>>>> it.
>> >>>>>> We
>> >>>>>> have grounded requirements and a wider community.
>> >>>>>>
>> >>>>>> Andy
>> >>>>>>
>> >>>>>>
>> >>>>>> On 21/03/15 09:10, Reto Gmür wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>> Hi Sergio,
>> >>>>>>>
>> >>>>>>> I don't see where an urgent agenda comes from. Several RDF APIs
>> >>>>>>> are there, so a new API essentially needs to be better rather
>> >>>>>>> than done with urgency.
>> >>>>
>> >>>>>
>> >>>>>>> The SPARQL implementation is less something that needs to be
>> >>>>>>> part of the first release and more something that helps
>> >>>>>>> validate the API proposal. We should validate our API against
>> >>>>>>> many possible usecases and then discuss which are more
>> >>>>>>> important to support. In my opinion, for an RDF API it is more
>> >>>>>>> important that it can be used with remote repositories over
>> >>>>>>> standard protocols than to support hadoop-style processing
>> >>>>>>> across many machines [1], but maybe we can support both
>> >>>>>>> usecases.
>> >>>>>>>
>> >>>>>>> In any case I think it's good to have prototypical
>> >>>>>>> implementations of usecases to see which API features are
>> >>>>>>> needed and which are problematic. So I would encourage writing
>> >>>>>>> prototype usecases where hadoop-style processing shows the need
>> >>>>>>> for an exposed blank node ID, or a prototype showing that IRI
>> >>>>>>> is better as an interface than a class, etc.
>> >>>>>>>
>> >>>>>>> In the end we need to decide on the API features based on the
>> >>>>>>> usecases they are required by, or compatible with, respectively.
>> >>>>>>> But it's hard to see the requirements without prototypical code.
>> >>>>>>>
>> >>>>>>> Cheers,
>> >>>>>>> Reto
>> >>>>>>>
>> >>>>>>> 1. https://github.com/commons-rdf/commons-rdf/pull/48#issuecomment-72689214
>> >>>>>>>
>> >>>>>>> On Fri, Mar 20, 2015 at 8:30 PM, Sergio Fernández
>> >>>>>>> <wikier@apache.org>
>> >>>>>>> wrote:
>> >>>>>>>
>> >>>>>>>> I perfectly understand what you target. But still, FMPOV it is
>> >>>>>>>> out of our urgent agenda. Not because it is not interesting,
>> >>>>>>>> just because there are more urgent things to deal with. I
>> >>>>>>>> think the most important thing is to get running with what we
>> >>>>>>>> have, and get a release out. But, as I said, we can discuss it.
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> On 20/03/15 19:10, Reto Gmür wrote:
>> >>>>>>>>
>> >>>>>>>>> Just a little usage example to illustrate Stian's point:
>> >>>>>>>>>
>> >>>>>>>>> public class Main {
>> >>>>>>>>>     public static void main(String... args) {
>> >>>>>>>>>         Graph g = new SparqlGraph("http://dbpedia.org/sparql");
>> >>>>>>>>>         Iterator<Triple> iter = g.filter(
>> >>>>>>>>>                 new Iri("http://dbpedia.org/ontology/Planet"),
>> >>>>>>>>>                 new Iri("http://www.w3.org/1999/02/22-rdf-syntax-ns#type"),
>> >>>>>>>>>                 null);
>> >>>>>>>>>         while (iter.hasNext()) {
>> >>>>>>>>>             System.out.println(iter.next().getObject());
>> >>>>>>>>>         }
>> >>>>>>>>>     }
>> >>>>>>>>> }
>> >>>>>>>>>
>> >>>>>>>>> I think with Stian's version using streams the above could be
>> >>>>>>>>> shorter and nicer. But the important part is that the above
>> >>>>>>>>> allows using dbpedia as a graph without worrying about sparql.
>> >>>>>>>>>
>> >>>>>>>>> Cheers,
>> >>>>>>>>> Reto
>> >>>>>>>>>
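
(For comparison, a hedged sketch of the stream-based variant Reto
mentions above, assuming a Stream-returning getTriples(s, p, o) - still
under discussion, not settled API - plus the SparqlGraph and Iri types
from his example:)

    public class StreamMain {
        public static void main(String... args) {
            Graph g = new SparqlGraph("http://dbpedia.org/sparql");
            // getTriples(...) is assumed to return a Stream<Triple>
            g.getTriples(new Iri("http://dbpedia.org/ontology/Planet"),
                    new Iri("http://www.w3.org/1999/02/22-rdf-syntax-ns#type"),
                    null)
                .map(Triple::getObject)
                .forEach(System.out::println);
        }
    }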
>> >>>>>>>>> On Fri, Mar 20, 2015 at 4:16 PM, Stian Soiland-Reyes
>> >>>>>>>>> <stain@apache.org> wrote:
>> >>>>>>>>>
>> >>>>>>>>>> I think a query interface as you say is orthogonal to Reto's
>> >>>>>>>>>> impl.sparql module - which is trying to be an implementation
>> >>>>>>>>>> of RDF Commons that is backed only by a remote SPARQL
>> >>>>>>>>>> endpoint. Thus it touches on important edges like streaming
>> >>>>>>>>>> and blank node identities.
>> >>>>>>>>>>
>> >>>>>>>>>> It's not a SPARQL endpoint backed by RDF Commons! :-)
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>> On 20 March 2015 at 10:58, Sergio Fernández <wikier@apache.org>
>> >>>>>>>>>> wrote:
>> >>>>>>>>>>
>> >>>>>>>>>>> Hi Reto,
>> >>>>>>>>>>>
>> >>>>>>>>>>> yes, that was a deliberate decision in the early phases. I'd
>> >>>>>>>>>>> need to look it up, I do not remember the concrete issue.
>> >>>>>>>>>>>
>> >>>>>>>>>>> Just going a bit deeper into the topic: in querying we are
>> >>>>>>>>>>> talking not only about providing native support to query a
>> >>>>>>>>>>> Graph instance, but also about providing common interfaces
>> >>>>>>>>>>> to interact with the results.
>> >>>>>>>>>>>
>> >>>>>>>>>>> The idea was to keep the focus on RDF 1.1 concepts before
>> >>>>>>>>>>> moving to query. Personally I'd prefer to keep that scope
>> >>>>>>>>>>> for the first incubator release, and then start to open
>> >>>>>>>>>>> discussions about such kinds of threads. But of course we
>> >>>>>>>>>>> can vote to change that approach.
>> >>>>>>>>>>>
>> >>>>>>>>>>> Cheers,
>> >>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>>>> On 17/03/15 11:05, Reto Gmür wrote:
>> >>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>>>>> Hi Sergio,
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> I'm not sure which deliberate decision you are referring
>> >>>>>>>>>>>> to, is it Issue #35 in Github?
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> Anyway, the impl.sparql code is not about extending the API
>> >>>>>>>>>>>> to allow running queries on a graph; in fact the API isn't
>> >>>>>>>>>>>> extended at all. It's an implementation of the API which is
>> >>>>>>>>>>>> backed by a SPARQL endpoint. Very often the triple store
>> >>>>>>>>>>>> doesn't run in the same VM as the client, and so it is
>> >>>>>>>>>>>> necessary that implementations of the API speak to a remote
>> >>>>>>>>>>>> triple store. This can use some proprietary protocol or
>> >>>>>>>>>>>> standard SPARQL; this is an implementation for SPARQL and
>> >>>>>>>>>>>> can thus be used against any SPARQL endpoint.
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> Cheers,
>> >>>>>>>>>>>> Reto
>> >>>>>>>>>>>>
>> >>>>>>>>>>>>
>> >>>>>>>>>>>>
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> On Tue, Mar 17, 2015 at 7:41 AM, Sergio Fernández
>> >>>>>>>>>>>> <wikier@apache.org> wrote:
>> >>>>>>>>>>>>
>> >>>>>>>>>>>>> Hi Reto,
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> thanks for updating us with the status from Clerezza.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> In the current Commons RDF API we deliberately skipped
>> >>>>>>>>>>>>> querying for the early versions.
>> >>>>>>>>>>>
>> >>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Although I'd prefer to keep this approach in the initial
>> >>>>>>>>>>>>> steps at ASF (I hope we can import the code soon...),
>> >>>>>>>>>>>>> that's for sure one of the next points to discuss in the
>> >>>>>>>>>>>>> project, where all that experience is valuable.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>>>> Cheers,
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> On 16/03/15 13:02, Reto Gmür wrote:
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Hello,
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> With the new repository, the clerezza rdf commons
>> >>>>>>>>>>>>>> previously in the commons sandbox are now at:
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> https://git-wip-us.apache.org/repos/asf/clerezza-rdf-core.git
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> I will compare that code with the current status of the
>> >>>>>>>>>>>>>> code in the incubating rdf-commons project in a later
>> >>>>>>>>>>>>>> mail.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Now I would like to bring to your attention a big step
>> >>>>>>>>>>>>>> forward towards CLEREZZA-856. The impl.sparql modules
>> >>>>>>>>>>>>>> provide an implementation of the API on top of a SPARQL
>> >>>>>>>>>>>>>> endpoint. Currently it only supports read access.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> For usage examples see the tests in
>> >>>>>>>>>>>>>> /src/test/java/org/apache/commons/rdf/impl/sparql
>> >>>>>>>>>>>>>> (https://git-wip-us.apache.org/repos/asf?p=clerezza-rdf-core.git;a=tree;f=impl.sparql/src/test/java/org/apache/commons/rdf/impl/sparql;h=cb9c98bcf427452392e74cd162c08ab308359c13;hb=HEAD)
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> The hard part was supporting BlankNodes. The current
>> >>>>>>>>>>>>>> implementation handles them correctly even in tricky
>> >>>>>>>>>>>>>> situations; however, the current code is not optimized
>> >>>>>>>>>>>>>> for performance yet. As soon as BlankNodes are involved,
>> >>>>>>>>>>>>>> many queries have to be sent to the backend. I'm sure
>> >>>>>>>>>>>>>> some SPARQL wizard could help make things more efficient.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Since SPARQL is the only standardized method to query
>> >>>>>>>>>>>>>> RDF data, I think being able to façade an RDF Graph
>> >>>>>>>>>>>>>> accessible via SPARQL is an important usecase for an RDF
>> >>>>>>>>>>>>>> API, so it would be good to also have a SPARQL-backed
>> >>>>>>>>>>>>>> implementation of the API proposal in the incubating
>> >>>>>>>>>>>>>> commons-rdf repository.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Cheers,
>> >>>>>>>>>>>>>> Reto
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> --
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Sergio Fernández
>> >>>>>>>>>>>>> Partner Technology Manager
>> >>>>>>>>>>>>> Redlink GmbH
>> >>>>>>>>>>>>> m: +43 660 2747 925
>> >>>>>>>>>>>>> e: sergio.fernandez@redlink.co
>> >>>>>>>>>>>>> w: http://redlink.co
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>> --
>> >>>>>>>>>>> Sergio Fernández
>> >>>>>>>>>>> Partner Technology Manager
>> >>>>>>>>>>> Redlink GmbH
>> >>>>>>>>>>> m: +43 660 2747 925
>> >>>>>>>>>>> e: sergio.fernandez@redlink.co
>> >>>>>>>>>>> w: http://redlink.co
>> >>>>>>>>>>>
>> >>>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>> --
>> >>>>>>>>>> Stian Soiland-Reyes
>> >>>>>>>>>> Apache Taverna (incubating), Apache Commons RDF (incubating)
>> >>>>>>>>>> http://orcid.org/0000-0001-9842-9718
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>> --
>> >>>>>>>> Sergio Fernández
>> >>>>>>>> Partner Technology Manager
>> >>>>>>>> Redlink GmbH
>> >>>>>>>> m: +43 660 2747 925
>> >>>>>>>> e: sergio.fernandez@redlink.co
>> >>>>>>>> w: http://redlink.co
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Stian Soiland-Reyes
>> >>>> Apache Taverna (incubating), Apache Commons RDF (incubating)
>> >>>> http://orcid.org/0000-0001-9842-9718
>> >>>>
>> >>>>
>> >>>
>> >>
-- 
Stian Soiland-Reyes
Apache Taverna (incubating), Apache Commons RDF (incubating)
http://orcid.org/0000-0001-9842-9718