Propose better JSON-LD processing text #1302

Closed
wants to merge 30 commits into from
Changes from 7 commits
8 changes: 8 additions & 0 deletions common.js
@@ -93,6 +93,14 @@ var vcwg = {
status: 'CG-DRAFT',
publisher: 'Credentials W3C Community Group'
},
// TODO: replace all references to RDF-NORMALIZATION with this.
'RDF-CANON': {
title: 'RDF Dataset Canonicalization',
href: 'https://www.w3.org/TR/rdf-canon/',
authors: ['Dave Longley', 'Gregg Kellogg', 'Dan Yamamoto'],
status: 'WG-DRAFT',
publisher: 'RDF Dataset Canonicalization and Hash Working Group'
},
'DEMOGRAPHICS': {
title: 'Simple Demographics Often Identify People Uniquely',
href: 'https://dataprivacylab.org/projects/identifiability/paper1.pdf',
108 changes: 78 additions & 30 deletions index.html
@@ -4240,47 +4240,54 @@ <h2>HTTP</h2>
</section>

<section class="informative">
<h2>JSON Processing</h2>

<h2>JSON-LD Processing</h2>
<p>
While the media types describing conforming documents defined in this
specification always express JSON-LD, JSON-LD processing is not required to be
performed, since JSON-LD is JSON. Some scenarios where processing a
<a>verifiable credential</a> or a <a>verifiable presentation</a> as JSON is
specification always express JSON-LD, RDF processing is not required to be
performed, since JSON-LD is a concrete RDF syntax as described in [RDF11-CONCEPTS].
Hence, a JSON-LD document is both an RDF document and a JSON document and correspondingly represents an instance of an RDF data model.
See <a href="https://www.w3.org/TR/json-ld11/#relationship-to-rdf">Relationship to RDF</a> for more details.
Member:
The notion of "RDF processing" is not defined, and it can mean many things. Better not go there. Besides, it took me some time to understand what (I think) you meant in this section... What about a rewrite of the whole paragraph along the lines of:

While the media types describing conforming documents defined in this specification all express JSON-LD, developers should be aware that JSON-LD is a serialization of the abstract RDF Model [[RDF11-CONCEPTS]]. Hence, a JSON-LD document is both a JSON document and the representation of an abstract RDF Dataset. See Relationship to RDF for more details.

Also, in view of what you write later, I have the feeling that you have in mind two type of "processing":

  • "RDF Processing" when you transform the JSON-LD into RDF using the JSON-LD rules
  • "JSON-LD Processing" when you operate on the JSON level only.

if that is correct, this may be the place to define these, because otherwise the text below is very unclear. That being said, I do not like the two terms; for me, "JSON-LD Processing" means to perform the processing steps defined in the JSON-LD 1.1 Processing Algorithms and API specification, and this seems to contradict this.

Contributor Author:
^ this is correct, my goal is to basically say this:

  1. You need to care about JSON-LD
  2. You can care about it and ignore the context, you will have a bad time.
  3. You can care about it and pay attention to the context, and W3C or any other 3rd party context host can cause you to have a bad time (by changing the context after you commit to it).
  4. In the case that all contexts are immutable and always available, you can still have equivocation, but at least its limited to differences between RDF and JSON, not RDF and JSON-LD (where there will be NO equivocation in this very special case).

@dlongley (Contributor), Oct 14, 2023:
You can care about it and ignore the context, you will have a bad time.

No, this is not a thing. You never "ignore" @context. You can hard-code your application to only work with some specific contexts, that's fine. But you don't "ignore" @context; that is not at all the same thing. Perhaps this is part of a core misunderstanding here?

The comments about context immutability are a different kind of issue. We can have text here that says: "if you're going to hard-code to specific contexts, they must be immutable in order to guarantee the same interpretation as someone who doesn't hard-code". That is a simple fact and encouraging the use of immutable contexts for VCs is a good thing, IMO.

</p>
<p>
Some scenarios where processing a <a>verifiable credential</a> or a <a>verifiable presentation</a> as RDF is
desirable include, but are not limited to:
Contributor:
Suggested change
Some scenarios where processing a <a>verifiable credential</a> or a <a>verifiable presentation</a> as RDF is
desirable include, but are not limited to:
Some scenarios where limiting processing to a limited set of expected
<a>verifiable credentials</a> or a <a>verifiable presentations</a> is
desirable include, but are not limited to:

@OR13 (Contributor Author), Oct 13, 2023:
I'd take "Some scenarios where limiting RDF processing"... but I won't take removal of RDF entirely.

</p>

<ul>
<li>
Before securing or after verifying content
that requires <a href="https://csrc.nist.gov/glossary/term/data_integrity">data
integrity</a>, such as a
<a>verifiable credential</a> or <a>verifiable presentation</a>.
Before securing or after verifying <a>verifiable credentials</a> or <a>verifiable presentations</a>,
in cases where the issuer has leveraged the <code>@context</code> to provide details regarding
their intended expression of the <a href="#verifiable-credential-graphs">Verifiable Credential Graphs</a>.
Member:
It is not clear what "intended expression of the Verifiable Credential Graphs" means, or how this is different from "Minimal JSON-LD Processing" (which is what we should rename the JSON Processing section to, IMHO).

Perhaps we should have these sections: "JSON-LD Processing", which contains two sub-sections: "Processing without a JSON-LD Library" and "Processing as RDF"?

Contributor Author:
I like this suggestion, I will update to it

</li>
<li>
When performing JSON Schema validation, as described in Section
<a href="#data-schemas"></a>.
After performing JSON Schema validation, as described in Section
<a href="#data-schemas"></a>, in order to mitigate attacks
on the canonicalization process as described in <a href="https://w3c.github.io/rdf-canon/spec/#dataset-poisoning">Dataset Poisoning</a>
Member:
Two comments on the same section

  • I am not sure why JSON schema is relevant in this respect. I guess the issue is that if the issuer decided to use the RDFC 1.0 algorithm for its security related processing, and that is completely orthogonal whether he/she also uses JSON schemas. The two are unrelated
  • RDFC 1.0 takes care of the dataset poisoning issue, so "mitigate" may not be appropriate here. The bullet point may be:
Suggested change
After performing JSON Schema validation, as described in Section
<a href="#data-schemas"></a>, in order to mitigate attacks
on the canonicalization process as described in <a href="https://w3c.github.io/rdf-canon/spec/#dataset-poisoning">Dataset Poisoning</a>
In order to fine tune the <a href="https://www.w3.org/TR/rdf-canon/#canon-algo-algo">defense against the dataset poisoning issue</a> built-in the RDFC 1.0 algorithm. [[RDF-CANON]]

Member:
I saw now @dlongley's comment that the whole bullet point should be removed in view of the recent changes in RDFC; I would be o.k. with that, too.

Contributor Author:
I think it's better to give direct security advice here; conforming documents are JSON, and schema validation can protect against poisoning... telling people to read the RDF spec without spelling this out is harmful to the security posture of our document.

Member:
Dataset poisoning is an issue in general but, I have the impression, it is very rarely an issue with v. credentials or presentations, and the text we are writing here relates to those. (It has been my suspicion for a long time that a VC/VP dataset is, from an RDF point of view, "well-behaved" 90% of the time, which means that the "tricky" part of the RDFC 1.0 algorithm, that may run amok, is not applied in the first place. But I cannot prove that.) The VCDM spec does not even talk about defining a blank node explicitly, something that is usually done to create poisoned datasets.

(Let alone the fact that, per RDFC 1.0 spec, each implementation may have its own way of defending itself against poisoned datasets, whose parameters (that the user may fiddle with) are therefore also implementation dependent. The RDFC standard only stipulates that a defense must exist and have a reasonable default parameter setting; the test suite does include a few poisoned datasets that the implementation should pass.)

As a conclusion, I do not believe that going to the details of the RDFC 1.0 is really an issue for this text.

Member:
@OR13, please respond to this.

Contributor Author:
Decline to implement feedback, but perhaps a stronger wording regarding the security impact would be acceptable

</li>
<li>
When serializing or deserializing <a>verifiable credentials</a> or
<a>verifiable presentations</a> into systems that store or index their contents.
<a>verifiable presentations</a> into systems that store or index claims about subjects in
<code>text/turtle</code>, <code>application/n-quads</code>, <code>application/ld+json</code>.
</li>
<li>
When operating on <a>verifiable credentials</a> or <a>verifiable
presentations</a> in a software application, after verification or validation
is performed for securing mechanisms that require an understanding of
and/or processing of JSON-LD.
When validating <a>verifiable credentials</a> or <a>verifiable
presentations</a> in a software application, after verification
is performed for securing mechanisms that do not require an understanding
or processing of JSON-LD.
</li>
<li>
When an application chooses to process the media type using the `+json`
When an application chooses to process the media type using the `+ld+json`
structured media type suffix.
</li>
</ul>
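The suffix-based dispatch in the last bullet can be sketched as follows; `hasJsonLdSuffix` is a hypothetical helper for illustration, not an API defined by this specification:

```javascript
// Minimal sketch (hypothetical helper): decide whether a media type signals
// JSON-LD content via its structured syntax suffix, so an application can
// choose a JSON-LD-aware code path for e.g. application/vc+ld+json while
// treating bare +json types as plain JSON.
function hasJsonLdSuffix(mediaType) {
  // Drop any parameters (e.g. ";charset=utf-8") before inspecting the type.
  const base = mediaType.split(';')[0].trim().toLowerCase();
  return base === 'application/ld+json' || base.endsWith('+ld+json');
}
```
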

<p>
That is, JSON processing is allowed as long as the document being consumed or
produced is a <a>conforming document</a>. If JSON processing is desired, an
implementer is advised to follow the following rule:
Naive JSON processing is allowed as long as the document being consumed or
produced is a <a>conforming document</a>, however, this can lead to interoperability issues
and confusion about the specific information that has been secured between JSON-LD and RDF processors.
@dlongley (Contributor), Oct 5, 2023:
No, "Naive JSON processing" is never allowed ... with any JSON. I guess I don't know what this means. When you are processing JSON ... or anything generally, you shouldn't be naive. There is also no special danger of confusion or deviation of interpretation from generalized processors if the rules are followed, so we should focus there.

An attempt at rewording:

Suggested change
Naive JSON processing is allowed as long as the document being consumed or
produced is a <a>conforming document</a>, however, this can lead to interoperability issues
and confusion about the specific information that has been secured between JSON-LD and RDF processors.
<a>Conforming documents</a> that are expressed using contexts and shapes that are
well-known to a consuming application can be consumed like any other JSON
document, that is, according to the rules of its associated specifications and interpreted
according to the <a>issuers</a> intent. <a>Issuers</a> might use additional
contexts in <code>@context</code> to provide details regarding the expression of
their <a>verifiable credential</a>, and consumers are advised to follow these rules
to ensure that the document matches their accepted contexts and document shapes:

Contributor Author:
I don't think your suggestion makes it clearer. The issuer is required to use @context, since it's required in conforming documents.

Perhaps you mean additional context beyond the v2 core context? Can you adjust your suggestion to take that into account?

Contributor:
I've made an adjustment to say "Issuers might use additional contexts... and consumers are advised...".

Member:
@OR13 please respond to this comment.

Contributor Author:
Need to define "shapes that are well-known", assuming some version of this PR will land:

#1320

I can accept this suggestion as it exists.

Contributor:
@OR13,

I can accept this suggestion as it exists.

Yes, please, thanks!

</p>
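The hard-coding approach debated in this thread, checking a document's `@context` against the exact list of contexts the consumer understands, might look like the following minimal sketch; the helper name and the second context URL are illustrative assumptions:

```javascript
// Minimal sketch (hypothetical helper): a consumer that only accepts
// documents whose @context exactly matches, in order, the contexts it
// has hard-coded and understands.
const ACCEPTED_CONTEXTS = [
  'https://www.w3.org/ns/credentials/v2',
  'https://www.w3.org/ns/credentials/examples/v2' // illustrative addition
];

function hasAcceptedContext(document) {
  // @context may be a single string or an array of context values.
  const ctx = Array.isArray(document['@context'])
    ? document['@context']
    : [document['@context']];
  return ctx.length === ACCEPTED_CONTEXTS.length &&
    ctx.every((value, i) => value === ACCEPTED_CONTEXTS[i]);
}
```

As the review comments note, this is not "ignoring" `@context`; it is refusing documents that use contexts the application does not understand.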

<p>
Implementers unfamiliar with JSON-LD / RDF are advised:
</p>

<ul>
@@ -4297,19 +4304,60 @@ <h2>JSON Processing</h2>
implementing the rule above. This can ensure proper term identification,
typing, and order, when a JSON document is processed as JSON-LD.
</p>
<p>
However, the intention of the issuer is not preserved, unless the <code>@context</code>
is applied, and the resulting <a href="#verifiable-credential-graphs">Verifiable Credential Graphs</a>,
and their exact RDF types, and node and edge structure is understood by the verifier.
Member:
Suggested change
However, the intention of the issuer is not preserved, unless the <code>@context</code>
is applied, and the resulting <a href="#verifiable-credential-graphs">Verifiable Credential Graphs</a>,
and their exact RDF types, and node and edge structure is understood by the verifier.

No, the intention of the issuer is preserved... because it's a conforming document. The issuer produced a conforming document, the verifier is consuming that conforming document.

Contributor Author:
No, because in the case the issuer included structure in the context, and a verifier ignores the context, the verifier ignores the issuers intent.

The verifier has no way to know if the issuer meant for them to ignore the context or not.

@dlongley (Contributor), Oct 14, 2023:
No, because in the case the issuer included structure in the context, and a verifier ignores the context, the verifier ignores the issuers intent.

The verifier has no way to know if the issuer meant for them to ignore the context or not.

No verifier can ignore @context; that is not conformant processing. You can only consume properties from contexts you understand. If that's only the core context, that's fine -- but you can't read properties from other contexts unless you understand them.

Member:
@OR13, thoughts?

Contributor Author:
Decline to implement feedback, although as I said in the other comments, while the current language is effective, it might be improved in a future editorial PR.

</p>

<p>
The rule above guarantees semantic interoperability between JSON and JSON-LD for
literal JSON keys mapped to URIs by the `@context` mechanism. While JSON-LD
processors will use the specific mechanism provided and can verify that all
terms are correctly specified, JSON-based processors implicitly accept the same
semantics without performing any JSON-LD transformations, but instead by
These rules alone do not guarantee semantic interoperability between JSON and JSON-LD for
literal JSON keys mapped to URIs by the `@context` mechanism.
Contributor:
I think this is true now with some clean up above and it's simpler:

Suggested change
These rules alone do not guarantee semantic interoperability between JSON and JSON-LD for
literal JSON keys mapped to URIs by the `@context` mechanism.
The rules above guarantee semantic interoperability between applications
that are hard-coded to specific contexts and applications that more
generally consume JSON-LD documents.

Contributor Author:
This is not true, but could be true modulo some defense language around contexts changing, and it would be nice to say it this way.

Contributor Author:
@dlongley @msporny I am not intending to accept this suggestion. In the interest of getting this PR to a mergeable state, I suggest we resolve any threads that we feel don't need further discussion on the PR; if you need issue markers to resolve a thread, please file issues, and I will address all issues filed in a subsequent PR.

</p>
While some RDF processors will use the language features of JSON-LD and can verify that all
terms are correctly specified, JSON-LD processors explicitly accept the same
semantics without performing any RDF transformations, but instead by
applying the above rules. In other words, the context in which the data exchange
happens is explicitly stated for both JSON and JSON-LD by using the same
mechanism. With respect to JSON-based processors, this is achieved in a
lightweight manner, without having to use JSON-LD processing libraries.
happens is explicitly stated for both RDF and JSON-LD by using the same
mechanism. With respect to JSON-LD-based processors, this is achieved in a
lightweight manner, without having to use RDF processing libraries.
Comment on lines +4334 to +4335
Member:
Suggested change
mechanism. With respect to JSON-LD-based processors, this is achieved in a
lightweight manner, without having to use RDF processing libraries.
mechanism. JSON-LD-based processors achieve this in a lightweight manner,
without having to use RDF processing libraries.

</p>
</section>
<section>
<h2>Advanced RDF Processing</h2>
@iherman (Member), Oct 6, 2023:
Can we find an alternative subtitle? There is nothing "Advanced" in what is in the section, at least not from an RDF point of view...

Member:
I'd suggest we remove this subsection. It's not clear what "expressions of intention" would actually be. The closing paragraph in this section also states that...

Verifiers that do not understand or process JSON-LD, will not be aware of these differences, and confusion over the intention of the issuer and the holder could lead to unexpected processing behavior by verifiers.

The claims being made in the separate graphs are no more "separate" conceptually in JSON than in RDF and any "intention" toward their use would be on the Verifiers (or whomever) to "unite" as/when desired.

The other items about id/type and value ordering introduce more confusion than value in their current form. Folks familiar with processing RDF out of JSON-LD will know about aliasing and value ordering, and (as @iherman mentions) there's nothing "advanced" about it...just par for the course when using term aliasing and graph data.

Contributor Author:
The intention was to clearly communicate that there is "JSON-LD processing" where you don't apply the context (This was previously called "JSON Processing"... and there is JSON-LD processing, where you do apply the context (this is currently titled "Advanced RDF Processing".).

I'm not attached to the names, I am attached to signaling that both processing rules are in the context of RDF.

Member:
I would not call the first "JSON-LD Processing", because it is very misleading. The term "JSON-LD Processing" is a term coined by the JSON-LD WG and has a well-defined meaning. It is actually closer to what you call "Advanced RDF Processing".

What we describe in the spec, handling VC files in JSON-LD format but without full JSON-LD Processing, is, though fairly general, proprietary to the VC specification. Maybe using something like "Basic VC Processing" would do.

Contributor Author:
Manu suggested calling it RDF processing, I think that's a fair compromise... it's the "JSON-LD... but with transformations that care about effort put into the context" processing kind...

Member:
@dlongley suggested "Static Processing" vs. "Dynamic Processing"... where "Static Processing" covers things like "Use a JSON Schema to check the well-formed-ness of the payload and then process it like you would a JSON payload... and where "Dynamic Processing" is using a JSON-LD processor to compact/expand or convert to NQuads.

Thoughts?

Member:
I am not sure about the "Dynamic" vs. "Static". I do not lie down on the road over this, but these two terms do not speak to me in this context.

<p>
Issuers should be aware that while conforming documents are expressed as compact JSON-LD,
not all Holders or Verifiers will understand expressions of intention that are only visible
after RDF processing has occurred.
Comment on lines +4342 to +4343
Member:
Suggested change
not all Holders or Verifiers will understand expressions of intention that are only visible
after RDF processing has occurred.
some <a>holders</a> and <a>verifiers</a> will not understand expressions of
intention that are only visible after RDF processing has occurred.

</p>
<p>
Some of the best features of JSON-LD are only possible by applying its language features to the
terms defined in the <code>@context</code>.
Comment on lines +4346 to +4347
Member:
Suggested change
Some of the best features of JSON-LD are only possible by applying its language features to the
terms defined in the <code>@context</code>.
Some of the most powerful features of JSON-LD are only accessible by applying
its language features to the terms mapped in the <code>@context</code>.

</p>
<p>
Implementers should be aware that <code>id</code> and <code>type</code> have been
aliased to <code>@id</code> and <code>@type</code>, and are marked <code>@protected</code>.
Member:
While this statement is correct, the usage of @protected is way more far-reaching than that: it is used to protect all terms defined in this specification. That is what should be emphasized, not only these two imho.

Contributor Author:
I agree, do you have any concrete text, or maybe it's better for us to just refer to https://www.w3.org/TR/json-ld11/#protected-term-definitions ?

Member:
I think the JSON-LD spec alone is too terse for those who would really dive into this section. Something like

"Implementers should also be aware that all terms, defined in this specification, are marked as @protected [link to the JSON-LD spec], meaning that subsequent, application specific context files would not be able to change them."

I leave the details to native Anglo-Saxon speakers... :-)

Contributor Author:
I think we addressed this, and we'll need to resolve things that are addressed so I can process the remaining blockers. can you confirm?

Member:
Where is this addressed? The text has not changed...

To be more precise: in §6.1.1 of the spec, the role of @protected is indeed explained (in the last bullet), so is the fact that @id and @type are aliased. In fact, this paragraph does not add any information that would not be explicitly stated in §6.1.1. Maybe the best is to remove it altogether.

Depending on the software libraries used, attempts to redefine these terms may raise processing errors.
Implementers interested in understanding what <code>@id</code> means
should review <a href="https://www.w3.org/TR/json-ld11/#node-identifiers">Node Identifiers</a>, and
<a href="https://www.w3.org/TR/json-ld11/#specifying-the-type">Specifying the Type</a>.
</p>
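The aliasing and protection described above can be illustrated with a context fragment like the following sketch; it is an assumption for illustration, not a verbatim excerpt of the `https://www.w3.org/ns/credentials/v2` context:

```javascript
// Illustrative context fragment (not the actual v2 context): id and type
// are aliased to the JSON-LD keywords @id and @type, and @protected
// prevents later, application-specific contexts from redefining the terms;
// depending on the library, attempts to redefine them raise errors.
const contextFragment = {
  '@protected': true,
  'id': '@id',
  'type': '@type'
};
```
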
<p>
Implementers should be aware that object member, and array element order are not preserved by default in JSON-LD,
and as such, Issuers intending to communicate array order need to leverage the language features of JSON-LD.
Member:
Suggested change
Implementers should be aware that object member, and array element order are not preserved by default in JSON-LD,
and as such, Issuers intending to communicate array order need to leverage the language features of JSON-LD.
Implementers are advised that object member, and array element order are not preserved by default in JSON-LD,
and as such, Issuers intending to communicate array order need to leverage the language features of JSON-LD.

See <a href="https://www.w3.org/TR/json-ld11/#sets-and-lists">Value Ordering</a>.
</p>
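A term definition using `@container: @list`, the JSON-LD feature the paragraph refers to for preserving array order, might look like this sketch; the term name and vocabulary IRI are hypothetical:

```javascript
// Hypothetical term definition: JSON-LD treats plain arrays as unordered
// sets, so an issuer that needs element order to survive RDF conversion
// can declare the term with @container: @list.
const orderedTermContext = {
  'steps': {
    '@id': 'https://example.org/vocab#steps', // hypothetical vocabulary IRI
    '@container': '@list'
  }
};
```
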
<p>
Implementers should be aware that while naive JSON interpretation of claims might make it appear that claims are related,
the <code>https://www.w3.org/ns/credentials/v2</code> leverages <code>@container</code> and <code>@graph</code> to separate information graphs.
In particular, to separate the graphs related to <a>verifiable credentials</a>, <a>verifiable presentations</a>
Member:
AFAIK there is no property marked as "@container":"@graph" for VP-s. Only credentials and proofs are "separated" in their own graphs.

Member:
What those mean is that a Verifiable Credential, resp. a proof, is put into a separate graph. You are right that this means, in practice, that the Verifiable Presentation that refers to the credential is therefore kept in its own (default) graph, but the sentence is nevertheless misleading.

Contributor Author:
It's more misleading to ignore that RDF transformations make this distinction but JSON-LD (without context processing) does not... how do you suggest we proceed?

Member:
Remove the reference to verifiable presentation. References to proofs and to verifiable credentials result in separate graphs, as you say in the text, and that is it.

Member:
@OR13, thoughts?

Contributor Author:
I don't think this thread is actionable.

The context controls the shape of the RDF, not processing it (the latest version published by the authority, such as W3C) will lead to inconsistent processing.

I suggest we resolve this thread, or file separate issues for follow up

Member:
The context controls the shape of the RDF, not processing it (the latest version published by the authority, such as W3C) will lead to inconsistent processing.

I do not understand this statement.

and <code>data integrity proofs</code>.
</p>
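The graph separation discussed here relies on term definitions like the following sketch, modeled on the v1.1 credentials context; it is an assumption for illustration, not a verbatim excerpt of the v2 context:

```javascript
// Illustrative term definition using @container: @graph: values of
// verifiableCredential are placed in their own named graph rather than the
// default graph, a distinction that naive JSON processing cannot see.
const graphTermContext = {
  'verifiableCredential': {
    '@id': 'https://www.w3.org/2018/credentials#verifiableCredential',
    '@type': '@id',
    '@container': '@graph'
  }
};
```
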
<p>
Issuers and Holders might apply these same techniques to unambiguously communicate their intention regarding structured claims in
<a>verifiable credentials</a> and <a>verifiable presentations</a>. Verifiers that do not understand or process JSON-LD, will not
be aware of these differences, and confusion over the intention of the issuer and the holder could lead to unexpected processing behavior by verifiers.
</p>

</section>
</section>

Expand Down